The storage sector is undergoing a profound transformation, driven by factors such as security, speed, efficiency and cost savings. The volume of data to be stored is expected to grow roughly twentyfold by 2030.

This development will pose enormous challenges for data centers and IT processes, and radically new technologies will be needed.

To stay ahead of the curve in storage, keep an eye on the following new technologies.


DNA Storage

Using DNA as a data storage medium promises far higher capacity and greater stability than conventional storage architectures. DNA storage makes it possible to store data at the molecular level: the information is archived directly in DNA molecules.
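
To make the idea concrete, here is a minimal Python sketch of the basic encoding step, mapping each two-bit pair of a byte to one of the four nucleotides (A, C, G, T). Real DNA storage systems use far more elaborate, error-correcting encodings; this only illustrates the principle.

```python
# Minimal sketch: map each 2-bit pair of a byte stream to a nucleotide.
# Real systems add error correction and avoid long homopolymer runs;
# this only shows the basic idea of molecular-level encoding.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA sequence, 4 bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # high bits first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(sequence: str) -> bytes:
    """Reverse the encoding: 4 bases back into one byte."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    payload = b"hello"
    strand = encode(payload)
    print(strand)                  # CGGACGCCCGTACGTACGTT
    assert decode(strand) == payload   # encoding is lossless
```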

The advantages of DNA-based data storage are its density and stability: one gram of DNA can store about 215 petabytes of data, with a minimum lifetime of 500 years. The main disadvantage is that UV radiation can destroy the DNA.

It is important to note that DNA storage is a long-term trend: while there has been significant progress in the field, it will still take a few years for the technology to achieve a breakthrough. There is currently no fixed timetable for when DNA storage media will be available; optimists hope it could arrive as early as 2030.

Current DNA sequencing and synthesis technologies are too expensive and too slow to compete with conventional storage infrastructures.

Access latency is still high, currently measured in minutes to hours, and maximum write throughput is in the kilobits per second. A DNA drive that wants to compete with tape-based archiving must support write throughput in the gigabits per second. That would require speeding up DNA synthesis – the writing process – by six orders of magnitude.
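
A quick back-of-envelope calculation in Python shows where the six-orders-of-magnitude figure comes from; the throughput values are the rough magnitudes cited above, not measurements.

```python
# Back-of-envelope: how far current DNA write throughput is from a
# tape-class archiving target. Figures are rough orders of magnitude,
# not measurements.
import math

current_write_bps = 1e3     # ~kilobits per second (DNA synthesis today)
target_write_bps = 1e9      # ~gigabits per second (tape-class archiving)

speedup = target_write_bps / current_write_bps
print(f"required synthesis speedup: {speedup:.0e}x "
      f"(~{int(math.log10(speedup))} orders of magnitude)")
# required synthesis speedup: 1e+06x (~6 orders of magnitude)
```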

The reading process – DNA sequencing – would in turn have to run about two orders of magnitude faster.

But even if engineers can master these problems, there is still a big hurdle to overcome: tape storage media cost between 16 and 20 dollars per terabyte, while DNA synthesis and sequencing cost on the order of 800 million dollars per terabyte.


Storage Security

Data security is a top priority in virtually all companies. Yet comprehensive data protection – both at rest and in transit – is often neglected.

Today, many companies use storage both in on-premises data centers and in public and private cloud environments. In the age of ransomware, investing in air-gapped data backups is essential: only then are the data copies safe from unauthorized access in the event of a major attack.

Real-time analytics for protecting primary NAS storage systems are available in products such as Superna Ransomware Defender for Dell OneFS and NetApp Cloud Insights with Cloud Secure for ONTAP.

Block storage users can protect critical data with multi-factor authentication and protected snapshots. While storage security tools are steadily maturing, many companies are already working proactively to adopt storage products with built-in security features.
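
As one concrete illustration of MFA-protected deletion, here is a hedged boto3 sketch for S3-style object storage (block-array interfaces are vendor-specific); the bucket name and MFA device serial are placeholders.

```python
# Sketch: require multi-factor authentication before object versions can
# be deleted from an S3 bucket. Bucket name and MFA serial are
# placeholders; on AWS, enabling MFA Delete requires the root account's
# MFA device.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="critical-backups",      # placeholder bucket
    VersioningConfiguration={
        "Status": "Enabled",        # keep old versions of every object
        "MFADelete": "Enabled",     # deletions then require an MFA token
    },
    # "<device-arn> <current-6-digit-code>" from the MFA device
    MFA="arn:aws:iam::123456789012:mfa/root-device 123456",
)
```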


SSD data reduction

Data reduction is the minimization of the capacity needed to store data. This technology can increase storage efficiency and reduce costs. Data reduction techniques such as compression and deduplication are already used in various storage systems but are not yet widespread for solid-state drives.
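
To illustrate the two techniques, here is a minimal Python sketch of lossless compression and block-level deduplication; real arrays perform this inline in controller firmware, so this only shows the principle.

```python
# Sketch of the two data-reduction techniques named above: lossless
# compression (zlib) and block-level deduplication via content hashing.
import hashlib
import zlib

BLOCK_SIZE = 4096

def reduce_data(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Split into blocks, dedupe identical blocks, compress each once."""
    store: dict[str, bytes] = {}    # fingerprint -> compressed block
    layout: list[str] = []          # ordered fingerprints, one per block
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:         # only store the first copy
            store[fp] = zlib.compress(block)
        layout.append(fp)
    return store, layout

def restore_data(store: dict[str, bytes], layout: list[str]) -> bytes:
    """Lossless reconstruction: decompress blocks in layout order."""
    return b"".join(zlib.decompress(store[fp]) for fp in layout)

if __name__ == "__main__":
    data = b"A" * 8192 + b"unique tail"   # two identical blocks + remainder
    store, layout = reduce_data(data)
    stored = sum(len(b) for b in store.values())
    print(f"logical {len(data)} B -> stored {stored} B, {len(store)} blocks")
    assert restore_data(store, layout) == data   # must be lossless
```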

To ensure reliability, compression must be lossless. This factor poses challenges for SSD manufacturers. Many all-flash array storage vendors offer options for inline compression, but the technologies are often proprietary.

This situation is likely to improve in the near future as SSD vendors work to deliver maximum capacity at the lowest possible price. In addition, SSD manufacturers will target the PCI Express 4.0 specification to optimize bandwidth and read and write speeds.


Data Transfer and Cloud Storage

To understand how storage is used in the public cloud, it is essential to map and model data usage across the entire enterprise application landscape.

Since public cloud storage solutions typically charge for data egress, as well as for data transit between zones and regions, the ability to predict the extent of data movement is critical to managing the cost and effectiveness of public cloud storage.

Ideally, organizations are aware of the implications before splitting applications with code dependencies between public cloud and on-premises environments.
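
As a simple illustration of such cost prediction, the following Python sketch sums expected charges per transfer path; the per-GB rates are made-up placeholders, not any provider's actual prices.

```python
# Sketch: predicting transfer charges before moving data between
# environments. The per-GB rates are hypothetical placeholders;
# substitute your provider's actual price list.

RATES_USD_PER_GB = {
    "egress_internet": 0.09,   # placeholder tariff
    "cross_region":    0.02,
    "cross_zone":      0.01,
}

def monthly_transfer_cost(gb_by_path: dict[str, float]) -> float:
    """Sum expected charges for each transfer path, in GB per month."""
    return sum(gb_by_path[path] * RATES_USD_PER_GB[path]
               for path in gb_by_path)

# e.g. an application split between cloud and on-premises environments:
usage = {"egress_internet": 500.0, "cross_region": 2000.0, "cross_zone": 800.0}
print(f"estimated transfer bill: ${monthly_transfer_cost(usage):,.2f}/month")
# estimated transfer bill: $93.00/month
```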

Storage vendors have improved their analytics capabilities here: HPE InfoSight, NetApp ActiveIQ and Pure Storage Pure1 Meta are among the tools that give organizations more comprehensive storage insights.

Object Storage

In principle, there are three types of storage systems: object, block and file storage. Of these, object storage is the only option that offers low cost and high performance at exabyte scale.

The transformation of the storage world is being driven by cloud-native applications such as databases, analytics, data lakes, artificial intelligence and machine learning technologies. These applications are driving the adoption of object storage as primary storage.

Object storage has been widely used since the early 2000s. Still, it is only in recent years that modern hybrid storage systems, performance optimizations in NVMe SSDs and significant price reductions have made it economically feasible to deploy on a large scale.

Immutable Backups

More and more companies are becoming interested in immutable backups: a method of protecting data that ensures it cannot be deleted, encrypted or modified.

Such immutable storage can be applied to hard disk, SSD and tape media, as well as cloud storage. Moreover, it is simple and convenient to use: the user creates a policy file describing the desired immutability rules.
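
As one possible illustration using a public cloud API, the boto3 sketch below writes a backup object under an S3 Object Lock retention policy, so it cannot be deleted or modified until the retention date; the bucket and file names are placeholders, and the bucket must have been created with Object Lock enabled.

```python
# Sketch: write a backup object that cannot be deleted or modified until
# its retention date (S3 Object Lock). Names are placeholders; the
# bucket must have Object Lock enabled at creation time.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="immutable-backups",     # placeholder bucket
    Key="db/2024-06-01.dump",       # placeholder backup object
    Body=open("backup.dump", "rb"),
    ObjectLockMode="COMPLIANCE",    # retention cannot be shortened
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```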

Immutable backups are one of the most reliable ways to protect data against any kind of deletion or modification. This technology is a lifeline in an increasingly data-driven business environment where threats are constantly evolving.

Time Series Databases

A time-series database (TSDB) is designed to support high-speed reads and writes. TSDBs that run on object storage thus open up new levels of flexibility for existing object storage solutions.

In particular, the storage layouts and indexes in these TSDBs are designed to take advantage of scalability, resilience and cost minimization while mitigating the impact of latency.
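A minimal Python sketch of such a layout: points are grouped into hourly chunks addressed by time-partitioned object keys, so a range query only fetches a few large objects. The key scheme is illustrative, not any particular product's format.

```python
# Sketch of a time-partitioned object layout like those used by TSDBs
# that sit on object storage: one object per series per hour, so a
# time-range query maps directly to a small set of object keys.
from datetime import datetime, timezone

def chunk_key(series: str, ts: float) -> str:
    """Object key for the hourly chunk that owns timestamp `ts`."""
    hour = datetime.fromtimestamp(ts, tz=timezone.utc)
    return f"{series}/{hour:%Y/%m/%d/%H}.chunk"

def keys_for_range(series: str, start: int, end: int) -> list[str]:
    """All chunk keys a query over [start, end] needs to read."""
    keys, t = [], start - start % 3600    # align to the hour
    while t <= end:
        keys.append(chunk_key(series, t))
        t += 3600
    return keys

print(keys_for_range("cpu.load", 1717200000, 1717207200))
# ['cpu.load/2024/06/01/00.chunk', 'cpu.load/2024/06/01/01.chunk',
#  'cpu.load/2024/06/01/02.chunk']
```
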

TSDBs running on object storage are aimed at enterprises, managed service providers and other organizations that collect large amounts of time-series data for observation and analysis purposes. Robust time-series databases of this kind are already available, including Cortex, Mimir and InfluxDB IOx.

All major cloud providers offer widely used object storage services, and open source solutions such as MinIO and Ceph offer compatible APIs.
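
Because the APIs are S3-compatible, the same client code works against MinIO or Ceph as against a public cloud; in this hedged boto3 sketch, the endpoint and credentials are placeholders for a local MinIO instance.

```python
# Sketch: the same S3-style calls work against an S3-compatible store
# such as MinIO. Endpoint and credentials are placeholders for a local
# MinIO instance started with its default settings.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # local MinIO, placeholder
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"object storage")
obj = s3.get_object(Bucket="demo", Key="hello.txt")
print(obj["Body"].read())   # b'object storage'
```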
