Storage 2.0


While I was discussing new storage and file systems with my friend Chuan Wang, he came up with the term “storage 2.0”, which is exactly the catch phrase I have spent a while searching for.

There have been numerous file systems developed since the birth of modern computers. However, among the mature and widely used ones, there are basically two groups: the desktop file systems (ZFS, NTFS, EXT, etc.) and the supercomputer file systems (GPFS, Lustre, Panasas, etc.). In the middle, there is chaos.

The “middle” is an area where users have a large number of loosely coupled commodity computers, where a supercomputer file system will not work well, or where an additional middleware layer is required to aggregate individual desktop file systems. There are many file or storage systems in this area too (HDFS, Gluster, Dynamo, just to list a few), but they are used in more or less specific scenarios and none of them has become dominant and widely deployed. This is partially because requirements from different users can vary drastically, from high consistency to high availability, and, in general, it is hard to provide all of these properties at once (see Brewer’s CAP theorem).

Yet there are indeed some common characteristics the “middle” file systems tend to (or need to) share. Together, these are the features required by what we would like to call storage 2.0, or the general “middle” file system.

Software-level fault tolerance.

Almost all distributed file systems need to deal with hardware and network failures. Supercomputer file systems usually use RAID as a hardware-level solution. However, hardware failure in a commodity cluster is the norm, so the file system must provide fault tolerance within itself rather than depend on another layer.
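As an illustration, here is a minimal sketch in Python of the kind of software-level replication that replaces RAID; the function names and the `send` callback are hypothetical stand-ins for whatever RPC the file system uses. Every block is written to several distinct nodes, so losing any single machine does not lose data.

```python
import random

REPLICATION_FACTOR = 3  # assumed default; a real system makes this configurable


def place_replicas(live_nodes, k=REPLICATION_FACTOR):
    """Pick k distinct live nodes to hold copies of a block.

    A real placement policy would also consider rack/network topology and
    free space; random choice is enough to show the idea.
    """
    k = min(k, len(live_nodes))  # degrade gracefully on small clusters
    return random.sample(list(live_nodes), k)


def write_block(block_id, data, live_nodes, send):
    """Write one block to every chosen replica node via the `send` RPC stub."""
    targets = place_replicas(live_nodes)
    for node in targets:
        send(node, block_id, data)
    return targets
```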

Self-healing.

Storage nodes may join and leave frequently, either due to system/network failures or due to maintenance and upgrades. The file system should continue to work in such situations and hide these internal changes from the clients. In particular, the file system should also automatically re-balance storage space (e.g., when new nodes are added). In short, the file system should keep working even if there is only one node left (although files may be lost if too many nodes are down).
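Continuing in the same sketchy style (the metadata table and the `copy` callback below are hypothetical), the self-healing loop can be pictured as follows: when a node is declared dead, every block whose replica count has dropped below target is copied again onto a surviving node, without any client being aware of it.

```python
TARGET_REPLICAS = 3  # assumed target replica count

# Hypothetical master-side metadata: block_id -> set of nodes holding a copy.
block_locations = {}


def handle_node_loss(dead_node, live_nodes, copy):
    """Re-replicate every block that lost a copy on `dead_node`.

    `copy(src, dst, block_id)` stands in for the real transfer RPC.  A node
    joining would trigger a similar pass that moves blocks from the most
    loaded nodes onto the newcomer to re-balance storage space.
    """
    for block_id, holders in block_locations.items():
        if dead_node not in holders:
            continue
        holders.discard(dead_node)
        missing = TARGET_REPLICAS - len(holders)
        candidates = [n for n in live_nodes if n not in holders]
        for dst in candidates[:max(missing, 0)]:
            if holders:  # need at least one surviving copy to clone from
                copy(next(iter(holders)), dst, block_id)
                holders.add(dst)
```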

In-storage processing.

If a file system runs on machines whose storage is attached to CPUs, it is a waste of resources to use those CPUs only for serving data. In-storage processing can significantly accelerate operations such as md5sum and grep: it not only avoids reading the data out (to the client), but can also execute the commands on multiple files in parallel.
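As a sketch of what in-storage processing could look like from the client side (the `locate` and `run_on_node` callbacks are assumptions, not Sector’s actual API), the command is shipped to the nodes holding the files, runs there in parallel, and only the matching lines travel back over the network.

```python
from concurrent.futures import ThreadPoolExecutor


def grep_in_storage(pattern, paths, locate, run_on_node):
    """Run grep next to the data instead of reading the files out.

    `locate(path)` returns the node holding a file, and
    `run_on_node(node, argv)` executes a command there and returns its
    output; both are placeholders for the file system's RPC layer.
    """
    def one(path):
        node = locate(path)
        return path, run_on_node(node, ["grep", pattern, path])

    # Files on different nodes are processed in parallel; only the results,
    # not the file contents, cross the network.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(one, paths))
```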

Ability to treat files differently.

A distributed file system, especially one served across wide area networks, usually requires a higher level of versatility and flexibility. Files may come from different sources and serve different purposes. The location, security, and replication factor of each file may need to be treated differently, and the rules should be updated dynamically whenever necessary.
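Here is a sketch of per-file treatment, assuming a simple rule table keyed by path pattern (this is not Sector’s real configuration format): each rule sets the replication factor and a location constraint, the first match wins, and the table can be edited at run time to change how a class of files is handled.

```python
import fnmatch

# Hypothetical rule table: (path pattern, replication factor, location constraint).
RULES = [
    ("/archive/*", 2, "any"),        # cold data: fewer copies, stored anywhere
    ("/hot/*.log", 4, "same-rack"),  # hot logs: more copies, kept close together
    ("*",          3, "any"),        # default for everything else
]


def policy_for(path, rules=RULES):
    """Return (replication_factor, location_constraint) for a path; first match wins."""
    for pattern, replicas, location in rules:
        if fnmatch.fnmatch(path, pattern):
            return replicas, location
    return 3, "any"


# Rules can be changed dynamically, e.g. when a dataset suddenly becomes popular:
# RULES.insert(0, ("/archive/popular-dataset/*", 4, "any"))
```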

Scalability.

A file system should be able to handle 10,000 storage nodes if necessary, yet we must be aware that the majority of systems never come close to that scale. Extremely high scalability often does not come for free: many highly scalable systems use P2P routing (e.g., a distributed hash table), but consistency and performance are often compromised. Therefore, the file system should have “reasonable” scalability. It is also worth noting that scalability applies not only to the number of nodes, but also to the number and size of files, and sometimes even to geographical distribution.
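To make the P2P-routing remark concrete, here is a minimal consistent-hashing ring of the kind a distributed hash table uses to place data (no virtual nodes, no replication, purely illustrative). Adding or removing a node remaps only the keys between that node and its ring predecessor, which is what lets such designs grow to thousands of nodes without a central lookup table, at the cost of the weaker consistency guarantees mentioned above.

```python
import bisect
import hashlib


def _point(key):
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class HashRing:
    """Minimal consistent-hashing ring (no virtual nodes, no replication)."""

    def __init__(self, nodes=()):
        self._ring = sorted((_point(n), n) for n in nodes)

    def add(self, node):
        bisect.insort(self._ring, (_point(node), node))

    def remove(self, node):
        self._ring.remove((_point(node), node))

    def node_for(self, key):
        """The first node clockwise from the key's position owns it."""
        points = [p for p, _ in self._ring]
        i = bisect.bisect(points, _point(key)) % len(self._ring)
        return self._ring[i][1]


# Usage with made-up node names:
# ring = HashRing(["node-a", "node-b", "node-c"])
# owner = ring.node_for("/hot/app.log")  # -> one of the three nodes
```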

Performance.

The file system should support high-performance lookup and, thanks to concurrent data access, provide higher aggregate IO throughput than a desktop file system. However, latency is usually higher than in supercomputer file systems.

There are other features that may be less important but can be crucial in certain situations, such as integrated security and integrated monitoring. For distributed file systems, relying on external security and monitoring may not be enough; these features need to be supported within the system.

Storage 2.0 is not meant to replace current desktop and supercomputer file systems, but to fill the void between them. At VeryCloud, we are trying to shape our Sector DFS around these requirements. We hope you will get involved with us, whether you are an open source developer interested in distributed file system development or a potential user who feels that these features meet your specific requirements.


About Yunhong Gu
Yunhong is a computer scientist and open source software developer. He is the architect and lead developer of the open source software UDT, a versatile application-level UDP-based data transfer protocol that outperforms TCP in many cases, and Sector/Sphere, a cloud computing software system that supports distributed data storage, sharing, and processing. Yunhong earned a PhD in Computer Science from the University of Illinois at Chicago in 2005.
