A Private “DropBox” for Your Enterprise

Many companies today still use FTP, SFTP, SSH, and/or HTTP as their daily data sharing platform. Some people collect or generate data and upload it to a data server, while others log in to the server and download the data they are interested in. There are several problems with this model. First, as the amount of data grows, more and more data servers are set up, and users often have a hard time figuring out which server hosts a particular file. Second, with multiple data servers, files are often replicated, either intentionally or by mistake; it is difficult to tell which copy is the most recent, and even more difficult to keep all replicas consistent. Third, when users download files from remote locations over the Internet, they often experience low throughput. An entire business, WAN acceleration, has emerged to address the third problem, but WAN acceleration software does nothing about the first two.

On the other hand, many Internet storage services have emerged in recent years: DropBox, Google Docs, and Amazon S3, just to name a few. These services put users’ files in a storage “cloud” and provide a single namespace. Replication and conflicts are handled transparently to users, which solves the first two problems described above. In fact, thanks to these benefits, online storage services are very popular today.

However, enterprises cannot simply move their data to these online storage providers. There is an immediate security concern, and there are other issues including capacity and cost. Uploading 100TB+ of data to an online storage service is still impractical in most cases, and the cost of keeping it there is very high (e.g., on Amazon S3, 100TB costs approximately $12K per month plus data transfer costs). In addition, the intranet connection is usually faster.

An alternative, and probably better, approach is to run a private data cloud, a “DropBox” inside the company, that manages and serves data to all branches. Such a private data cloud should have the following features:

  • Single name space across multiple servers, even if the servers are located at different locations
  • Allow servers to be added and removed at run time (dynamic scaling)
  • Maintain replicas and take care of consistency between replicas transparently
  • Allow users to control the replication number and location of each file when necessary (e.g., hot files can be replicated more times)

Sector/Sphere meets all of the above requirements. Sector can manage your data across thousands of servers under a single namespace. Sector automatically replicates data files to multiple data centers for fault tolerance and to increase read performance. The location and replication number of each file can be configured individually if necessary and changed dynamically. For example, if new files have more readers than old files, the new files can be replicated at a higher degree, and the replication number can be gradually reduced as users’ interest shifts to even newer files. In addition to all of these benefits, Sector also gives you integrated WAN acceleration through the UDT protocol, another open source project that we developed. UDT has helped millions of users with their daily data transfer needs.

Overall, our system can help very large enterprises share 100+ TB of data every day among their global branches. You may also refer to our previous blog post to start trying the system yourself.

Storage 2.0

While I was discussing new storage and file systems with my friend Chuan Wang, he came up with this term: storage 2.0, which is exactly the catchphrase I had been searching for.

There have been numerous file systems developed since the birth of modern computers. However, among the mature and widely used file systems, there are basically two groups: desktop file systems (ZFS, NTFS, EXT, etc.) and supercomputer file systems (GPFS, Lustre, Panasas, etc.). In the middle, there is chaos.

The “middle” is the area where users have a large number of loosely coupled commodity computers, where a supercomputer file system will not work well, or where an additional middleware layer is required to aggregate individual desktop file systems. There are many file and storage systems in this area too (HDFS, Gluster, and Dynamo, just to list a few), but they are used in more or less specific scenarios, and none of them has become dominant and widely deployed. This is partially because requirements from different users vary drastically, from high consistency to high availability, and, in general, it is hard to provide all of these at once (see Brewer’s CAP theorem).

Yet there are indeed some common characteristics the “middle” file systems tend to (or need to) share. Together, these are the features required by what we would like to call storage 2.0, or the general “middle” file system.

Software-level fault tolerance.

Almost all distributed file systems need to deal with hardware and network failures. Supercomputer file systems usually use RAID as a hardware-level solution. However, hardware failure in commodity clusters is the norm, so a file system must provide fault tolerance within itself, rather than depending on another layer.

Self-healing.

Storage nodes may join and leave frequently, either due to system/network failures or maintenance/upgrades. The file system should continue to work in such situations and hide these internal changes from the clients. In particular, the file system should also automatically re-balance storage space (e.g., when new nodes are added). In short, the file system should keep working even if there is only one node left (although files may be lost if too many nodes are down).

In-storage processing.

If a file system runs on machines where each storage unit is attached to its own CPUs, it is a waste of resources to use those CPUs only for serving data. In-storage processing can significantly accelerate operations such as md5sum and grep, as it not only avoids reading the data out (to the client), but can also execute the commands on multiple files in parallel.

Ability to treat files differently.

A distributed file system, especially one served across wide area networks, usually involves a higher level of versatility and flexibility. Files may come from different sources and serve different purposes. The location, security, and replication factor of each file may need to be treated differently, and the rules should be updated dynamically whenever necessary.

Scalability.

A file system should be able to handle 10,000 storage nodes if necessary, yet we must be aware that the majority of systems never come close to this scale. Extremely high scalability does not come for free. Many highly scalable systems use P2P routing (e.g., a distributed hash table), but consistency and performance are often compromised. Therefore, the file system should offer “reasonable” scalability. It is also worth noting that scalability applies not only to the number of nodes, but also to the number and size of files, and sometimes even to geographical distribution.

Performance.

The file system should support high-performance lookup and provide higher IO throughput than a desktop file system thanks to concurrent data access. Latency, however, is usually higher than in supercomputer file systems.

There are other features that may be less important but can be crucial in certain situations, such as integrated security and integrated monitoring. For a distributed file system, depending on external security and monitoring may not be enough; these features need to be supported within the system.

Storage 2.0 is not meant to replace current desktop and supercomputer file systems, but to fill the void between them. At VeryCloud, we are trying to shape our Sector DFS around these requirements. We hope you will get involved with us, whether you are an open source developer interested in distributed file system development or a potential user who feels that these features meet your specific requirements.

Installing Sector within 5 Minutes

When developing Sector, we follow a rule that a user with reasonable knowledge of Linux should be able to install a working Sector system within 5 minutes. To keep this rule, we intentionally limit the number of dependencies. Currently the software relies only on OpenSSL, with FUSE optional. We also try to limit the amount of configuration required, and most system parameters work with their default values.

Deploying a distributed software system can be “scary” and time-consuming because there are many components and roles involved and a single mistake can prevent the system from working properly.

You don’t need to be scared away by Sector. If you are interested in Sector, the best way to learn it is to get your hands dirty now: install it on a Linux box and start playing with it. You will find it is easier to use than you expected.

Here is a quick guide to install Sector within 5 minutes:

Step 0. You need a Linux box with g++ and either libssl-dev or openssl-devel installed.
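For example, on Debian/Ubuntu or RedHat/CentOS systems these build dependencies can be installed with the distribution’s package manager (package names may vary slightly between releases):

    # Debian/Ubuntu
    sudo apt-get install g++ make libssl-dev

    # RedHat/CentOS
    sudo yum install gcc-c++ make openssl-devel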

Step 1. Download the most recent version from https://sourceforge.net/projects/sector/.

Step 2. Untar/unzip the tarball; you will see a directory ./sector-sphere.

Step 3. cd sector-sphere; then run “make”.

Step 4. Configuration: go to sector-sphere/conf. First, edit master.conf to set the security server address (where you will start the security server; in this case it is the local machine, and the port number can be left at the default) and a local directory to store system information. Second, edit slave.conf to set the master server address (use the local IP and the default port) and a local directory to store Sector files. Finally, edit client.conf to set the master server address, the same as in slave.conf.
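A minimal sketch of this step for a single-machine test setup follows; the actual key names and file format are defined by the sample files shipped in sector-sphere/conf, so adjust those files rather than copying the comments below literally:

    cd sector-sphere/conf

    # master.conf: point the security server entry at this machine (default port)
    #              and set a local directory for system information
    $EDITOR master.conf

    # slave.conf:  point the master server entry at this machine's IP (default port)
    #              and set a local directory where Sector will store files
    $EDITOR slave.conf

    # client.conf: use the same master server address as in slave.conf
    $EDITOR client.conf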

Step 5. cd sector-sphere/security, and start the security server by running ./sserver. The security server manages system security information, including user accounts. Predefined accounts already exist, so there is no need to create a new account for testing purposes.

Step 6. cd sector-sphere/master, and start the master server by running ./start_master.

Step 7. cd sector-sphere/slave, and start a slave node by running ./start_slave.

Step 8. cd sector-sphere/tools, and run ./sector_sysinfo to check the Sector system information.
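Put together, the start-up sequence in Steps 5–8 looks like the following on a single test machine. Each server keeps running after it is started, so launch each one in its own terminal (or send it to the background):

    # run from the top of the unpacked source tree, each server in its own terminal

    cd sector-sphere/security && ./sserver        # security server (accounts, access control)
    cd sector-sphere/master   && ./start_master   # master server
    cd sector-sphere/slave    && ./start_slave    # one slave node (stores the data)

    # finally, verify that everything is up
    cd sector-sphere/tools && ./sector_sysinfo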

That is all. Remember that there is a complete manual at http://sector.sourceforge.net/doc/index.htm where you can explore more details of the system.

A High Performance Data Distribution and Sharing Solution with Sector

Over the years, many users have used UDT to power their high speed data transfer applications and tools. Today, with Sector, we can provide an advanced data distribution and sharing application in addition to the UDT library. This solution currently works on Linux only, but we will port it to Windows in the near future: first the client side, then the server side.

Here are a few simple steps you can follow to set up a free, open source, advanced, high performance, and simple-to-use data distribution and sharing platform:

1. Download Sector from here, then compile and configure the software following the manual.

2. Set up a security server, which allows you to control data access permissions, including user accounts, passwords, IP access control lists, etc. You can also set up an anonymous account for your public data.

3. Set up one or more Sector master servers, which can run on the same computer that hosts the security server and the data (slave server).

4. Set up Sector slave servers on the computers that host your data. Unlike FTP and most commercial applications, which support only a single server, Sector allows you to install servers on multiple computers, even thousands of them, and still provides a uniform namespace for the complete system.

5. Install the client software on your users’ computers and mount the Sector file system as a local directory using the Sector-FUSE module. Your users can browse and access data in Sector just as they would browse and access data in a local directory, using file system commands such as “ls”, “cp”, etc. They will not even notice Sector, although this “local directory” may actually be running on 1000 servers across multiple continents! A sketch of such a session is shown below.
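As a sketch of step 5, assuming the FUSE client built from the Sector source provides a mount executable (the exact binary name, options, and paths below are illustrative; the Sector manual describes the real ones), a user session might look like this:

    # mount the Sector namespace at /mnt/sector (binary name is illustrative)
    mkdir -p /mnt/sector
    ./sector-fuse /mnt/sector

    # from now on, ordinary file system commands operate on the distributed data
    ls /mnt/sector
    cp /mnt/sector/results/output.dat /tmp/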

All data transfers between the clients and the slave servers run on top of UDT. Therefore, high throughput can be sustained even over wide area networks.

If you have any questions, please post them on the Sector project forum.

Sector vs. Hadoop

When I introduce Sector/Sphere to people I meet at conferences, I usually start with one sentence: “Sector is a system similar to Hadoop,” because many people know Hadoop and understand, more or less, how it works, while Sector provides similar functionality. This claim, however, is not very accurate, as there are many important differences between the two systems.

Sector is not simply a direct implementation of GFS/MapReduce. In fact, when I started to work on Sector in 2005, I had not yet read the Google paper, and I was not aware of Hadoop until 2007. Sector originated from a content distribution system for very large scientific datasets (the Sloan Digital Sky Survey). The current version of Sector still supports efficient data access and distribution over wide area networks, a goal that was not considered by the GFS/Hadoop community. Unlike GFS, Sector does not split files. On the one hand, this limits the size of files that can be stored in the Sector file system and hence the system’s usability. On the other hand, it greatly improves data transfer and processing performance when proper file sizes are used.

Sector – to be accurate, Sphere, as part of Sector – supports arbitrary user-defined functions (UDFs) that can be applied to any data segment (a record, a group of records, a file, etc.) and allows the results to be written to independent files or sent to multiple bucket files according to a user-defined key. The UDF model turns out to be equivalent to the MapReduce model: each UDF can simulate a Map operation, while organizing the output according to keys can simulate a Reduce operation. Note that the “key” in a Sphere UDF is not part of a data record; it is used only to choose the output destination. While MapReduce treats each record as a <key, value> pair, Sphere sees all data as binary and leaves the specific processing to the UDF.

The table below compares the Sphere UDF model with MapReduce. You can rewrite any MapReduce computation using Sphere UDFs. Sphere uses a persistent record index instead of a run-time parser. The Map and Reduce operations can each be replaced by one or more UDFs. Finally, the Sphere output can be written into Sector files. A more detailed list of the technologies used in Sector can be found on the Sector website.

Sphere                  | MapReduce
------------------------|----------------------
Record Offset Index     | Parser / Input Reader
UDF                     | Map
Bucket                  | Partition
(none)                  | Compare
UDF                     | Reduce
Sector file output      | Output Writer

Overall, Sector performs 2 to 20 times faster than Hadoop in our benchmark applications. It is worth trying, especially if you are a C++ developer.

Sector 2.0

Sector version 2.0 is in the QA stage and will be released soon. This is a major milestone for the Sector project: 1) Sector 2.x is ready for production use; 2) the code structure has been redesigned to accept contributions from a larger community.

Technical Improvements

Since the last version, 1.24a, we have added several new features, including on-disk metadata and in-memory objects. Previously, Sector kept all metadata in memory, which is very fast but may limit the number of files the system can support. The new on-disk metadata supports a much larger number of files. Both metadata systems exist in version 2.0, with the default set to the in-memory one for performance reasons (we expect that in the majority of systems the in-memory metadata structure will be enough to hold all the file information).

In fact, in Sector 2.0, a new metadata structure can be supported easily by inheriting from a base C++ class. Switching between metadata implementations can be done via the configuration files. In addition, because all the metadata systems implement the same interface, it is possible to have different metadata structures on different nodes.

Version 2.0 introduces in-memory objects. A UDF can create an in-memory object (a pointer to an allocated data structure in memory) that another UDF can then access. This can significantly accelerate certain iterative algorithms. An in-memory object can be released by a UDF when it is no longer needed.

Code Structure Reorganization

We have reorganized the code structure so that Sector can be developed by a large development group. Most functional components, including security control, metadata, master synchronization, services, and the client API, have been separated from each other. These parts use internal protocols to communicate with each other. In this way, each part can be independently developed or improved as long as it implements the protocol interface.

In version 2.0, only a single header file needs to be included when programming against Sector. This makes a standard installation easy (one header file and one library file in the system directories).

During the reorganization, we have also improved the code quality. The new, clearer, and more loosely coupled structure makes Sector less prone to design and programming bugs.

Plans for 2010

We will continue to improve the software by providing better performance, more reliable code, and more detailed documentation. Minor versions (2.1, 2.2, etc.) may be released once a quarter.