Random Thoughts on Internet Bandwidth

A decade ago, when I worked temporarily for a company in Beijing, I had an interesting conversation with the CEO about Internet bandwidth (I intentionally avoid the phrase “Internet speed”, as it nowadays tends to refer as much to latency as to bandwidth). It was a casual talk, but I still clearly remember two arguments we made. He said: “The Internet is like a highway: no matter how wide you build the road, there will always be more cars than its capacity can handle.” It was hard to argue with him about this at a time when we were all using dial-up modems and the company, by the way, developed Internet video conferencing products.

Today there is still no clear answer as to whether bandwidth supply will ever catch up with demand. On the one hand, 100Gb/s Ethernet is being deployed and will probably become popular soon (recall how quickly 1Gb/s and 10Gb/s did), and even cellular networks can easily beat the wired home Internet connections of just a few years ago. Do not forget that Google Fiber is going to be available in Kansas City soon: that is 1Gb/s to the home. On the other hand, applications like HDTV over the Internet are probably going to become mainstream as well. So will Internet bandwidth one day become so abundant that we no longer need to worry about it? Consider the following facts from the last decade:

  • CPU performance roughly doubled every 18 months (the popular reading of Moore’s Law).
  • Storage capacity on a typical PC increased roughly from 10GB (2001) to 1TB (2010).
  • Home Internet bandwidth increased roughly from 56Kb/s dial-up to 5Mb/s broadband.

The growth of Internet bandwidth is not yet a clear winner over disk storage (56Kb/s to 5Mb/s is roughly a 90x increase, about the same order of magnitude as the 100x growth in storage), but my bet is that in the next decade, bandwidth will no longer be one of our top concerns when building new Internet applications.

The other argument we both agreed on, and which is already happening, was that one day we would be able to hold an Internet video conference on a commercial flight.

A Private “DropBox” for Your Enterprise

Companies today still use FTP, SFTP, SSH, and/or HTTP as their daily data sharing platform. Some people collect or generate data and upload it to a data server, while others log in to the server and download the data they are interested in. There are several problems with this model. First, as the amount of data increases, more and more data servers are set up, and users often have a hard time figuring out which server hosts a particular file. Second, when there are multiple data servers, data files are often replicated, either intentionally or by mistake; it is difficult to tell which version is the most recent, and even more difficult to keep all replicas consistent. Third, when users download data files from remote locations via the Internet, they often experience low throughput. There is now an entire business, WAN acceleration, built around the third problem, but WAN acceleration software does nothing about the first two.

On the other hand, many Internet storage services have emerged in recent years: DropBox, Google Docs, and Amazon S3, just to name a few. These services put users’ files in a storage “cloud” and provide a single namespace. Replication and conflicts are handled transparently to users. This solves the first two problems described above, and largely because of these benefits, online storage services are very popular today.

However, enterprises cannot simply move their data to these online storage providers. There is an immediate security concern, and then there are other issues, including capacity and cost. Uploading 100TB+ of data to an online storage service is still impractical in most cases, and the cost of keeping it there is very high (e.g., on Amazon S3, 100TB is roughly 100,000GB, which at about $0.12 per GB per month comes to approximately $12K per month, plus data transfer costs). In addition, the intranet connection inside a company is usually faster than the link to an external provider.

An alternative, and probably better, approach is to run a private “DropBox”-style data cloud inside the company that manages and serves data to all branches. Such a private data cloud should have the following features:

  • Single namespace across multiple servers, even if the servers are in different geographical locations
  • Allow servers to be added and removed at run time (dynamic scaling)
  • Maintain replicas and take care of consistency between replicas transparently
  • Allow users to control the replication number and location of each file when necessary (e.g., hot files can be replicated more times)

Sector/Sphere meets all of the above requirements. Sector can manage your data across thousands of servers with a single namespace. Sector automatically replicates data files to multiple data centers for fault tolerance and to increase read performance. Data location and replication number can be configured at the per-file level if necessary and changed dynamically. For example, if new files have more readers than old files, the new files can be replicated at a higher degree, and their replication number can be gradually reduced as users become more interested in even newer files. In addition to all of these benefits, Sector also gives you integrated WAN acceleration through the UDT protocol, another open source project we develop. UDT has helped millions of users with their daily data transfer needs.

Overall, our system can support very large enterprises sharing 100+TB of data every day among their global branches. You may also refer to our previous blog post to start trying the system yourself.

Storage 2.0

While I was discussing new storage and file systems with my friend Chuan Wang, he came up with this term: storage 2.0, which is exactly the catchword I had been searching for.

Numerous file systems have been developed since the birth of modern computers. Among the mature and widely used ones, however, there are basically two groups: the desktop file systems (ZFS, NTFS, EXT, etc.) and the supercomputer file systems (GPFS, Lustre, Panasas, etc.). In the middle, there is chaos.

The “middle” is the area where users have a large number of loosely coupled commodity computers, where supercomputer file systems do not work well, or where an additional middleware layer is required to aggregate individual desktop file systems. There are many file and storage systems in this area too (HDFS, Gluster, Dynamo, just to list a few), but they target more or less specific use scenarios, and none of them has become dominant and widely deployed. This is partially because requirements from different users vary drastically, from high consistency to high availability, and, in general, it is hard to provide all of these properties at once (see Brewer’s CAP Theorem).

Yet there are indeed some common characteristics the “middle” file systems tend to (or need to) share. Together, these are the features required by what we would like to call storage 2.0, or the general “middle” file system.

Software-level fault tolerance.

Almost all distributed file systems need to deal with hardware and network failures. Supercomputer file systems usually rely on RAID as a hardware-level solution. However, hardware failure in commodity clusters is the norm, so the file system must provide fault tolerance within itself, rather than depending on another layer.

Self-healing.

Storage nodes may join and leave frequently, whether due to system or network failures or to maintenance and upgrades. The file system should continue to work in such situations and hide these internal changes from the clients. In particular, it should automatically re-balance storage space (e.g., when new nodes are inserted). In short, the file system should keep working even if only one node is left (although files may be lost if too many nodes are down).

In-storage processing.

If a file system runs on nodes whose storage is attached to CPUs, it is a waste of resources to use those CPUs only for serving data. In-storage processing can significantly accelerate operations such as md5sum and grep: it not only avoids reading the data out to the client, but can also execute the command on multiple files in parallel.

Ability to treat files differently.

A distributed file system, especially one served across wide area networks, usually has to handle a higher level of versatility and flexibility. Files may come from different sources and serve different purposes. The location, security, and replication factor of each file may need to be treated differently, and the rules should be updatable dynamically whenever necessary.

Scalability.

A file system should be able to handle 10,000 storage nodes if necessary, yet we must be aware that the majority of systems never come close to this scale. Extremely high scalability does not come for free: many highly scalable systems use P2P routing (e.g., distributed hash tables), but consistency and performance are often compromised. The file system should therefore aim for “reasonable” scalability. It is also worth noting that scalability applies not only to the number of nodes, but also to the number and size of files, and sometimes even to geographical distribution.

Performance.

The file system should support high-performance lookup and provide higher IO throughput than a desktop file system thanks to concurrent data access. Latency, however, is usually higher than in supercomputer file systems.

There are other features that may be less important but can be crucial in certain situations, such as integrated security and integrated monitoring. For distributed file systems, relying on external security and monitoring tools may not be enough; these features need to be supported within the system.

Storage 2.0 is not meant to replace current desktop and supercomputer file systems, but to fill the void between them. At VeryCloud, we are trying to shape our Sector DFS around these requirements. We hope you will get involved, whether you are an open source developer interested in distributed file system development or a potential user who feels that these features meet your specific requirements.

Installing Sector within 5 Minutes

When developing Sector, we follow a rule that a user with reasonable knowledge of Linux should be able to install a working Sector system within 5 minutes. To keep this rule, we intentionally limit the number of dependencies: currently the software relies only on OpenSSL, with FUSE optional. We also try to limit the amount of configuration required, and most system parameters work with their default values.

Deploying a distributed software system can be “scary” and time-consuming because there are many components and roles involved, and a single mistake can prevent the system from working properly.

Don’t be scared away by Sector, though. If you are interested in it, the best way to learn is to get your hands dirty now: install it on a Linux box and start playing with it. You will find it is easier to use than you thought.

Here is a quick guide to install Sector within 5 minutes:

Step 0. You need a Linux box with g++ and the OpenSSL development headers (libssl-dev or openssl-devel, depending on your distribution).

Step 1. Download the most recent version from https://sourceforge.net/projects/sector/.

Step 2. Untar/unzip the tarball; you will see a directory ./sector-sphere.

Step 3. cd sector-sphere, then run “make”.

Step 4. Configuration: go to sector-sphere/conf. First, edit master.conf to set the security server address (where you will start the security server; in this case it is the local machine, and the port number can stay at its default) and a local directory to store system information. Second, edit slave.conf to set the master server address (use the local IP and the default port) and a local directory to store Sector files. Finally, edit client.conf to set the master server address, the same as in slave.conf.

Step 5. cd sector-sphere/security, then start the security server by running ./sserver. The security server manages system security information, including user accounts. Predefined accounts already exist, so there is no need to create a new account for testing purposes.

Step 6. cd sector-sphere/master, then start the master server by running ./start_master.

Step 7. cd sector-sphere/slave, then start a slave node by running ./start_slave.

Step 8. cd sector-sphere/tools, then run ./sector_sysinfo to check the Sector system information.

That is all. Remember that there is a complete manual at http://sector.sourceforge.net/doc/index.htm where you can explore more details of the system.

Nine Years of UDT

This August marks the 9th year of UDT’s development. The project originated from SABUL (Simple Available Bandwidth Utilization Library), which I worked on together with Xinwei Hong (now at Microsoft) between 2001 and 2003. Around the year 2000, researchers had noticed that stock TCP (NewReno) was not efficient on the rapidly spreading OC-12 and 1GE networks connecting research labs around the world. SABUL was one of the first research projects to address the problem.

SABUL used UDP to transfer large data blocks and TCP to transfer control packets (e.g., for reliability). It ran very well for our own applications on private networks, but there were three areas that would require significant future work. The congestion control algorithm was not suitable for shared networks. The use of TCP for the control channel limited the protocol’s design choices. And the API was not friendly to generic application development.

In the second half of 2002, I started to design a new protocol to remove these limitations. This protocol was later named UDT by Bob because it is built completely on top of UDP and uses a single UDP socket for both data and control packets. The first version of UDT was released in early 2003. Compared to SABUL, UDT provides a streaming-style API that simulates TCP socket semantics, which was an important step toward gaining a large user community.
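To give a flavor of that streaming-style API, here is a minimal client sketch using UDT’s socket calls, which mirror the BSD socket API under the UDT namespace. The address and port below are placeholders, and error handling is reduced to a single check; see udt.h in the UDT distribution for the full API.

    // Minimal UDT client sketch: connect, send, close, just like a TCP socket.
    // The server address and port are placeholders.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstring>
    #include <iostream>
    #include <udt.h>

    int main()
    {
        UDT::startup();                                          // initialize the UDT library

        UDTSOCKET client = UDT::socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in serv_addr;
        std::memset(&serv_addr, 0, sizeof(serv_addr));
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_port = htons(9000);                        // placeholder port
        inet_pton(AF_INET, "192.168.1.1", &serv_addr.sin_addr);  // placeholder address

        if (UDT::ERROR == UDT::connect(client, (sockaddr*)&serv_addr, sizeof(serv_addr))) {
            std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << std::endl;
            return 1;
        }

        const char* msg = "hello over UDT";
        UDT::send(client, msg, std::strlen(msg), 0);             // same semantics as TCP send()

        UDT::close(client);
        UDT::cleanup();
        return 0;
    }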

I spent about one year investigating the congestion control algorithm. Having considered many approaches, I chose to modify the traditional loss-based AIMD algorithm, which had worked stably for TCP. Delay-based approaches have several attractions, especially that they are less affected by non-congestion-related packet loss, but they face a fundamental problem: learning the “base” delay value. An inaccurate base value can make a delay-based algorithm either too aggressive, if the base is overestimated, or so “friendly” that co-existing flows simply starve it.

UDT uses the packet-pair technique to estimate the available bandwidth, so that it can rapidly probe up to large bandwidths while still sharing them fairly, and in a friendly way, with other flows.
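As a rough illustration of the packet-pair idea (this is not UDT’s actual code): two probe packets sent back to back are spread apart by the bottleneck link, so the gap measured at the receiver approximates the bottleneck capacity.

    // Illustration of the packet-pair principle, not UDT's implementation.
    #include <cstdio>

    // Estimate bandwidth (bits/s) from the receiver-side arrival times of two
    // back-to-back probe packets of packet_bytes bytes each.
    double packet_pair_estimate(double first_arrival_s, double second_arrival_s,
                                int packet_bytes)
    {
        double gap = second_arrival_s - first_arrival_s;   // dispersion in seconds
        if (gap <= 0.0)
            return 0.0;                                    // bad sample (clock noise): discard
        return (packet_bytes * 8.0) / gap;
    }

    int main()
    {
        // 1500-byte packets arriving 12 microseconds apart suggest about 1 Gb/s.
        double bps = packet_pair_estimate(0.0, 12e-6, 1500);
        std::printf("estimated bottleneck bandwidth: %.0f Mb/s\n", bps / 1e6);
        return 0;
    }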

In addition to its native congestion control algorithm, I also implemented most of the major congestion control algorithms available by 2005. To this end, I made UDT a composable framework in which a new control algorithm can be implemented simply by overriding several callback functions. Because of feedback delay and unknown coexisting flows, there is no “perfect” control algorithm: each may work well in some situations and behave poorly in others, so it helps that UDT can be quickly customized to suit a specific environment.
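As a sketch of what that customization looks like, the fragment below defines a deliberately trivial fixed-rate control class in the style of UDT4’s CCC plug-in interface. The names used here (CCC, m_dPktSndPeriod, m_dCWndSize, UDT_CC, CCCFactory) follow the examples in the UDT documentation, but verify them against ccc.h and udt.h in your copy of the library before relying on them.

    // A trivial fixed-rate control class in the style of UDT4's CCC interface:
    // it sends at a constant rate and ignores feedback. Real algorithms would
    // also override callbacks such as onACK(), onLoss(), and onTimeout().
    #include <udt.h>
    #include <ccc.h>

    class CFixedRate : public CCC
    {
    public:
        CFixedRate()
        {
            m_dCWndSize = 83333.0;       // huge window: the flow is rate-limited, not window-limited
            setRateMbps(100.0);          // target 100 Mb/s, assuming 1500-byte packets
        }

    private:
        void setRateMbps(double mbps)
        {
            // inter-packet sending interval in microseconds
            m_dPktSndPeriod = (1500.0 * 8.0) / mbps;
        }
    };

    // Attach the algorithm to a socket by setting the UDT_CC option with a
    // CCCFactory before connecting:
    //
    //   UDTSOCKET u = UDT::socket(AF_INET, SOCK_STREAM, 0);
    //   UDT::setsockopt(u, 0, UDT_CC, new CCCFactory<CFixedRate>,
    //                   sizeof(CCCFactory<CFixedRate>));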

By 2005, UDT3 had become production ready and had a large user community. As the community grew, people started to use UDT on commodity networks with relatively small bandwidth (cable, DSL, etc.). An important property of UDP accelerated this change: it is much easier to punch through a firewall with UDP than with TCP, and UDT is built completely on UDP.

This was a completely different use scenario from the original design goal: while UDT could easily scale to intercontinental multi-10GE networks, it did not scale well to high concurrency. This motivated the birth of UDT4 in 2007.

UDT4 introduces UDP multiplexing and buffer sharing to allow an application to start a very large number of UDT connections. UDP multiplexing makes firewall management and NAT punching much easier, because a single UDP socket can carry multiple UDT connections. Buffer sharing significantly reduces memory usage. Today UDT can efficiently support 100,000 concurrent connections on a commodity computer, and scalability will improve further once the epoll API and session multiplexing over UDT connections are completed.

Over these years, I have received a great deal of useful feedback and learned a lot from the users. In particular, many features were motivated by users’ requirements. Several users have even developed their own UDT implementations, while others have created and shared wrappers for non-C++ programming languages.

UDT also benefits greatly from the open source approach, which helps it reach a wider community. Users can review the code, debug it, and submit bug fixes whenever necessary, which greatly improves code quality.

As a user-space protocol, UDT is able to incorporate new networking technologies and adapt to new network environments and use scenarios. I am confident that UDT will continue to evolve and serve the data transfer needs of more and more applications.

A High Performance Data Distribution and Sharing Solution with Sector

Over the years, many users have used UDT to power their high-speed data transfer applications and tools. Today, with Sector, we can provide an advanced data distribution and sharing application in addition to the UDT library. This solution currently works on Linux only, but we will port it to Windows in the near future, first the client side and then the server side.

Here are a few simple steps you can follow to set up a free, open source, advanced, high performance, and simple-to-use data distribution and sharing platform:

1. Download Sector from here, compile and configure the software following the manual.

2. Set up a security server, which allows you to control data access permissions, including user accounts, passwords, IP access control lists, etc. You can also set up an anonymous account for your public data.

3. Set up one or several Sector master servers, which can be on the same computer that hosts the security server and the data (slave server).

4. Set up Sector slave servers on the computers that host your data. Unlike FTP or most commercial applications that support only a single server, Sector allows you to install the servers on multiple computers, even thousands of them, and still provides a uniform namespace for the complete system.

5. Install the client software on your users’ computers and mount the Sector file system as a local directory using the Sector-FUSE module. Your users can browse and access the data in Sector just as they would browse and access data in a local directory, using file system commands such as “ls”, “cp”, etc. They will not even notice Sector is there, although this “local directory” may actually run on 1000 servers across multiple continents!

All data transfers between the clients and the slave servers run on top of UDT, so high throughput can be maintained even over wide area networks.

If you have any questions, please post them on the Sector project forum.

Sector vs. Hadoop

When I try to introduce Sector/Sphere to people I meet at conferences, I usually start with one sentence: “Sector is a system similar to Hadoop”, because many people know Hadoop and more or less understand how it works, while Sector provides similar functionality. This claim, however, is not very accurate, as there are many critical differences between the two systems.

Sector is not simply a direct implementation of GFS/MapReduce. In fact, when I started to work on Sector in 2005, I had not read the Google papers yet, and I was not aware of Hadoop until 2007. Sector originated from a content distribution system for very large scientific datasets (the Sloan Digital Sky Survey). The current version of Sector still supports efficient data access and distribution over wide area networks, a goal that was not considered by the GFS/Hadoop community. Unlike GFS, Sector does not split files. On the one hand, this limits the size of files that can be stored in the Sector file system and hence the system’s usability. On the other hand, it also greatly improves data transfer and processing performance when proper file sizes are used.

Sector – to be accurate, Sphere, as part of Sector – supports arbitrary user-defined functions (UDFs) that can be applied to any data segment (a record, a group of records, a file, etc.) and allows the results to be written to independent files or sent to multiple bucket files according to a user-defined key. The UDF model turns out to be equivalent to the MapReduce model, since each UDF can simulate a Map operation, while organizing the output by key can simulate a Reduce operation. Note that the “key” in a Sphere UDF is not part of the data record; it is used only to choose the output destination. While MapReduce treats each record as a <key, value> pair, Sphere sees all data as binary and leaves the specific processing to the UDF.

The table below compares the Sphere UDF model with MapReduce (a conceptual sketch follows the table). You can rewrite any MapReduce computation using Sphere UDFs: Sphere uses a persistent record index instead of a run-time parser, the Map and Reduce operations can each be replaced with one or more UDFs, and the Sphere output can be written to Sector files. A more detailed list of the technologies used in Sector can be found on the Sector website.

Sphere               | MapReduce
Record Offset Index  | Parser / Input Reader
UDF                  | Map
Bucket               | Partition
–                    | Compare
UDF                  | Reduce
–                    | Output Writer
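To make the analogy concrete, here is a purely conceptual sketch of a “map-like” word-count UDF. The types and function below (Record, UdfOutput, word_count_udf) are hypothetical stand-ins, not Sphere’s actual structures; see the Sector/Sphere programming manual for the real UDF signature. The point is only that a UDF receives a raw data segment and chooses an output bucket for each result, which is how the key-based shuffle of MapReduce can be emulated.

    // Conceptual sketch only: Record, UdfOutput, and word_count_udf are
    // hypothetical stand-ins, not Sphere's real API. The function scans a raw
    // data segment and routes each word to a bucket chosen by hashing it, so
    // all occurrences of the same word land in the same bucket file (the role
    // of MapReduce's partition step).
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Record {                        // one input segment handed to the UDF
        std::string data;
    };

    struct UdfOutput {                     // results and their destination buckets
        std::vector<std::string> results;
        std::vector<int> bucket_ids;
    };

    void word_count_udf(const Record& in, UdfOutput& out, int num_buckets)
    {
        std::string word;
        for (char c : in.data + ' ') {     // trailing space flushes the last word
            if (c == ' ' || c == '\n') {
                if (!word.empty()) {
                    out.results.push_back(word);
                    out.bucket_ids.push_back(
                        static_cast<int>(std::hash<std::string>{}(word) % num_buckets));
                    word.clear();
                }
            } else {
                word += c;
            }
        }
    }

    int main()
    {
        Record r{"sector sphere sector udt"};
        UdfOutput out;
        word_count_udf(r, out, 4);
        for (size_t i = 0; i < out.results.size(); ++i)
            std::cout << out.results[i] << " -> bucket " << out.bucket_ids[i] << "\n";
        return 0;
    }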

Overall, Sector performs 2 to 20 times faster than Hadoop in our benchmark applications. It is worth trying, especially if you are a C++ developer.