The Web10G Project: White Paper


White Paper: Web10G
Taking TCP Instrumentation to the Next Level

 

Introduction and Overview

We propose to make advanced per-connection TCP instrumentation available to all Linux users for all time by fully transitioning the existing Web100 prototype [1,2] to be included in future production Linux kernel releases. Both the prototype and the standard for the TCP instrumentation as specified by RFC 4898 [3] were developed under the Web100 project, but have not been incorporated into production Linux releases. This project will correct a number of weaknesses in the prototype implementation and eliminate some barriers to deployment. Portions of the software that belong in the kernel will be groomed for inclusion in the Linux main line, where they will eventually become part of all future releases. Non-kernel portions will be moved to a public open source code development site, such as sourceforge.net or code.google.com, where any number of people can contribute additions and improvements. Once the code is migrated to these shared public repositories, it will eventually be maintained by the people who are using it, without explicitly requiring ongoing effort on our part or additional funding from NSF.

The Web100 code is still in heavy use, even though we are now more than six years beyond the end of project funding. It is still being downloaded from 100-200 unique IP addresses every month. Google Scholar lists 395 papers that mention Web100. Spot-checking reveals that nearly all of the authors used the Web100 TCP instrumentation in part of their work, although a few made reference only to some of the ideas or techniques used by Web100, especially autotuning. Searching the web for “Web100” (excluding “web 100” and “web100.com”) finds about one hundred and fifty thousand pages. That number is substantially inflated by spurious matches on content relating to the “top 100 web sites”, but at least half seem to be legitimate references to our work.

At this point our success is still somewhat fragile. The current Web100 code was a prototype, and does not scale well enough to be included in production Linux releases. It is a patch that has to be applied to main line Linux releases; installing it can be extremely difficult in many common environments. Furthermore, any significant change to Linux TCP might make it infeasible for us to generate any further updates, which would completely strand the current user base. Additionally, continuing to provide ongoing support for Web100 is problematic – it is still being maintained on the margin of other projects and by donated personal time.

The plan is to address all of these weaknesses by producing a production quality implementation of the RFC specified standard TCP instrumentation. That implementation would be derived from the existing Web100 code and targeted to be included in all Linux releases. We have successfully done this once already with TCP autotuning, which was prototyped under Web100, then fully integrated into Linux, and is now shipped in every major operating system.

The Web10G project is patterned after the original Web100 project. We will build on the successful parts of Web100, including reusing much of the existing code, but will improve on the parts where hindsight has shown us a better way to do things.

Although our target audience is the Linux community, it should be noted that our Linux autotuning implementation has since been implemented in all major operating systems including Windows 7, Mac OS and several BSD variants. We are aware of prototypes of the TCP instrumentation for some Microsoft and IBM operating systems. A production quality implementation for Linux will likely cause these to be completed and brought up to the same production standards.

History

Basil Irwin, a former network researcher at the National Center for Atmospheric Research (NCAR), first proposed the idea for the Web100 project in a position paper back in 1998. The inspiration was born of the frustrating and time-consuming experience of repeatedly having to teach researchers how to “tune” their applications to achieve better performance – and realizing that the network research community really could, and should, do better in this area. In Basil’s own words:

“I remember the wakeup call for me occurred in a class at NCSA while I was teaching scientists how to manually tune each of their individual TCP sessions. And it occurred to me, and I said out loud to them: “Wait a minute. This is ridiculous! We should fix this!” And then, a couple of weeks later in a hotel room in DC, it was like a vision in which I realized that we REALLY COULD FIX THIS! That we had the people that knew how to fix it, we had built the necessary people-network infrastructure via NLANR…”

From this was born the original Web100 project, which had three simple goals:

  • To enable ordinary users to attain full network data rates across all networks without requiring help from network experts
  • To automatically tune the network stack for the underlying network environment
  • To provide network diagnostic and measurement information

Although this last item was not part of Basil’s initial vision, it is just as critical. A huge part of the success of the Internet comes from the extent to which TCP hides the details of the network from applications and users. This is critically important for the independent evolution of applications and the network, but it has a negative side as well – it hides all flaws in the network. For example, if a piece of networking equipment is randomly dropping packets, TCP silently retransmits the missing data, with the only symptom being reduced performance. TCP’s ability to compensate for such problems is so powerful that vast portions of the Internet have hidden defects that limit performance. Exposing and correcting these defects is absolutely required to attain full network performance.
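The performance penalty of such hidden loss can be made concrete with the well-known macroscopic TCP throughput bound, roughly MSS/RTT × C/√p, where p is the loss rate and C ≈ √(3/2). The sketch below is purely illustrative; the constant depends on assumptions about the loss pattern and acknowledgement behavior, so treat the numbers as orders of magnitude, not predictions.

```python
import math

def tcp_throughput_bound(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3.0 / 2.0)):
    """Steady-state TCP throughput bound in bytes/sec: (MSS/RTT) * C/sqrt(p)."""
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_rate))

# A transcontinental path: 1460-byte segments, 70 ms round-trip time.
# Even a tiny background loss rate caps achievable throughput.
for p in (1e-6, 1e-4, 1e-2):
    bw = tcp_throughput_bound(1460, 0.070, p)
    print(f"loss {p:g}: at most {bw * 8 / 1e6:.1f} Mbit/s")
```

The point of the exercise: a device dropping one packet in ten thousand shows no error to the user at all, yet it lowers the achievable rate by two orders of magnitude relative to a nearly clean path.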

Web100 was jump-started early in the year 2000 with gift funds from Cisco Systems; National Science Foundation funding for the project started in September of 2000. The project was a collaboration between staff members at the Pittsburgh Supercomputing Center (PSC), the National Center for Atmospheric Research (NCAR), and the National Center for Supercomputing Applications (NCSA).

The project was scheduled to end in the fall of 2003, but a small amount of remaining funds allowed us to do two one-year no-cost extensions that kept the project alive through the fall of 2005. This was followed by five years of ongoing support done on the margin of other projects and donated personal time. The most important accomplishments of the project required socialization and action in the broader community and were not completed until after the official end of the project:

  • Autotuning was incorporated into main line Linux in September 2005, one month after the official end of the project and two years after the end of funding.
  • The standard MIB [RFC 4898] finally passed IETF approval and was published in May 2007, four years after the end of funding.

Our belief in the importance of this project, and our commitment to it, are so strong that to this day we are still updating the kernel patch to track the latest main line Linux kernels.

Problem Statement and Evolution

Web100 was inspired by the following very simple idea: if there is a network performance problem, why don’t we just ask TCP why it is running at the speed that it is? TCP already measures the network as part of its mandatory built-in congestion control algorithms. The idea was simply to expose all of TCP’s hidden machinery, such that diagnostic tools can “look under the hood” and see how TCP actually functions while running normal network applications.

As the Web100 project matured, the Web100 concept evolved into five pieces: three software components, an Internet standard specifying extended TCP statistics, and a user support organization. The software components were the kernel TCP instrumentation; a suite of user software; and TCP autotuning, which is also part of the kernel.

The foundation of the project was the Kernel Instrument Set (KIS), the TCP instrumentation in the Linux kernel. It instruments approximately 120 different parameters of each TCP connection. Each instrument is typically implemented by one line of code, inserted into the system TCP code at a strategic location to collect information about one specific network parameter or event. The instrumentation was implemented as a kernel patch that is applied to the sources prior to compiling the kernel. It has to be implemented this way, because TCP itself is part of basic kernel functionality, and cannot be dynamically loaded.
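To illustrate the pattern (not the actual kernel code), the following Python sketch models the idea: each instrument is a single added line at a strategic point in the protocol path, updating a per-connection table of counters. The variable names follow RFC 4898; the class, function names, and table layout are invented for illustration.

```python
# Hypothetical model of the kernel-instrument idea: one added line per
# event site, updating a per-connection table (a tiny subset of ~120 vars).

class TcpEstats:
    """Per-connection instrument set, keyed by RFC 4898-style names."""
    def __init__(self):
        self.vars = {"DataSegsOut": 0, "DataOctetsOut": 0, "CongSignals": 0}

    def inc(self, name, amount=1):
        self.vars[name] += amount

# Where the real TCP output path transmits a data segment, the instrument
# is a single added line, analogous to: stats.inc("DataSegsOut")
def send_segment(stats, payload_len):
    stats.inc("DataSegsOut")               # the "one line" instrument
    stats.inc("DataOctetsOut", payload_len)

conn = TcpEstats()
for size in (1460, 1460, 512):
    send_segment(conn, size)
print(conn.vars)
```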

The per-connection TCP instrumentation supported by the kernel instrument set enables a wide variety of research, measurement and diagnostic applications. Many network researchers, working in areas such as improving TCP performance and Internet measurement, have used Web100 to gather data about how TCP interacts with the network itself. This has the advantage that the researcher can collect detailed performance data from a real protocol implementation running on a real network, which yields far more convincing results than idealized simulations. Perhaps more important to everyday Internet users are the diagnostic applications that can be used to find otherwise hidden flaws in the network. Today the widest Web100 deployment is to support NDT [4] and NPAD [5] diagnostic servers, which have become the mainstay for debugging high performance research and education networks. These tools have recently been deployed on MLab [6] where they are being used to foster transparency of broadband Internet services. When readily available to ordinary users, per-connection TCP instrumentation will foster the creation of user tools to diagnose problems with ordinary Internet applications. When readily available to educators, it will enable students to see first-hand how the Internet really works. When supported on industrial scale web servers, it will permit large content providers to monitor the network’s ability to deliver content to their customers.

As part of the kernel, the instruments are subject to the full GNU General Public License (GPL), which is viral (i.e., all derived software must also be published as open source, bearing the same license). The instruments are exported to measurement and diagnostic applications via an Application Binary Interface (ABI), based on the /proc file system. Although the KIS and the /proc file system were the foundation of the whole project, we knew from the beginning that they were fundamentally only a prototype, and would probably be subject to future reimplementation.

To facilitate Web100 adoption we developed a suite of basic diagnostic tools and example measurement applications. These can be used as-is, or can be used by other researchers as patterns for their own tools. The “userland” software included a library that provides a uniform Application Programming Interface (API) for accessing Web100 instruments. The library API was designed to “future proof” applications by hiding most of the details of the kernel ABI. However, we have since realized that the Web100 library API is more complicated than necessary, and can be simplified. Since the userland software was our own creation, it was released under a BSD style open source license, which permits other developers to incorporate our open source software into non-open source commercial products.

Part of the success of the Web100 project can be attributed to including a separately staffed support organization that pro-actively pursued input and advice from our users. The support organization solicited alpha testers and early adopters, hosted User Group meetings and rigorously followed up on all bug reports and feedback from the community. Very early in the project, Web100 was presented to a number of non-networking research communities, including High Energy Physics, Meteorology and Astronomy. Our goal was to engage researchers who we anticipated would benefit from our tools and entice them into providing early field experience and feedback.

TCP autotuning follows from Basil Irwin’s epiphany: for diagnosed problems that lie entirely within TCP itself, it would be best to just make TCP properly adjust itself without requiring any participation by the user or application. This primarily entails dynamically adjusting TCP’s buffer space according to the needs of the application and network, but just as critical is making sure that TCP negotiates the proper options (especially window scale) at the time the connection is established.

The Web100 autotuning code has already changed the world. It was incorporated into the Linux main line in September of 2005, and subsequently dropped from the Web100 kernel patch. At that time it passed out of our control and fully transitioned to community support. At about the same time we also described our autotuning techniques and rationale to the TCP developers at Microsoft who implemented it in Windows Vista. (We could not share our actual code with them due to the viral nature of the mandatory Linux kernel license.) Autotuning has subsequently been deployed in all major operating systems, including Windows 7, Mac OS, and several flavors of BSD. Nearly every system shipped today includes some variant of autotuning.

The other primary deliverable from the Web100 project was the TCP extended statistics MIB, RFC 4898, published in May of 2007 [3]. A MIB is an OS independent formal description of network instrumentation and describes the data that the TCP instrumentation can provide to diagnostic tools. Unlike our kernel instrumentation and example tools, which were known to be somewhat ephemeral, the MIB is now a Proposed Internet Standard, and is already a permanent legacy of the Web100 project. Since the MIB was developed with IETF input (as are all Internet standards) it evolved away from the prototype instruments that we developed under the Web100 project. Most of the changes were as simple as renaming instruments (e.g. Web100 “DataPktsOut” became “DataSegsOut”), but there were a few more substantial changes. There was a preliminary (unpublished) effort to bring the kernel instruments back into alignment with the Internet standard MIB, however since the MIB was finalized well after the end of the Web100 project, the work was never completed or updated for any kernels newer than September 2007.

The most serious problem with the existing Web100 implementation is that the application binary interface (ABI) used to access the kernel instruments is built on the /proc file system. The statistics for each TCP connection appear as special binary files in separate directories, one for each TCP connection. The /proc file system was chosen for its conceptual simplicity and ease of prototyping, but has too much overhead and does not scale. It is completely unsuitable for industrial scale servers because it is limited to about 30,000 TCP connections on one system. (Large servers often carry many hundreds of thousands of connections.) As a consequence, the /proc ABI is a non-starter for inclusion into the Linux main line, or for that matter, any other large Linux distribution, such as Red Hat, Debian, Ubuntu, SUSE, or even the Scientific Linux distribution produced by Fermi National Accelerator Laboratory (FNAL) and the European Organization for Nuclear Research (CERN).
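The scaling issue is visible in the access pattern itself. This Python sketch mimics the /proc-style layout (one directory per connection holding a binary stats file); the directory names and record format are illustrative stand-ins, not the actual Web100 encoding.

```python
# Illustrative model of a /proc-style per-connection ABI. A temporary
# directory stands in for the kernel-exported file tree.
import os
import struct
import tempfile

RECORD = struct.Struct("<III")  # hypothetical fields: SegsOut, SegsRetrans, CurCwnd

def write_conn_stats(root, cid, segs_out, retrans, cwnd):
    """Simulate the kernel exposing one connection's counters as a file."""
    d = os.path.join(root, str(cid))
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, "read"), "wb") as f:
        f.write(RECORD.pack(segs_out, retrans, cwnd))

def read_all_conns(root):
    """One poll = a directory walk plus open/read/close per connection."""
    stats = {}
    for cid in os.listdir(root):
        with open(os.path.join(root, cid, "read"), "rb") as f:
            stats[int(cid)] = RECORD.unpack(f.read(RECORD.size))
    return stats

root = tempfile.mkdtemp()
write_conn_stats(root, 42, 1000, 3, 65535)
write_conn_stats(root, 43, 10, 0, 14600)
print(read_all_conns(root))
```

Polling a busy server this way costs a full directory walk plus a file open, read, and unpack per connection per sample, which is exactly the kind of per-connection overhead that rules the /proc ABI out at industrial scale.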

Since the /proc file system was unsuitable for inclusion in any major Linux releases, people who want to use Web100 are forced to build their own kernels from source, which can be very difficult, depending on what kind of hardware they have. Main line Linux, supported by Linus Torvalds and friends, is 100% open source and quite easy to build. However, most modern systems include specialized hardware such as graphics accelerators, wireless cards, etc., for which there are no full function open source drivers. The downstream distributions, such as Red Hat, Debian, and Ubuntu, all incorporate non-open source binary drivers (provided by the hardware vendors) into the prebuilt kernels that they distribute. Reconstructing these kernels has proven to be beyond the capabilities of most potential Web100 users, and as a consequence they are forced to choose between a Web100 enabled kernel that does not support all of their hardware and a full function binary distribution kernel without Web100. The difficulty in rebuilding distribution kernels from source is by far the largest barrier to Web100 deployment and use.

One might assert that some commercial enterprise should complete the work of raising Web100 code to production quality. That has not happened yet, and it is pretty easy to understand why: for any individual network performance problem, there are always easier short-term approaches, such as having a network expert inspect a packet trace. However, these short-term approaches are only cost effective in the short term; they do not move the overall technology forward, and are far more expensive in the long run than investing in making all TCP measurement and diagnosis easier. It is not profitable for any one commercial entity to think in terms of reducing everyone’s long term costs by supporting this project. At the same time it does make sense for NSF to consider everyone’s long term costs and the potential to make network debugging easier for everyone.

Moving Forward: An Overview of Web10G

The ultimate goal of the Web10G project is to cause standard TCP instrumentation to be included in the main line Linux kernel such that it will propagate out to all Linux versions and all users without continued effort from us or continued funding from the NSF. The most critical step, inclusion in the Linux main line, is not under our direct control, and cannot be assured. After all, once the main line kernel developers absorb our code, they will incur the very costs we are trying to shed. Our strategy in the Web10G project is to produce: production quality code that is ready for inclusion in the Linux main line; a suite of open source diagnostic and measurement tools that the instrumentation will enable; and a community of active users who crave these tools for a wide spectrum of uses and systems, and who can justify the value of the TCP instrumentation to the rest of the Linux community.

We are going to make one very important design change from the Web100 project. Under Web100, the TCP instrumentation and /proc ABI were combined into a single kernel patch. We are going to separate them and implement the new ABI as a dynamically loadable kernel module (dlkm). This has a number of advantages, all of which stem from the ability to support multiple ABIs and switch between them at will. First, the legacy /proc ABI from Web100 can be preserved almost for free, which will facilitate a faster launch of some of the testing and tools efforts, since they would otherwise be blocked waiting for the new ABI. Second, it greatly lowers the memory footprint for people who don’t need TCP instrumentation at all, since the ABI can be omitted entirely. Third, if somebody in the kernel community raises an objection to our design late in the project, it provides a relatively inexpensive path for starting over, since we would not have to abandon or freeze all of the application effort while reworking the ABI and library. And fourth, it provides the opportunity for network researchers to implement their own specialized, non-standard ABIs in the future.

Implementing the ABI as a dlkm does add complexity and an additional interface that needs to be designed carefully. However, the interface only needs to be consistent within each release, and can be changed at will between releases. Thus the dlkm design does not need to be concerned about its own future legacy, as do the ABI and API designs.

Clearly the quality of the kernel code itself is paramount. And while our own standards have to be quite high, it is most important that we accept and address all input from other experts in the kernel development community.

The suite of basic diagnostic tools and example applications will serve to prove both the usefulness and the correctness of the kernel instruments. They will be transitioned to an external open source shared development site, such as sourceforge.net or code.google.com, where anybody can contribute additions and patches, and the software can continue to evolve and grow beyond the end of this project without requiring a commitment for ongoing effort from us or funding from NSF. This software will bear a BSD style non-restrictive open source license as the Web100 prototypes do already.

We will maximize our engagement with the user community by holding two user group meetings during the project, and actively solicit input about any additional tools and the relative priorities of various aspects of the project. The faster we identify and fill unmet needs, the more traction we will have in the community, and the better they will be able to help us to justify including the instrumentation in the kernel main line.

Deliverables and Activities

The foundation of the project will be the kernel instruments, implemented as a patch against main line Linux. They will not be considered completed until they are included in the Linux main line, and will be groomed for inclusion throughout the project and beyond, if necessary. The kernel instrument patch will include the hooks to support an Application Binary Interface (ABI) as a dynamically loadable kernel module (dlkm).

The new kernel instruments will be bootstrapped by updating the preliminary standard MIB compliant instruments from the Web100 project to the current Linux kernel, and converting the included /proc ABI to a dlkm. This will permit some of the instrument testing and application development to proceed using the legacy Web100 libraries, without waiting for the new ABI, API and library to be completed.

The new Application Binary Interface (ABI) has to be designed quite carefully, since it has to balance resource constraints against performance constraints, and has the potential to define a new legacy that future implementers will have to address. A key question will be what kernel subsystem should be used to export the instruments. Netlink is one obvious choice, since it is used to control other networking subsystems. However, Linux also supports multiple “message bus” style internal interfaces that are fast and efficient, and any one of them might be ideal. Implementing the ABI as a dlkm gives us the option of implementing more than one and choosing the best based on actual measured performance or other criteria. We want to get as much early feedback as possible about our ABI design. There is a risk that somebody in the Linux kernel community will object to some specific details of our design, but it is very unlikely that we will get their attention early enough to prevent some duplicate effort, if they ask for changes when we attempt to submit our code for main line inclusion.
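To make the design space concrete, here is a sketch of the kind of message a netlink-style ABI might carry: a small fixed header followed by type/length/value attributes. The layout, field widths, and attribute codes below are hypothetical; they illustrate the flavor of the interface being weighed, not a committed wire format.

```python
# Hypothetical netlink-style stats message: fixed header + TLV attributes.
import struct

HDR = struct.Struct("<IHH")   # connection id, attribute count, flags
ATTR = struct.Struct("<HHI")  # attribute type, length, 32-bit value

def pack_stats(cid, attrs):
    """Encode one connection's counters as (type, value) attributes."""
    msg = HDR.pack(cid, len(attrs), 0)
    for atype, value in attrs:
        msg += ATTR.pack(atype, ATTR.size, value)
    return msg

def unpack_stats(msg):
    """Decode a message back into (connection id, [(type, value), ...])."""
    cid, count, _flags = HDR.unpack_from(msg, 0)
    attrs, off = [], HDR.size
    for _ in range(count):
        atype, _alen, value = ATTR.unpack_from(msg, off)
        attrs.append((atype, value))
        off += ATTR.size
    return cid, attrs

wire = pack_stats(42, [(1, 3000), (2, 7)])  # e.g. 1=SegsOut, 2=SegsRetrans
print(unpack_stats(wire))
```

One message can batch many connections' counters into a single kernel crossing, which is the scaling property the /proc ABI lacks.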

We may choose to support kernel patches for a second version of Linux. Early in the Web100 project we were supporting kernel patches against two different kernels (a late 2.4 version and an early 2.6 version). At our first user group meeting we received overwhelming advice to focus on only one series of kernel versions, the very newest main line kernels from Linus Torvalds. Today, the situation is a bit more complicated. Although all Linux kernels are still ultimately derived from Torvalds’s main line, other versions, such as Red Hat Enterprise Linux (RHEL), have established themselves as the basis for production quality operating systems by deliberately lagging slightly behind it. This is done so they have the opportunity to select the best main line vintages and can then cherry-pick bug fixes, updates and new features to maximize their overall quality and robustness. We will seriously consider supporting patches for RHEL and perhaps other Linux versions. Supporting patches to the main line is absolutely critical for our primary goal of main line inclusion, but supporting patches for RHEL has the potential to provide access to a far larger user base. If we support a patch for RHEL, we can also strive for inclusion in some downstream distributions that are derived from it, such as Scientific Linux (SL), which is widely used in communities such as High Energy Physics. Supporting a patch for RHEL or other Linux versions would be relatively expensive: they do not use compatible source management technologies, so the effort to maintain and update the patches has to be duplicated – there is no economy of scale. Each patch would have to be maintained as though it were a separate deliverable, independent of all others. We will seek input from the user community as we consider our options.

The library API, used to access the kernel ABI, will be based on the existing Web100 library, but simplified slightly to eliminate some details of the /proc file system that were unnecessarily exposed. Like the Web100 library API, the design goal is to provide a simple uniform OS independent method to access kernel TCP instruments such that applications written today for Linux might easily be ported to other operating systems and kernel implementations in the future.
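As a rough illustration of the intended simplicity, the sketch below shows the shape such an API might take: open a connection's instrument set, snapshot it, and compute deltas between snapshots. All names here are invented for illustration; they are not the Web100 library API nor a committed Web10G design.

```python
# Hypothetical shape of a simplified userland API over the kernel ABI.

class EstatsConnection:
    """Handle for one TCP connection's instrument set."""
    def __init__(self, cid, reader):
        self.cid = cid
        self._reader = reader  # stands in for the real kernel ABI

    def snapshot(self):
        """Return {variable-name: value} for this connection right now."""
        return dict(self._reader(self.cid))

def delta(before, after):
    """Per-variable change between two snapshots of the same connection."""
    return {k: after[k] - before[k] for k in before}

# A fake kernel reader replaces the real ABI in this sketch: it hands
# back two successive samples for connection id 7.
samples = [{"DataSegsOut": 100, "SegsRetrans": 1},
           {"DataSegsOut": 250, "SegsRetrans": 4}]
conn = EstatsConnection(7, lambda cid: samples.pop(0))
first, second = conn.snapshot(), conn.snapshot()
print(delta(first, second))
```

Hiding the kernel ABI behind a reader function like this is the "future proofing" the text describes: applications written against the snapshot/delta surface need not change when the underlying ABI does.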

Early in the project we will launch a Web10G website, which will be a fairly static, traditional project information site, to serve as an anchor for the project. All early work on the “userland” tools, including the library, will be done using the legacy Web100 source repository, which will facilitate a faster launch.

Before the alpha release we will migrate all non-kernel source code to a public code development site, such as sourceforge.net or code.google.com, that includes an integrated source repository, a wiki for project plans and documentation, a ticketing system for tracking bugs and message boards for developer, contributor and user discussions. By doing this transition early, we will facilitate a seamless transition to community support later, when it is time to reduce our own role in the project. The source repository will be seeded with the example diagnostic and measurement tools developed under the Web100 project and used to update them to access the new instrument set (mostly name changes) and new library API. Depending on which facility we choose to host the source repository, we may elect to mirror it on a server at PSC as insurance against site failures.

We will issue our first Alpha Release as soon as the core features of the kernel, dlkm ABI, library and an initial suite of tools are robust enough. It is important that the code builds on common systems and is reasonably stable (crash free) when accessing instruments on single-core systems. The first alpha release might not be robust on multi-core systems, or when loading and unloading the dlkm while busy. It might not build on the full breadth of systems we expect to support and it might not include updated versions of all of the Web100 diagnostic tools. We want it to work well enough that early adopters can get some use out of it, and can start to provide us with feedback and possibly even debugging help. It has been noted in the open source community that creating software is a serial process, but debugging it can be parallelized.

Web10G will not have an independent Support Organization, as Web100 did. Rather, we will make use of the programming development staff, but to encourage some of the same independence, support activities will be cross-tasked: kernel people will be the prime responders for user tools, and user tool people will be the prime responders for potential kernel issues. We want to avoid the situation where people are doing QA on their own work.

We want to start to phase out Web100 as soon as Web10G is a credible replacement. Critical to phasing out Web100 is encouraging the diagnostic application developers to overhaul their code to use the new API and instruments. Given that much of this will be as simple as updating a large number of names, we can provide some semi-automatic tools to help. However, once we announce that we are not supporting Web100 any more, it will be dead.
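Much of that overhaul is mechanical renaming, which is why semi-automatic help is plausible. The sketch below shows one such helper; only the DataPktsOut/DataSegsOut pair is taken from this paper, and the second pair is an illustrative placeholder, not a confirmed mapping.

```python
# Sketch of a semi-automatic rename helper: apply a Web100-to-RFC-4898
# name map to source text, matching whole identifiers only.
import re

RENAMES = {
    "DataPktsOut": "DataSegsOut",  # example given in this white paper
    "PktsRetrans": "SegsRetrans",  # illustrative placeholder pair
}

def migrate(source):
    """Rewrite old instrument names to their standard MIB equivalents."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, RENAMES)) + r")\b")
    return pattern.sub(lambda m: RENAMES[m.group(1)], source)

old = 'read_var(conn, "DataPktsOut") + read_var(conn, "PktsRetrans")'
print(migrate(old))
```

A real migration tool would still flag the handful of substantive semantic changes for human review; only the pure renames are safe to automate.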

We will develop some educational tools and courseware, specifically designed to use TCP instrumentation to teach about TCP’s internal operation. These materials will first be field tested in three different undergraduate classes offered at Carnegie Mellon University and the University of Pittsburgh. They will then be published with the rest of the tools, but in a separate educational area designed to facilitate rapid adoption by harried instructors.

We developed internal tools for validating Web100 robustness that were never released. These were scripts that ran pathologically abusive test sequences on large multi-core server class machines to verify that all of the locking code is correct. These will be extended to confirm that the ABI dlkm can be safely unloaded and reloaded on a busy server. We will update these scripts and post them in a testing area on the open source site.

Under Web100 we never developed a formal instrument correctness-testing suite. The idea would be to start from some existing trace based TCP analysis tools, such as Shawn Ostermann’s tcptrace [7], and build scripts to validate that the kernel and trace analysis instruments agree. Such testing is not possible for all instruments, since many of the kernel instruments export otherwise hidden state that cannot be observed from a trace. In any case, even a partial set of correctness testing tools will make it considerably easier to support instruments in multiple kernel versions.
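The core of such a correctness test can be sketched simply: derive a counter from a packet trace the way a trace analyzer would, then compare it with the value the kernel instrument reported. The trace record format and variable name below are simplified stand-ins for what real tcptrace-based scripts would use.

```python
# Sketch of trace-vs-kernel instrument validation. A trace is modeled as
# a list of (direction, payload-bytes) records.

def segs_out_from_trace(packets):
    """Count outgoing segments carrying data, as a trace analyzer would."""
    return sum(1 for direction, payload in packets
               if direction == "out" and payload > 0)

def validate(kernel_value, trace_value, name="DataSegsOut"):
    """Compare a kernel-reported counter against the trace-derived one."""
    if kernel_value != trace_value:
        return f"MISMATCH {name}: kernel={kernel_value} trace={trace_value}"
    return f"OK {name}={kernel_value}"

# Two data segments out, one inbound ACK, one outbound pure ACK, one more
# data segment: the trace-derived DataSegsOut should be 3.
trace = [("out", 1460), ("out", 1460), ("in", 0), ("out", 0), ("out", 512)]
print(validate(3, segs_out_from_trace(trace)))
```

As the text notes, this only works for externally observable instruments; variables exporting hidden internal state need other forms of review.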

As soon as the kernel components are tested well enough so that we have some confidence that they are correct, we want to present them to the Linux “netdev” team to start the Kernel Inclusion process.

We will host two User Group meetings, which will be used to discuss our plans and gather input from the community. The primary input we will be seeking is advice on allocating our finite resources such that we deliver the best possible code to the largest possible user base. The number one question we will be seeking advice about is how best to address the proliferation of Linux distributions and the difficulties users have building kernels from source. We need to carefully balance the tradeoffs of supporting additional kernel source or even binary releases. Our user base is quite likely to have better insights into all ramifications of the various options. We suspect that an optimal strategy may be to support patches against RHEL source, but groom them for inclusion in downstream binary releases produced by others.

Packaging, Testing and Distribution

Web10G will build on the software packaging, testing and distribution techniques successfully employed by the Web100 project. The key feature of this process was an entirely automatic facility for building software releases (“tarballs”) from the sources kept in a source management system (CVS at the time). The process was sufficiently automated such that we could easily cut releases daily or even more frequently if needed. Everybody (except the lead developers) used the same automatically generated tarballs to build and install Web100 software. This included our own internal testing, our recruited alpha testers, and everyone else who used our software. The ease of cutting releases is why, to this day, Web100 version strings include the encoded release date and time in addition to more conventional major.minor.release version triplets.
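A versioning scheme of that shape is easy to sketch: a conventional major.minor.release triplet plus an encoded UTC build timestamp. The format below is illustrative only; it does not reproduce the exact Web100 version string.

```python
# Illustrative release-version generator: triplet plus build timestamp,
# so every automatically cut tarball is uniquely identifiable.
from datetime import datetime, timezone

def release_version(major, minor, release, when=None):
    """Return e.g. '2.5.28-200705140930' (timestamp is YYYYMMDDHHMM UTC)."""
    when = when or datetime.now(timezone.utc)
    return f"{major}.{minor}.{release}-{when:%Y%m%d%H%M}"

v = release_version(2, 5, 28, datetime(2007, 5, 14, 9, 30, tzinfo=timezone.utc))
print(v)
```

Encoding the build time directly in the version string is what lets daily (or more frequent) cuts coexist with a slow-moving triplet: bug reports can be matched to the exact tarball without a separate build registry.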

The support team put especially high priority on any issues that seemed to be related to differences between user platforms. We did such things as encouraging anybody (Web100 users and our own staff) who received new personal workstations to install Web100 ahead of all other customizations, to make sure we had a precise understanding of all dependencies between Web100 and other system software. Note that every routine Web100 upgrade tested our software against an extremely diverse pool of system customizations and Linux distributions. Through this process we occasionally discovered dependencies on specific versions of other tools or system software, such as perl. These were generally corrected by recoding our software to not use version sensitive features, and as a consequence we were able to minimize our dependencies on any specific features of system software, while maximizing the range of acceptable base versions and eliminating unnecessary dependencies on other resources.

The assured repeatability of the release process minimized the need for formal regression testing. If somebody discovered a bug that required a one-line fix, we were confident that a fresh release would not exhibit new bugs unrelated to the one line fix.

The automated release software is still functional today, more than 8 years after Web100’s initial release. It is a big part of the reason that we have been able to continue to support Web100 software for so long after the end of formal funding. When the main line Linux kernel is updated, it usually takes less than an hour of our time to merge changes, generate a new release tarball and post it to the web.

We will extend our testing to include the use of the NMI Build and Test facility. However, since the NMI Build and Test facility does not include support for building or using non-standard kernels, it can only be used to verify that the diagnostic tools build in a wide variety of environments. Once the NMI Build and Test facility supports virtual machines with run-as-root capabilities we will consider using it for testing tools and kernels as well.

References

[1] M. Mathis, J. Heffner, R. Reddy, “Web100: Extended TCP Instrumentation for Research, Education and Diagnosis” ACM Computer Communications Review, Vol 33, Num 3, July 2003.

[2] The Web100 project web pages: www.web100.org

[3] M. Mathis, J. Heffner, R. Raghunarayan, “TCP Extended Statistics MIB”, Proposed Internet Standard RFC 4898, May 2007. Obtain via: http://www.ietf.org/rfc/rfc4898

[4] M. Mathis, J. Heffner, P. O’Neil, P. Siemsen “Pathdiag: Automated TCP Diagnosis”, Passive and Active Measurement, 2008. M. Claypool and S. Uhlig (Eds.) LNCS 4979. Obtain via: http://staff.psc.edu/mathis/papers/mathis08pathdiag.pdf

[5] R.A. Carlson, “Developing the Web100 based network diagnostic tool (NDT)”, Passive and Active Measurement, 2003. Obtain via: http://www.nlanr.net/PAM2003/PAM2003papers/3703.pdf

[6] “Measurement Lab”, http://www.measurementlab.net/

[7] S. Ostermann, “tcptrace”, http://www.tcptrace.org