NPAD Diagnostic Servers
The NPAD diagnostic server, Pathdiag, is designed to easily and accurately diagnose problems in the last-mile network and end-systems that are the most common causes of severe performance degradation over long end-to-end paths. Our goal is to make the test procedures easy enough, and the report it generates clear enough, to be suitable for end-users who are not networking experts. In most situations a single test run, launched from a web page, will generate a report that enumerates all problems affecting downloading (fetching) of data from a remote site. Although the report contains extensive explanations of the results, we do not assume that end-users will be able to correct network problems themselves. The reports include guidance to help end-users properly engage a system or network administrator, along with the information the administrator needs to locate the problem.
NPAD diagnostic servers are one of the initial tools running on Measurement Lab, an open platform for researchers to deploy Internet measurement tools. By enhancing Internet transparency, Measurement Lab aims to help sustain a healthy, innovative Internet.
For information on how to install an NPAD server, see the Installing NPAD Diagnostic Servers document.
Table of Contents
- Support status
- Theory and Method
- Procedure for Using the NPAD Diagnostic Server
- Current NPAD Diagnostic Servers
- Interpreting Results
- About NPAD
Support status

This is still an experimental service. The procedures and reports are not yet as clear and easy to use as we would like, and there is still room for improvement. Individual servers may be down for extended intervals, and we reserve the right to make changes in the future. You can help us improve this service by using it and providing feedback. We are particularly interested in cases where the documentation or results are inaccurate, incomplete, or misleading.
BTW: If you are hot on the trail of a network performance problem and pathdiag is not helpful, please get us involved before you fix your network problem, so we can debug pathdiag on live measurement results. If you manage to find a situation that confuses pathdiag, you have an opportunity to get some free network consulting while we figure out why it missed the mark!
Please send questions, comments and suggestions to email@example.com
Theory and Method

The NPAD project addresses the set of problems associated with end-hosts and their connections (the “last mile”) to high-speed backbone networks.
Universities and research institutions are typically connected to a high-speed backbone through a GigaPoP or other network providing regional traffic aggregation. Since backbones such as Internet2 and ESnet are generally well provisioned and monitored, when there is a performance problem it is usually within the edge network, somewhere along its connection to the GigaPoP, or in the end-system itself. But as described in the next section, TCP’s robustness in the presence of flaws often makes it difficult for local tests to detect and troubleshoot these problems in the last mile.
The diagnostic servers made available through this project are intended to help in troubleshooting these performance problems. There are two ways for end-users to access these diagnostic servers:
- Java Clients – To be used by machines that are capable of running Java-enabled web browsers.
- Command line programs – To be run on machines that are not capable of running Java-enabled web browsers (such as supercomputers). This mode requires users to download and compile a small C program.
In addition, expert users will (in the future) be able to run the pathdiag tool in standalone mode without the web-server framework. This will permit networking experts to use local techniques for diagnosing flaws in the interior sections of their network.
Network performance debugging, often called “TCP tuning”, is an extremely difficult task because nearly all flaws have identical symptoms: reduced performance (data throughput). For example, if the network card is dropping packets because of a bad cable, the lost packets are silently retransmitted by the TCP retransmission algorithm. The user would never observe missing data or data corruption. The only symptom is that the connection took a little longer than it should have, while the missing data was retransmitted.
The consequences of this “single symptom” property are compounded by another effect: TCP’s ability to compensate for flaws is inversely proportional to the round trip time (RTT) of the path being tested. For example, a flaw that causes an application to take an extra second on a 1 millisecond path will generally cause the same application to take an extra 10 seconds on a 10 millisecond path. This “symptom scaling” effect arises because TCP’s ability to compensate for flaws is metered in “round trips” or RTTs: if a given flaw is compensated in 50 round trips (typical for losses on a medium speed link), then a single loss affects a 1 ms path for only 50 ms, whereas a 10 ms path will be affected for 500 ms.
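The arithmetic behind symptom scaling can be sketched directly; the 50-round-trip recovery figure and the example RTTs below are taken from the text above, and the function name is illustrative.

```python
def stall_ms(rtt_ms, recovery_rtts=50):
    """Time (in ms) that a single loss stalls a transfer, assuming
    recovery takes a fixed number of round trips (~50 for losses on
    a medium-speed link, per the example above)."""
    return recovery_rtts * rtt_ms

print(stall_ms(1))   # 1 ms path: a single loss costs ~50 ms
print(stall_ms(10))  # 10 ms path: the same loss costs ~500 ms
```

The cost of each flaw is metered in round trips, so the same flaw is ten times more painful on a path with ten times the RTT.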
Anybody who has been involved much in network diagnosis is likely to have run into the following situation:
    Client                       Server
      |                             |
      +-+-----------------------+---+
      A B                       C   D
Say you are trying to debug an application on a long path from (A) to (D) that passes through (B) and (C). You can easily test (A) to (B) and (C) to (D), both of which pass your tests, so you think you can inductively “prove” that the flaw is between (B) and (C). But the truth may be that the real flaw is between (A) and (B), which has a very short RTT, so the flaw is effectively masked by TCP. The flaw is only detectable with long RTT connections that include not only the section from (A) to (B) but also a high delay section such as the one from (B) to (C).
The pathdiag tool accounts for RTT scaling effects by taking advantage of the instruments available in a Web100 instrumented kernel. In order to do this, pathdiag needs to know some key parameters of the TCP connection over the long path: the target data rate for the application, the round trip time of the entire path, and (in the future) any MTU limit imposed elsewhere in the path. Continuing our example above, by knowing the RTT between (A) and (D), the target data rate for the application, and by measuring the effect of any flaws in the path from (A) to (B) it can estimate the impact of these flaws on the application running over the entire path from (A) to (D).
Unlike other testing methodologies, pathdiag gets more sensitive as you shorten the path section from (A) to (B). (e.g. pick a new (B) closer to (A)). If the RTT is small enough, flaws that are show-stoppers for the entire path do not interfere with other diagnostic tests, permitting a single pathdiag run to detect multiple flaws. Typically, when debugging a long end-to-end path with conventional techniques, each flaw has to be diagnosed and corrected before you can even detect the next flaw – debugging on a long path is highly serial. With pathdiag, a single run is likely to fully diagnose multiple flaws.
Although pathdiag can be deployed in a number of ways, the approach of embedding the tool in diagnostic servers at a number of GigaPoPs makes it easy to diagnose network flaws at the edges of the network.
The server itself is located at (B), typically in a GigaPoP or near the edge of a high speed backbone. The diagnostic client that runs at (A) is either a lightweight Java applet that can run in any standard web browser or a simple C program that can be compiled on any unix-like system.
Note that the data has to flow from the diagnostic server at (B) towards the client at (A). This is because pathdiag relies on the Web100 instrumentation in the TCP sender to measure critical TCP parameters. For most applications, where a user at (A) is retrieving data from (D) this is the correct direction for the test. If the primary flow is in the opposite direction, pathdiag may not be able to detect some flaws. However, since most flaws affect data flowing in both directions, most would still be diagnosed.
Procedure for Using the NPAD Diagnostic Server

To test your network connection with pathdiag, you need to do the following things:
- Pick a goal: You must have a remote target (D) in mind. Otherwise pathdiag cannot extrapolate the results to evaluate the link.
- Estimate the path RTT: Estimate the round trip time of the full end-to-end path from (A) to (D). You can measure this with the readily available tools ping or traceroute, run from the client (A) to the application server (D).
- Estimate the application data rate: You must have a realistic expectation for the application data rate. Start from the known application performance on a local (short) path and reduce it to an appropriate fair share of the expected bottleneck link bandwidth, less link overhead. For example, if you are connected via a dedicated Fast Ethernet (100 Mbit/s), at best you can only expect to get about 90 Mbit/s due to link overhead. If that link is shared with other users you may only get some fraction of the link, e.g. 20 Mbit/s.
Note that the sensitivity to some types of flaws (especially packet loss) goes up as the square of the RTT and the data rate. It is unreasonable to say “fix every flaw”, because some flaws (such as background bit errors) are intrinsically statistical and expensive to correct completely. It might be appropriate to choose less stringent goals.
- Locate a suitable diagnostic server: Pick a diagnostic server from the list below, near the end of the path that you want to test. Assuming the section from (B) to (D) is the backbone and you would like to test the end at (A), choose (B) on your campus or at the nearest ISP (e.g. your GigaPoP). The RTT from your end-system to the chosen server should preferably be less than 10 milliseconds (the precise limit has not been verified yet). If you have Java enabled in your web browser, you should be able to simply fill in the form with estimates of RTT and data rate and click “Start Test”. You should see log messages from the running test, followed by a diagnostic report. Alternatively, you can download the C program on that web page and follow the instructions for compiling and running it.
- Correct all failed tests: they will prevent the application from meeting the end-to-end performance goals. See the general comments on interpreting the results. Detailed advice is shown in the help ([?]) for each message in the report.
- For advice on what to do next in the broader debugging context, see the Outcomes section.
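As a rough illustration of the RTT-squared sensitivity noted above, the model from the Mathis et al. paper cited in the glossary (rate ≈ MSS / (RTT · √p)) can be inverted to estimate the highest loss rate a given goal can tolerate. This is a back-of-the-envelope sketch, not part of the pathdiag procedure; the function name and the 1460-byte MSS default are illustrative assumptions.

```python
def max_tolerable_loss(target_rate_bps, rtt_s, mss_bytes=1460):
    """Invert rate <= MSS / (RTT * sqrt(p)) (Mathis et al., 1997) to
    get the largest loss probability p consistent with the goal.
    Illustrative sketch only -- not part of pathdiag."""
    mss_bits = mss_bytes * 8
    return (mss_bits / (rtt_s * target_rate_bps)) ** 2

# A 90 Mbit/s goal is 100x more sensitive to loss on a 10 ms path
# than on a 1 ms path (sensitivity scales as 1/RTT^2):
print(max_tolerable_loss(90e6, 0.010))  # ~1.7e-4
print(max_tolerable_loss(90e6, 0.001))  # ~1.7e-2
```

This is why a loss rate that is harmless on a short campus path can make the same data rate unreachable across a backbone.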
Current NPAD Diagnostic Servers

Select the NPAD diagnostic server that is the closest to you in terms of network round trip time. This will generally be the geographically closest server connected to the same national backbone as you are. Most of the deployed NPAD servers were built as part of Internet2’s perfSONAR Performance Toolkit distribution. perfSONAR includes several measurement tools and services in addition to NPAD.
- perfSONAR in Pittsburgh, PA via PSC/3ROX/I2 (1GbE connected, 1500B MTU)
- perfSONAR in Pittsburgh, PA via PSC/XSEDE (10GbE connected to XSEDE, 9000B MTU)
- perfSONAR list maintained by ESnet (includes many hosts other than those on ESnet)
- Measurement Lab sites
Interpreting Results

When you go to the nearest NPAD server and run a diagnostic test as suggested above, pathdiag returns a web page that reports all of the test results. The messages indicate which tests passed or failed, and suggest appropriate actions for further debugging. Consider bookmarking each report so you can refer back to an earlier test, or forward it to an expert for further analysis.
Briefly, the results page shows the following:
- Most messages have a help link “[?]” to get additional information about the test or the results.
- Pass: Green indicates that the test did not detect any flaws. The help ([?]) attached to passing tests also contains version-specific information about flaws that might not be detected by this test.
- Warnings: Ambiguous results are shown in orange. Results can be ambiguous for two reasons: a non-serious flaw that might not affect performance on the end-to-end path, or a flaw with the tester or test process, such that the particular test could not determine whether a more serious flaw was present. For example, if the target is improperly tuned (which will be marked as a failing test), it may also prevent some path tests from completing properly, which will be marked as warnings.
- Fail: Red indicates that the test detected a flaw, which must be fixed before the network path or end-system can meet the application requirements.
- Actions start with a “>” and are in italics. They follow warnings and failed tests and give specific information about how to correct the flaw. The help ([?]) attached to actions always gives more details and additional advice on how to correct the flaw.
The NPAD diagnostic server can detect nearly all flaws in the last mile and end-system under test. But it cannot repair the flaws, nor can it detect flaws elsewhere, so once you have test results in hand you have to use them to get the right people to take corrective action and/or perform additional tests.
For this reason it is especially important that you keep good notes of your experiments and record the results (add the reports to your bookmarks or favorites). When you report a problem to somebody else, expect to be asked for the test results. We suspect that most people would rather that you paste the report URL into email than send the entire report as an attachment.
The test outcomes fall into several broad categories:
These are flaws in the computer system that is acting as the test target (the web client) at one end of the path under test. They are best corrected by having a system administrator refer to the detailed tuning directions at PSC’s TCP tuning page or the similar pages at LBL. Note that some operating systems may be missing required TCP features. Such systems cannot be expected to perform well and should be upgraded or replaced.
In most organizations, the networking group is only responsible for the network as far as the connector on the wall. Generally they cannot (or will not) make changes to computer systems which are not theirs. Only the owner or a properly authorized system administrator should make changes to the end system.
To further localize the flaws, test shorter subsections of this path or partial alternate paths by using additional testers and targets. Since there can be hidden switches and other invisible infrastructure, it is rarely effective to debug a network path without participation by the responsible network engineer. Unless you have access to the physical network and software configurations, you should not try to debug the path, except for a couple of specific checks:
- Excess loss rate and/or reduced data rate can be symptoms of true network congestion caused by other traffic. If you have reason to suspect that the problems may be caused by other network traffic (e.g. the test results vary from run to run), try re-running the tests early in the morning or at other times when you expect the network to be lightly loaded. Although problems of insufficient capacity can sometimes be fixed simply by re-balancing the network (e.g. by moving some users to different network ports), they are often impossible to repair without the proper application of money. You may want to keep this in mind when you report the problem.
- If the loss rate test failed and you have access to the last few feet of cabling, you may want to try swapping the cable, the Network Interface Card (NIC), or the entire end-system, to eliminate them as sources of loss.
- If the “duplex mismatch” test failed, the system administrator should check and possibly adjust the network duplex settings on the NIC. This may require the help of somebody who can check settings in the network switch.
- If there are several nearby NPAD diagnostic servers run by different organizations (e.g. the campus and the ISP), it can be helpful to test against more than one diagnostic server. In general you always want to start by reporting the problem to the closest networking group (e.g. the department) and work toward the global Internet (e.g. the campus, then the GigaPoP or regional network provider, and finally the national ISP, such as Internet2). Having results from multiple diagnostic servers will give everybody a better perspective on the problem.
Do NOT attempt to do detailed path debugging unless you have access to both the physical network (e.g. keys to the closets) and the configurations of the switches and routers (e.g. passwords), as well as the details of the network design. Modern networking gear can have a complex logical (virtual) topology that is entirely different than the physical topology. Unless you know exactly how that data flows through the hardware, you cannot locate flaws using intuitive debugging techniques.
If you have access to the physical network and configurations, the easiest way to debug the path using an NPAD diagnostic server is to connect a portable diagnostic client to various places in the network, either by physically carrying a laptop to various wiring hubs or by connecting it logically by reconfiguring vLANs.
In the future, we plan to support a standalone version of pathdiag that does not use the web-based client-server framework described in this document. This “expert mode” will permit much greater flexibility in placing testers and targets at arbitrary locations in the network, at the expense of requiring significantly more expertise to configure and deploy.
Often tester flaws are not persistent, and will not be repeated on later runs of the same tests. If they do persist, flaws that seem to be related to this particular server (e.g. server bottlenecks) should be reported to the site contact for the server. Flaws that may indicate oversights or bugs in the tester itself (e.g. messages about unexpected events) should be reported to email@example.com. In any case, we periodically retrieve results from public NPAD diagnostic servers and inspect the reports for accuracy. We pay particular attention to all reports indicating tester problems.
If the target and path both pass all tests, you should be done, and if you are lucky, your application will work. If not, you need to test the path with a traditional end-to-end diagnostic tool (e.g. iperf or ttcp). If the traditional diagnostic test fails:
- Think carefully about any ignored warnings. Review the detailed help for each message.
- Think carefully about caveats in the detailed help for the pass messages.
- Report it to us at email@example.com. We are not aware of any undocumented “false pass” results and would like to eliminate or document any that are found.
If the traditional end-to-end diagnostic test passes:
- The application itself may not be delay tolerant. It is very difficult to write an application with any sort of complicated internal controls that is completely delay tolerant. To do so requires that the application be structured such that all three elements (the sender, the receiver, and the network) are fully overlapped, that is, all three have to be busy at the same time. Furthermore, since the network delay is largely determined by the speed of light and the distance between the endpoints, the network has to carry a variable amount of data (depending on the distance) to match the performance of the sender and receiver. Systematically exploring this problem will be the subject of a future page on application diagnosis. In the short term the only method is to be aware of application reputations: is this application known to work well over any long path? If the answer is no, and you fully pass both pathdiag and a classical end-to-end diagnostic, then chances are that the application itself is flawed and would not function over a perfect network.
- If you received a warning about “Insufficient buffering (queue space)”, it is possible that the traditional end-to-end diagnostic tool causes bursts that fit within the router or switch queue, but the end-to-end application causes larger bursts and overflows the queue. Although we suspect that such cases are common, they are very hard to observe in the field. Please try to collect a packet trace and contact us at email@example.com for further investigation.
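The “variable amount of data” mentioned above is the bandwidth-delay product: to keep the sender, network, and receiver all busy at once, roughly rate × RTT bytes must be in flight (and held in TCP buffers). A minimal sketch with a hypothetical function name and example numbers:

```python
def bytes_in_flight(target_rate_bps, rtt_s):
    """Bandwidth-delay product: the data that must be in flight (and
    held in TCP buffers) to sustain the target rate over this RTT."""
    return target_rate_bps * rtt_s / 8

print(bytes_in_flight(90e6, 0.001))  # ~11 KB on a 1 ms path
print(bytes_in_flight(90e6, 0.050))  # ~563 KB on a 50 ms path
```

An application (or TCP buffer) sized for the short path on the left can be an order of magnitude too small for the long path on the right, even though every component passes its local tests.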
Glossary

- End-system
- The computer system at one end of a network connection or path. While this term can encompass any device that can be connected to the network, in this document it most frequently refers to a PC or computer system used by end-users.
- End-user
- A network or application user who is an expert in something other than networking, computer systems or network applications – a typical user.
- End-to-end path (or test)
- The path all the way from one end-system to another.
- Flaw
- An imperfection, often concealed, that impairs soundness (www.dictionary.com).
Any defect in hardware / software / configuration with respect to the network connection of a host or a network component such as a switch or a router.
- Last mile
- The part of the network that goes from a host to a high speed backbone such as Internet2, ESnet, etc.
- “The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm”, a paper by Matthew Mathis, Jeffrey Semke, Jamshid Mahdavi, and Teunis Ott, Computer Communications Review, volume 27, number 3, July 1997, that introduced the one over the square root of the loss rate model for TCP performance.
- Pathdiag server (or service)
- A web wrapper that makes pathdiag easy to run with no requirement to install software on the local machine
- Pathdiag client
- A small program that a user uses to invoke a test from a pathdiag server back to the user’s machine. If you are using the directions on this page, the client is generally your web browser, which is also the test target.
- Pathdiag (stand alone)
- The pathdiag tool that can be run without the server. This will require a test host with a web100 kernel and other supporting software, and will be covered under a future document on advanced pathdiag techniques.
- (Path) section
- A part of an end-to-end path. The first step to debugging a long network path is often determining which section has a flaw.
- Performance Measurement Point
- A “landmark” along a long network path, used to determine which section of the long path has a flaw, by providing a stable, well known end-system for testing.
- Single Symptom
- Situation in which many different varieties of flaws at various locations all have the same symptom, reduced performance.
- Symptom
- A characteristic sign or indication of the existence of something not being right.
Something a user may observe in the presence of a flaw that indicates something is wrong, but does not identify the location or the nature of the problem.
- Symptom Scaling
- Situation in which a symptom caused by a flaw is clearly observable on a long path but almost undetectable when tested on a short path. The observable symptom scales with the Round Trip Time (RTT) of the path.
This can be a serious impediment to diagnosing the problem, because testing on a short path is easy and can be done in a controlled environment, whereas testing on a long path introduces many unknown variables beyond the control of any one organization.
- Receiver Window
- The portion of the TCP protocol that implements flow control. When the receiving application slows down, it signals the sending application by closing the receiver window. Note that the receiver window is actually the amount of free space in the TCP receiver’s buffers, and is therefore constrained to be smaller than the receiver’s TCP buffer size.
- Target
- Pathdiag tests the network using a TCP connection between the tester and target. If you are using the directions on this page, the target is always the same as the pathdiag client.
- Target data rate
- The user specified data rate, which is the goal for the application over the entire end-to-end path.
- Target round trip time (RTT)
- The user specified round trip time of the entire end-to-end path.
- TCP (socket) buffer size
- The amount of buffer space that TCP is permitted to use to store unacknowledged data (on the sending side) or undelivered data (on the receiving side).
- Tester
- Pathdiag runs in the tester to test the path between the tester and target. If you are using the directions on this page, the tester is always the same as the pathdiag server.
About NPAD

Network Path and Application Diagnosis is a joint project of the PSC and NCAR, funded under NSF grant ANI-0334061. This project is focused on using Web100 and other methods to extend fairly standard diagnostic techniques to compensate for the “symptom scaling” that leads to false positive diagnostic results on short paths.

Matt Mathis, John Heffner, and Raghu Reddy
Please send comments and suggestions to firstname.lastname@example.org