Using Supercomputers to Regulate How Supercomputers Buy and Sell Stocks

With access to supercomputing through the National Science Foundation, researchers are beginning to understand how ultra-fast computer trading is changing Wall Street.

CHAMPAIGN-URBANA, October 4, 2012 — The increasing role of computers in Wall Street trading has gathered wide attention, including a September hearing before the U.S. Senate Subcommittee on Securities, Insurance and Investment. At that hearing, testimony cited as “ground-breaking” a study that a team at the University of Illinois at Urbana-Champaign described as “the first paper to explore the impact of high-frequency trading in a nanosecond environment.”

For this study, Mao Ye, assistant professor of finance, and Illinois colleagues Chen Yao and Jiading Gai used supercomputing systems through the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) program. Launched in July 2011, the program supports research by U.S. scientists and engineers in a diverse range of fields and has encouraged work in areas not traditionally associated with high-performance computing (HPC), such as the Illinois team’s study of stock trading.

Since the U.S. Securities and Exchange Commission (SEC) authorized electronic trading in 1998, trading firms have steadily increased the speed and sophistication of high-frequency trading (HFT), which over the last few years has come to dominate the market. Ye, who led the Illinois team, notes that the amount of data produced by HFT has grown so large that it strains the capacity of conventional data processing to study it.

“Fifteen years ago, trade was done by humans,” says Ye, “and you didn’t need supercomputing to understand and regulate the markets. Now the players in the trading game are superfast computers. To study them you need the same power. The size of trading data has increased exponentially, and the raw data of a day can be as large as ten gigabytes.”

This problem was highlighted by the “flash crash” of May 6, 2010. In the biggest intraday point drop in its history, the Dow Jones Industrial Average fell nearly 1,000 points, about 9 percent of its value, in roughly 20 minutes. Analysis eventually traced the crash to HFT-related glitches, but it took the SEC months, notes Ye, to analyze the data and arrive at answers.

To directly address the data problem and other questions related to HFT, Ye and colleagues turned to XSEDE, specifically the shared-memory resources of Blacklight at the Pittsburgh Supercomputing Center (PSC) and Gordon at the San Diego Supercomputer Center (SDSC). Anirban Jana, a scientist with PSC and XSEDE’s Extended Collaborative Support Services, worked with Ye to use these systems effectively.

In a study reported in July 2011, Ye and Yao, along with Maureen O’Hara of Cornell, processed prodigious quantities of NASDAQ historical market data (two years of trading) to examine how a lack of transparency in odd-lot trades (trades of fewer than 100 shares) might skew perceptions of the market. Their paper, “What’s Not There: The Odd-Lot Bias in TAQ Data,” received a “revise and resubmit” from The Journal of Finance, the top journal in the field. The study received wide attention, and as a result, in September the Financial Industry Regulatory Authority (FINRA), which regulates securities firms, reported plans to reconsider the odd-lots policy and to vote on whether to include odd lots in the “consolidated tape” that reports trade-and-quote (TAQ) data.
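At its core, the odd-lot screen is a tabulation: how much trading activity falls below the 100-share reporting cutoff and is therefore invisible in TAQ-style data. The minimal Python sketch below shows the idea on a toy trade list; the record format and field names are illustrative assumptions, not the actual NASDAQ or TAQ schema the team processed.

```python
# Illustrative only: the field names below are assumptions made for
# this example, not the actual NASDAQ/TAQ schema used in the study.

def odd_lot_bias(trades):
    """Estimate the share of trading activity that a feed omitting
    odd lots (trades of fewer than 100 shares) would never show."""
    total_trades = len(trades)
    total_shares = sum(t["shares"] for t in trades)
    odd = [t for t in trades if t["shares"] < 100]  # the odd-lot cutoff
    return {
        "odd_lot_share_of_trades": len(odd) / total_trades,
        "odd_lot_share_of_volume": sum(t["shares"] for t in odd) / total_shares,
    }

# Toy example: three round lots, two odd lots.
trades = [{"shares": s} for s in (100, 200, 500, 37, 1)]
print(odd_lot_bias(trades))
# -> 40% of trades, but only about 4.5% of volume, are odd lots
```

The toy output illustrates the bias the paper describes: a feed that drops odd lots can hide a large fraction of trades even when the missing share volume looks small.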

In more recent work, as cited in U.S. Senate testimony, Ye, Yao and Gai examined the effects of increasing trading speed from microseconds to nanoseconds. Using Gordon and Blacklight to process 55 days of NASDAQ trading data from 2010, they computed the ratio of orders cancelled to orders executed, finding evidence of a manipulative practice called “quote stuffing,” in which HFT traders place an order only to cancel it within a millisecond, with the aim of generating congestion. Their analysis provides justification for regulatory changes, such as a speed limit on orders or a fee for order cancellation.
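As a rough illustration of that measurement, the following Python sketch tallies a cancel-to-execute ratio over a stream of order messages and counts “fleeting” orders cancelled within a millisecond of placement. The simplified (timestamp, type, order ID) message format is an assumption made for the example; the study itself worked from raw NASDAQ order-level data.

```python
# Illustrative only: the message layout here is a simplification, not
# the actual NASDAQ order-level format the team parsed.

def cancel_stats(messages, fleeting_cutoff_ns=1_000_000):
    """Return the cancel-to-execute ratio and the number of 'fleeting'
    orders cancelled within the cutoff (default 1 ms = 1,000,000 ns).
    messages: iterable of (timestamp_ns, msg_type, order_id) tuples,
    where msg_type is 'add', 'cancel', or 'execute'."""
    added_at = {}                      # order_id -> placement timestamp
    cancels = executes = fleeting = 0
    for ts, msg_type, oid in messages:
        if msg_type == "add":
            added_at[oid] = ts
        elif msg_type == "execute":
            executes += 1
        elif msg_type == "cancel":
            cancels += 1
            if oid in added_at and ts - added_at[oid] <= fleeting_cutoff_ns:
                fleeting += 1
    ratio = cancels / executes if executes else float("inf")
    return ratio, fleeting

msgs = [
    (0, "add", 1), (500_000, "cancel", 1),     # cancelled after 0.5 ms
    (0, "add", 2), (2_000_000, "execute", 2),  # executed after 2 ms
]
print(cancel_stats(msgs))  # -> (1.0, 1): one cancel per execution, one fleeting order
```

Run at full scale, a screen like this is exactly the kind of job that turns 55 days of order-level data into a supercomputing problem: every message must be scanned, and cancellations matched back to placements.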

In June, The Wall Street Journal reported that trading had entered the nanosecond age. A London firm called Fixnetix announced a microchip that “prepares a trade in 740 billionths of a second,” the WSJ noted, and investment banks and trading firms are spending millions to shave infinitesimal slivers of time off their “latency,” aiming eventually at picoseconds: trading in trillionths of a second.

“While it is naive to eliminate high-frequency traders,” Ye and colleagues wrote, “it is equally naive to let the arms race of speed proceed without any restriction.”

Both of the XSEDE systems they used are “shared-memory” machines, which simplify programming for access to large quantities of data compared with distributed-memory systems. “Without XSEDE and shared memory,” says Ye, “we wouldn’t be able to effectively study these large amounts of data produced by high-frequency trading.”
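To see why the shared-memory model helps, consider this Python sketch, an illustration only and not the team’s code: a day of trading data is memory-mapped once into a single address space, and every worker scans its own slice of the same bytes, with none of the explicit partitioning and message passing a distributed-memory (e.g., MPI) version would require. The file name is hypothetical.

```python
# Illustration only, not the team's code; "nasdaq_day.bin" is a
# hypothetical file. The point is the programming model: one mapping,
# one address space, no copies or sends between workers.

import mmap
from concurrent.futures import ThreadPoolExecutor

def count_messages(path, workers=8):
    """Count newline-delimited records by letting several threads scan
    disjoint slices of a single shared memory mapping of the file."""
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        size = len(mm)
        step = size // workers + 1       # slice length per worker
        def scan(lo):                    # every worker reads the same mapping
            return mm[lo:lo + step].count(b"\n")
        with ThreadPoolExecutor(workers) as pool:
            return sum(pool.map(scan, range(0, size, step)))

# print(count_messages("nasdaq_day.bin"))
```

On a distributed-memory cluster the same scan would first require splitting the file across nodes and coordinating the partial counts by message passing; in a single shared address space, that bookkeeping disappears.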

The research reflects an emphasis of XSEDE’s Novel and Innovative Projects (NIP) program on providing allocations of time for non-traditional HPC applications. “We’re happy to see these proposals succeed,” said Sergiu Sanielevici of PSC, who leads the NIP program. “The NIP team seeks out strong projects that differ from the more typical simulation and modeling applications that have dominated HPC research in previous decades.”

 

More information: http://www.psc.edu/science/2012/trading

About XSEDE:
XSEDE, the Extreme Science and Engineering Discovery Environment, is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is a single virtual system that scientists and researchers can use to interactively share computing resources, data, and expertise. XSEDE integrates the resources and services, makes them easier to use, and helps more people use them. The five-year, $121 million project is supported by the National Science Foundation.
 

CONTACTS:

Susan McKenna
National Center for Supercomputing Applications
mckennas@ncsa.illinois.edu
217-265-5167

Michael Schneider
Pittsburgh Supercomputing Center
schneider@psc.edu
412-268-4960

Jan Zverina
San Diego Supercomputer Center
jzverina@sdsc.edu
858-534-5000