Chris Tracy
Group Lead (acting)

Chris Tracy has worked in computing and networking since 1997. Prior to joining ESnet, he was a Co-PI on the GENI MANFRED proposal and one of the systems/optical network engineers for the NSF DRAGON project, a $6.5M research program to deploy "experimental" optical networks using novel technologies and services to provide real, measurable advantages to advanced e-science applications. The program was a collaboration among the Mid-Atlantic Crossroads (MAX), the USC Information Sciences Institute East, George Mason University, and the University of Maryland, College Park. Tracy also contributed directly to Internet2's HOPI/DCN project beginning in April 2004.

He was also responsible for the deployment and operational management of the DRAGON network, a lambda-switched network comprising over 100 miles of fiber in the Washington, DC/Baltimore metro area and connecting more than seven POPs with ROADMs, OADMs, Layer 2 switches, and routers. Tracy helped with strategic planning and provided engineering support for the DRAGON, HOPI, and DCN networks. Prior to MAX, Tracy was a Senior Network Engineer for seven years at a regional ISP in the Pittsburgh, Pennsylvania area. He presented at LISA 2002 and was involved with the Pittsburgh SAGE organization. Tracy was actively involved with the SCinet planning committee from 2002 to 2006, primarily working with the IT and WAN groups.

Tracy received a Bachelor of Science degree in Computer Engineering from the University of Pittsburgh in 2001 and is currently pursuing a master’s degree in telecommunications management at the University of Maryland University College (UMUC).

Journal Articles

Jonathan B. Ajo-Franklin, Shan Dou, Nathaniel J. Lindsey, Inder Monga, Chris Tracy, Michelle Robertson, Veronica Rodriguez Tribaldos, Craig Ulrich, Barry Freifeld, Thomas Daley and Xiaoye Li, “Distributed Acoustic Sensing Using Dark Fiber for Near-Surface Characterization and Broadband Seismic Event Detection”, Nature, February 4, 2019.

Zhenzhen Yan, Chris Tracy, Malathi Veeraraghavan, Tian Jin, Zhengyang Liu, “A Network Management System for Handling Scientific Data Flows”, Journal of Network and Systems Management, October 11, 2015.

Z. Yan, M. Veeraraghavan, C. Tracy, C. Guok, “On How to Provision Virtual Circuits for Network-Redirected Large-Sized, High-Rate Flows”, International Journal on Advances in Internet Technology, vol. 6, no. 3 & 4, November 1, 2013.

Conference Papers

Verónica Rodríguez Tribaldos, Shan Dou, Nate Lindsey, Inder Monga, Chris Tracy, Jonathan Blair Ajo-Franklin, “Monitoring Aquifers Using Relative Seismic Velocity Changes Recorded with Fiber-optic DAS”, AGU Meeting, December 10, 2019.

Ranjana Addanki, Sourav Maji, Malathi Veeraraghavan, Chris Tracy, “A measurement-based study of big-data movement”, 2015 European Conference on Networks and Communications (EuCNC), July 29, 2015.

Z. Yan, M. Veeraraghavan, C. Tracy, and C. Guok, “On how to Provision Quality of Service (QoS) for Large Dataset Transfers”, Proceedings of the Sixth International Conference on Communication Theory, Reliability, and Quality of Service, April 21, 2013.


Presentations

Chris Tracy, 100G Deployment: Challenges & Lessons Learned from the ANI Prototype & SC11, NANOG 55, June 2012.

This presentation discusses the challenges and lessons learned in the deployment of the 100GigE ANI Prototype network and the support of 100G circuit services during SC11 in Seattle. Interoperability, testing, measurement, debugging, and operational issues at both the optical layer and Layers 2/3 are addressed. Specific topics include: (1) 100G pluggable optics: options, support, and standardization issues; (2) factors negatively affecting 100G line-side transmission; (3) saturation testing and measurement with hosts connected at 10G; (4) debugging and fault isolation through creative use of loops/circuit services; (5) examples of interoperability problems in a multi-vendor environment; and (6) a case study: transport of 2x100G waves to SC11.

Chris Tracy, Introduction to OpenFlow: Bringing Experimental Protocols to a Network Near You, NANOG 50, Atlanta, October 4, 2010.

Chris Tracy, Eli Dart, Science DMZs: Understanding their role in high-performance data transfers, CANS 2010, September 20, 2010.


Reports

Jason Zurawski, Eli Dart, Zach Harlan, Carol Hawk, John Hess, Justin Hnilo, John Macauley, Ramana Madupu, Ken Miller, Christopher Tracy, Andrew Wiedlea, “Biological and Environmental Research Network Requirements Review Final Report”, Report, September 11, 2023, LBNL-2001542.

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between August 2022 and April 2023, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized an ESnet requirements review of BER-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BER ESS program in order to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings helped prepare case study authors for this task and provided guidance on how the review would proceed in a virtual format.