Jason Zurawski

Science Engagement Engineer
Science Engagement Group

Jason Zurawski is a Science Engagement Engineer at the Energy Sciences Network (ESnet) in the Scientific Networking Division of the Computing Sciences Directorate of Lawrence Berkeley National Laboratory. ESnet is the high performance networking facility of the US Department of Energy Office of Science. ESnet's mission is to enable those aspects of the DOE Office of Science research mission that depend on high performance networking for success. Jason's primary responsibilities include working with members of the research community to identify the role of networking in scientific workflows, evaluate current requirements, and suggest improvements for future innovations.

Jason has worked in computing and networking since 2004. He earned a B.S. in Computer Science & Engineering from The Pennsylvania State University in 2002 and an M.S. in Computer and Information Science from the University of Delaware in 2007. He has previously worked for the University of Delaware and Internet2. Jason resides and works in Bloomington, IN, and may be reached via email at zurawski@es.net.

Journal Articles

L. Zuo, M. Zhu, C. Wu and J. Zurawski, “Fault-tolerant Bandwidth Reservation Strategies for Data Transfers in High-performance Networks”, Computer Networks, February 1, 2017, 113:1-16,

Swany M., Brown A., Zurawski J., “A General Encoding Framework for Representing Network Measurement and Topology Data”, Concurrency and Computation: Practice and Experience, 2009, 21:1069–1086,

J. Zurawski, D. Wang, “Fault-tolerance schemes for clusterheads in clustered mesh networks”, International Journal of Parallel, Emergent and Distributed Systems, 2008, 23:271–287,

Conference Papers

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at the GM/CA Macromolecular Crystallography Beamlines at the Advanced Photon Source”, The 11th New Opportunities for Better User Group Software (NOBUGS) Conference, Copenhagen, Denmark, October 1, 2016,

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at GM/CA@APS: Pilatus3 6M and Eiger 16M”, 12th International Conference on Biology and Synchrotron Radiation (BSR-16), Palo Alto, CA, August 1, 2016,

Shawn McKee, Marian Babik, Simone Campana, Tony Wildish, Joel Closier, Costin Grigoras, Ilija Vukotic, Michail Salichos, Kaushik De, Vincent Garonne, Jorge Alberto Diaz Cruz, Alessandra Forti, Christopher John Walker, Duncan Rand, Alessandro De Salvo, Enrico Mazzoni, Ian Gable, Frederique Chollet, Hsin Yen Chen, Ulf Bobson Severin Tigerstedt, Guenter Duckeck, Andreas Petzold, Fernando Lopez Munoz, Josep Flix, John Shade, Michael O'Connor, Volodymyr Kotlyar, Bruno Heinrich Hoeft, Jason Zurawski, “Integrating network and transfer metrics to optimize transfer efficiency and experiment workflows”, 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa Japan, April 13, 2015,

Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, and Jason Zurawski, “The Science DMZ: A Network Design Pattern for Data-Intensive Science”, SC13: The International Conference for High Performance Computing, Networking, Storage and Analysis, Best Paper Nominee, Denver, CO, USA, ACM. DOI:10.1145/2503210.2503245, November 19, 2013, LBNL 6366E.

The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, which together create an optimized network environment for science. We describe use cases from universities, supercomputing centers, and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.

Campana S., Bonacorsi D., Brown A., Capone E., De Girolamo D., Fernandez Casani A., Flix Molina J., Forti A., Gable I., Gutsche O., Hesnaux A., Liu L., Lopez Munoz L., Magini N., McKee S., Mohammad K., Rand D., Reale M., Roiser S., Zielinski M., and Zurawski J., “Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology”, 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), October 2013,

Jason Zurawski, Sowmya Balasubramanian, Aaron Brown, Ezra Kissel, Andrew Lake, Martin Swany, Brian Tierney, Matt Zekauskas, “perfSONAR: On-board Diagnostics for Big Data”, 1st Workshop on Big Data and Science: Infrastructure and Services Co-located with IEEE International Conference on Big Data 2013 (IEEE BigData 2013), October 6, 2013,

Gunter D., Kettimuthu R., Kissel E., Swany M., Yi J., Zurawski J., “Exploiting Network Parallelism for Improving Data Transfer Performance”, IEEE/ACM Annual SuperComputing Conference (SC12) Companion Volume, Salt Lake City, Utah, USA, November 2012,

Wu, Q., Yun, D., Zhu, M., Brown, P., and Zurawski, J., “A Workflow-based Network Advisor for Data Movement with End-to-end Performance Optimization”, The Seventh Workshop on Workflows in Support of Large-Scale Science (WORKS12), Salt Lake City, Utah, USA, November 2012,

Zurawski J., Ball R., Barczyk A., Binkley M., Boote J., Boyd E., Brown A., Brown R., Lehman T., McKee S., Meekhof B., Mughal A., Newman H., Rozsa S., Sheldon P., Tackett A., Voicu R., Wolff S., and Yang X., “The DYNES Instrument: A Description and Overview”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

McKee S., Lake A., Laurens P., Severini H., Wlodek T., Wolff S., and Zurawski J., “Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

Zurawski J., Boyd E., Lehman T., McKee S., Mughal A., Newman H., Sheldon P., Wolff S., and Yang X., “Scientific data movement enabled by the DYNES instrument”, Proceedings of the first international workshop on Network-aware data management (NDM ’11), Seattle, WA, USA, November 2011,

Swany M., Portnoi M., Zurawski J., “Information services algorithm to heuristically summarize IP addresses for distributed, hierarchical directory”, 11th IEEE/ACM International Conference on Grid Computing (Grid2010), 2010,

Tierney B., Metzger J., Boote J., Brown A., Zekauskas M., Zurawski J., Swany M., Grigoriev M., “perfSONAR: Instantiating a Global Network Measurement Framework”, 4th Workshop on Real Overlays and Distributed Systems (ROADS’09) Co-located with the 22nd ACM Symposium on Operating Systems Principles (SOSP), January 1, 2009, LBNL LBNL-1452

Grigoriev M., Boote J., Boyd E., Brown A., Metzger J., DeMar P., Swany M., Tierney B., Zekauskas M., Zurawski J., “Deploying distributed network monitoring mesh for LHC Tier-1 and Tier-2 sites”, 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), January 1, 2009,

Zurawski, J., Swany M., and Gunter D., “A Scalable Framework for Representation and Exchange of Network Measurements”, 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TridentCom 2006), Barcelona, Spain, 2006,

Hanemann A., Boote J., Boyd E., Durand J., Kudarimoti L., Lapacz R., Swany M., Trocha S., and Zurawski J., “PerfSONAR: A Service-Oriented Architecture for Multi-Domain Network Monitoring”, International Conference on Service Oriented Computing (ICSOC 2005), Amsterdam, The Netherlands, 2005,

Wang D., Zurawski J., “Fault-Tolerance Schemes for Hierarchical Mesh Networks”, The 6th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT 2005), Dalian, China, 2005,

Zurawski J., Swany M., Beck M. and Ding Y., “Logistical Multicast for Data Distribution”, Proceedings of CCGrid, Workshop on Grids and Advanced Networks 2005 (GAN05), Cardiff, Wales, 2005,

Presentation/Talks

Jason Zurawski, Kenneth Miller, EPOC Office Hours, The 2023 NSF Campus Cyberinfrastructure PI Workshop, September 28, 2023,

Jason Zurawski, Jennifer Schopf, Doug Southworth, Using NetSage to Support ACCESS, Internet2 Technology Exchange 2023, September 20, 2023,

Jason Zurawski, Science Requirements to Support NOAA & NIST, Alaska Region Technology Interchange Consortium (ARTIC), September 11, 2023,

Jason Zurawski, Kenneth Miller, EPOC Report on NIST Science Drivers, NIST, August 28, 2023,

Jason Zurawski, Jennifer Schopf, Nathaniel Mendoza, Doug Southworth, EPOC Support for Cyberinfrastructure and Data Movement, PEARC 2023, July 17, 2023,

Jason Zurawski, Kenneth Miller, "Science DMZ Architecture", "Data Transfer Hardware", "Science DMZ Security Policy", "perfSONAR / Measurement", and "NetSage Network Visibility", Cyberinfrastructure for Research Data Management Workshop, May 23, 2023,

Jason Zurawski, Jennifer Schopf, Engagement and Performance Operations Center (EPOC) Support to Share and Collaborate, Internet2 Community Exchange 2023, May 10, 2023,

Jason Zurawski, Kenneth Miller, Fasterdata DTN Framework, Globus World 2023, April 26, 2023,

Jason Zurawski, ESnet & EPOC: Support for Remote Scientific Use Cases, NWAVE Stakeholder and Science Engagement Summit, March 29, 2023,

Jason Zurawski, Kenneth Miller, "Introduction to Science DMZ, Science Data Movement", "BGP Essentials", "Advanced BGP Concepts", and "Network Monitoring and perfSONAR", UCF / FLR Workshop on Networking Topics, February 16, 2023,

Jason Zurawski, Jennifer Schopf, NetSage and the NRP/CENIC, National Research Platform 2023 Conference (4NRP), February 8, 2023,

Jason Zurawski, Jennifer Schopf, EPOC and NetSage for WestNet, WestNet Winter 2023 Member Meeting, January 23, 2023,

Jason Zurawski, Bridging the Technical Gap: Science Engagement at ESnet, Great Plains Network Annual Meeting, May 28, 2015,

Jason Zurawski, Network Monitoring with perfSONAR, BioTeam & ESnet Webinar, May 18, 2015,

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, May 8, 2015,

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, PNWGP Board Meeting, April 21, 2015,

Jason Zurawski, The perfSONAR Effect: Changing the Outcome of Networks by Measuring Them, 2015 KINBER Annual Conference, April 16, 2015,

Jason Zurawski, Improving Scientific Outcomes at the APS with a Science DMZ, Globus World 2015, April 15, 2015,

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, Southern Partnership in Advanced Networking, April 9, 2015,

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, Southern Partnership in Advanced Networking, April 8, 2015,

Jason Zurawski, Science DMZ Architecture and Security, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, April 3, 2015,

Joe Metzger, Jason Zurawski, ESnet's LHCONE Service, 2015 OSG All Hands Meeting, March 23, 2015,

Jason Zurawski, perfSONAR and Network Monitoring, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, March 13, 2015,

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, 2015 Quilt Winter Meeting, February 11, 2015,

Jason Zurawski, Wagging the Dog: Determining Network Requirements to Drive Modern Network Design, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, February 6, 2015,

Jason Zurawski, perfSONAR at 10 Years: Cleaning Networks & Disrupting Operation, perfSONAR Focused Technical Workshop, January 22, 2015,

Jason Zurawski, Science Engagement: A Non-Technical Approach to the Technical Divide, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, January 16, 2015,

Jason Zurawski, The Science DMZ and the CIO: Data Intensive Science and the Enterprise, RMCMOA Workshop, January 13, 2015,

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, New Mexico Technology in Education (NMTIE) Cyber Infrastructure Day, November 19, 2014,

Jason Zurawski, Science Engagement: A Non-Technical Approach to the Technical Divide, Cyber Summit 2014: Crowdsourcing Innovation, September 25, 2014,

Jason Zurawski, Mary Hester, Of Mice and Elephants: Supporting Research with the Science DMZ and Software Defined Networking, Cyber Summit 2014: Crowdsourcing Innovation, September 24, 2014,

Jason Zurawski, FAIL-transfer: Removing the Mystery of Network Performance from Scientific Data Movement, XSEDE Campus Champions Webinar, August 20, 2014,

Jason Zurawski, A Brief Overview of the Science DMZ, Open Science Grid Campus Grids Webinar, May 23, 2014,

Jason Zurawski, Brian Tierney, Mary Hester, The Role of End-user Engagement for Scientific Networking, TERENA Networking Conference (TNC), May 20, 2014,

Jason Zurawski, Brian Tierney, An Overview in Emerging (and not) Networking Technologies, TERENA Networking Conference (TNC), May 19, 2014,

Jason Zurawski, Fundamentals of Data Movement Hardware, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, Essentials of perfSONAR, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, The perfSONAR Project at 10 Years: Status and Trajectory, GN3 (GÉANT) NA3, Task 2 - Campus Network Monitoring and Security Workshop, April 25, 2014,

Jason Zurawski, Network and Host Design to Facilitate High Performance Data Transfer, Globus World 2014, April 15, 2014,

Jason Zurawski, Brian Tierney, ESnet perfSONAR Update, 2014 Winter ESnet Site Coordinators Committee (ESCC) Meeting, February 25, 2014,

Jason Zurawski, Security and the perfSONAR Toolkit, Second NSF Workshop on perfSONAR based Multi-domain Network Performance Measurement and Monitoring (pSW 2014), February 21, 2014,

Overview of recent security breaches and practices for the perfSONAR Toolkit. 

Jason Zurawski, The Science DMZ - Architecture, Monitoring Performance, and Constructing a DTN, Operating Innovative Networks (OIN), October 3, 2013,

Jason Zurawski, Kathy Benninger, Network Performance Tutorial featuring perfSONAR, XSEDE13: Gateway to Discovery, July 22, 2013,

Jason Zurawski, A Completely Serious Overview of Network Performance for Scientific Networking, Focused Technical Workshop: Network Issues for Life Sciences Research, July 18, 2013,

Jason Zurawski, Site Performance Measurement & Monitoring Best Practices, 2013 Summer ESnet Site Coordinators Committee (ESCC) Meeting, July 16, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, SSERCA / FLR Summit, June 14, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Great Plains Network Annual Meeting, May 29, 2013,

Jason Zurawski, Matt Lessins, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Merit Member Conference 2013, May 15, 2013,

Lauren Rotman, Jason Zurawski, Building User Outreach Strategies: Challenges & Best Practices, Internet2 Annual Meeting, April 22, 2013,

Jason Zurawski, Network Tools Tutorial, Internet2 Annual Meeting, April 11, 2013,

Jason Zurawski, Networking Potpourri, OSG All Hands Meeting, March 11, 2013,

Jason Zurawski, Debugging Network Performance With perfSONAR, eduPERT Performance U! Winter school, March 6, 2013,

Eli Dart, Brian Tierney, Raj Kettimuthu, Jason Zurawski, Achieving the Science DMZ, January 13, 2013,

Tutorial at TIP2013, Honolulu, HI

  • Part 1: Architecture and Security
  • Part 2: Data Transfer Nodes and Data Transfer Tools
  • Part 3: perfSONAR

Reports

Eli Dart, Jason Zurawski, Carol Hawk, Benjamin Brown, Inder Monga, “ESnet Requirements Review Program Through the IRI Lens”, LBNL, October 16, 2023, LBNL 2001552

The Department of Energy (DOE) ensures America’s security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions. The DOE’s Office of Science (SC) delivers groundbreaking scientific discoveries and major scientific tools that transform our understanding of nature and advance the energy, economic, and national security of the United States. The SC’s programs advance DOE mission science across a wide range of disciplines and have developed the research infrastructure needed to remain at the forefront of scientific discovery.

The DOE SC’s world-class research infrastructure — exemplified by the 28 SC scientific user facilities — provides the research community with premier observational, experimental, computational, and network capabilities. Each user facility is designed to provide unique capabilities to advance core DOE mission science for its sponsor SC program and to stimulate a rich discovery and innovation ecosystem.

Research communities gather and flourish around each user facility, bringing together diverse perspectives. A hallmark of many facilities is the large population of students, postdoctoral researchers, and early-career scientists who contribute as full-fledged users. These facility staff and users collaborate over years to devise new approaches to utilizing the user facility’s core capabilities. The history of the SC user facilities has many examples of wildly inventive researchers challenging operational orthodoxy to pioneer new vistas of discovery; for example, the use of the synchrotron X-ray light sources for study of proteins and other large biological molecules. This continual reinvention of the practice of science — as users and staff forge novel approaches expressed in research workflows — unlocks new discoveries and propels scientific progress.

Within this research ecosystem, the high performance computing (HPC) and networking user facilities stewarded by SC’s Advanced Scientific Computing Research (ASCR) program play a dynamic cross-cutting role, enabling complex workflows demanding high performance data, networking, and computing solutions. The DOE SC’s three HPC user facilities and the Energy Sciences Network (ESnet) high-performance research network serve all of the SC’s programs as well as the global research community. The Argonne Leadership Computing Facility (ALCF), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF) conceive, build, and provide access to a range of supercomputing, advanced computing, and large-scale data-infrastructure platforms, while ESnet interconnects DOE SC research infrastructure and enables seamless exchange of scientific data. All four facilities operate testbeds to expand the frontiers of computing and networking research. Together, the ASCR facilities enterprise seeks to understand and meet needs and requirements across SC and DOE domain science programs and priority efforts, as highlighted by the formal requirements reviews (RRs) methodology.

In recent years, the research communities around the SC user facilities have begun experimenting with and demanding solutions integrated with HPC and data infrastructure. This rise of integrated-science approaches is documented in many community and high-level government reports. At the dawn of the era of exascale science and the acceleration of artificial intelligence (AI) innovation, there is a broad need for integrated computational, data, and networking solutions.

In response to these drivers, DOE has developed a vision for an Integrated Research Infrastructure (IRI): To empower researchers to meld DOE’s world-class research tools, infrastructure, and user facilities seamlessly and securely in novel ways to radically accelerate discovery and innovation.

The IRI vision is fundamentally about establishing new data-management and computational paradigms within which DOE SC user facilities and their research communities work together to improve existing capabilities and create new possibilities by building bridges across traditional silos. Implementation of IRI solutions will give researchers simple and powerful tools with which to implement multi-facility research data workflows.

In 2022, SC leadership directed the Advanced Scientific Computing Research (ASCR) program to conduct the Integrated Research Infrastructure Architecture Blueprint Activity (IRI ABA) to produce a reference framework to inform a coordinated, SC-wide strategy for IRI. This activity convened the SC science programs and more than 150 DOE national laboratory experts from all 28 SC user facilities across 13 national laboratories to consider the technological, policy, and sociological challenges to implementing IRI.

Through a series of cross-cutting sprint exercises facilitated by the IRI ABA leadership group and peer facilitators, participants produced an IRI Framework based on the IRI Vision and comprising:

  • IRI Science Patterns spanning DOE science domains;
  • IRI Practice Areas needed for implementation;
  • IRI blueprints that connect Patterns and Practice Areas;
  • Overarching principles for realizing the DOE-wide IRI ecosystem.

The resulting IRI framework and blueprints provide the conceptual foundations to move forward with organized, coordinated DOE implementation efforts. The next step is to identify urgencies and ripe areas for focused efforts that uplift multiple communities.

Upon completion of the IRI ABA framework, ESnet applied the IRI Science Patterns lens and undertook a meta-analysis of ESnet’s Requirements Reviews (RRs), the core strategic planning documents that animate the multiyear partnerships between ESnet and five of the DOE SC programs. Between 2019 and 2023, ESnet completed a new round of RRs with the following SC programs: Nuclear Physics (2019-20), High Energy Physics (2020-21), Fusion Energy Sciences (2021-22), Basic Energy Sciences (2021-22), and Biological and Environmental Research (2022-23). Together these ESnet RRs provide a rich trove of insights into opportunities for immediate IRI progress and investment.

Our meta-analysis of 74 high-priority case studies reveals that:

  • There are a significant number of research workflows spanning materials science, fusion energy, nuclear physics, and biological science that have a similar structure. Creation of common software components to improve these workflows’ performance and scalability will benefit researchers in all of these areas.
  • There is broad opportunity to accelerate scientific productivity and scientific output across DOE facilities by integrating them with each other and with high performance computing and networking.
  • The ESnet RRs’ blending of retrospective and prospective insight affirms that the IRI patterns are persistent across time and likely to persist into the future, offering value as a basis for analysis and strategic planning going forward.

Jason Zurawski, Eli Dart, Zach Harlan, Carol Hawk, John Hess, Justin Hnilo, John Macauley, Ramana Madupu, Ken Miller, Christopher Tracy, Andrew Wiedlea, “Biological and Environmental Research Network Requirements Review Final Report”, Report, September 11, 2023, LBNL LBNL-2001542

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between August 2022 and April 2023, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized an ESnet requirements review of BER-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BER ESS program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

Jason Zurawski, Jennifer Schopf, “National Institute of Standards and Technology Requirements Analysis Report”, Lawrence Berkeley National Laboratory, April 21, 2023, LBNL LBNL-2001525

Jason Zurawski, Jennifer Schopf, Doug Southworth, Austin Gamble, Byron Hicks, Amy Schultz, “St. Mary’s University Requirements Analysis Report”, Lawrence Berkeley National Laboratory, January 16, 2023, LBNL LBNL-2001503

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Andrew Wiedlea, “High Energy Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, December 22, 2022, LBNL LBNL-2001492

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

In April 2022, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for the review included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about the group’s relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

Jason Zurawski, Dale Carder, Matthias Graf, Carol Hawk, Aaron Holder, Dylan Jacob, Eliane Lessner, Ken Miller, Cody Rotermund, Thomas Russell, Athena Sefat, Andrew Wiedlea, “2022 Basic Energy Sciences Network Requirements Review Final Report”, Report, December 2, 2022, LBNL LBNL-2001490

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between March and September 2022, ESnet and the Office of Basic Energy Sciences (BES) of the DOE SC organized an ESnet requirements review of BES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Eli Dart, Ken Miller, Lauren Rotman, Andrew Wiedlea, “ARIES Network Requirements Review”, Report, November 28, 2022, LBNL LBNL-2001476

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

On May 1, 2021, ESnet and the DOE Office of Energy Efficiency and Renewable Energy (EERE), organized an ESnet requirements review of the ARIES (Advanced Research on Integrated Energy Systems) platform. Preparation for this event included identification of key stakeholders to the process: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents in order to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Ben Brown, Eli Dart, Carol Hawk, Saswata Hier-Majumder, Josh King, John Mandrekas, Ken Miller, William Miller, Lauren Rotman, Andrew Wiedlea, “2021 Fusion Energy Sciences Network Requirements Review”, May 23, 2022, LBNL 2001462

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

ESnet is widely regarded as a global leader in the research and education networking community. Throughout 2021, ESnet and the Office of Fusion Energy Sciences (FES) of the DOE SC organized an ESnet requirements review of FES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about their relationship to the FES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Hans Addleman, Ken Miller, “University of Central Florida Campus-Wide Deep Dive”, LBNL Report, August 20, 2021, LBNL LBNL-2001419

Jason Zurawski, Hans Addleman, Ken Miller, Doug Southworth, “NOAA National Centers for Environmental Information Fisheries Acoustics Archive Network Deep Dive”, LBNL Report, August 12, 2021, LBNL LBNL-2001417

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Lauren Rotman, Andrew Wiedlea, “2020 High Energy Physics Network Requirements Review Final Report”, ESnet Network Requirements Review, June 29, 2021, LBNL LBNL-2001398

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Throughout 2020, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for this event included identification of key stakeholders: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents about their relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

Jason Zurawski, Benjamin Brown, Eli Dart, Ken Miller, Gulshan Rai, Lauren Rotman, Paul Wefel, Andrew Wiedlea, Editors, “Nuclear Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, September 2, 2020, LBNL LBNL-2001381

Jason Zurawski, Jennifer Schopf, Hans Addleman, “University of Wisconsin-Madison Campus-Wide Deep Dive”, LBNL Report, May 26, 2020, LBNL LBNL-2001325

Jason Zurawski, Jennifer Schopf, Hans Addleman, Scott Chevalier, George Robb, “Great Plains Network - Kansas State University Agronomy Application Deep Dive”, LBNL Report, November 11, 2019, LBNL LBNL-2001321

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “University of Cincinnati Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001320

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Trinity University Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001319

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, Scott Chevalier, “Purdue University Application Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001318

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Arcadia University Bioinformatics Application Deep Dive”, LBNL Report, July 8, 2019, LBNL LBNL-2001317

Jason Zurawski, Eli Dart, Lauren Rotman, Paul Wefel, Editors, “Nuclear Physics Network Requirements Review 2019 - Final Report”, ESnet Network Requirements Review, May 8, 2019, LBNL LBNL-2001281

Chevalier, S., Schopf, J. M., Miller, K., Zurawski, J., “Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure”, July 1, 2016, LBNL 1005797

Today’s science collaborations depend on reliable, high performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets.

The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends.

This work explores low-cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly and reduced capital and operating expenditures. We present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.

Eli Dart, Mary Hester, and Jason Zurawski, Editors, “Biological and Environmental Research Network Requirements Review 2015 - Final Report”, ESnet Network Requirements Review, September 18, 2015, LBNL 1004370

Eli Dart, Mary Hester, Jason Zurawski, “Advanced Scientific Computing Research Network Requirements Review - Final Report 2015”, ESnet Network Requirements Review, April 22, 2015, LBNL 1005790

Eli Dart, Mary Hester, Jason Zurawski, “Basic Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, September 2014, LBNL 6998E

Eli Dart, Mary Hester, Jason Zurawski, “Fusion Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, August 2014, LBNL 6975E

Eli Dart, Mary Hester, Jason Zurawski, Editors, “High Energy Physics and Nuclear Physics Network Requirements - Final Report”, ESnet Network Requirements Workshop, August 2013, LBNL 6642E

J. van der Ham, F. Dijkstra, R. Łapacz, J. Zurawski, “Network Markup Language Base Schema version 1”, Open Grid Forum, GFD-R-P.206, 2013,

Eli Dart, Brian Tierney, editors, “Fusion Energy Network Requirements Workshop, December 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5905E

Eli Dart, Lauren Rotman, Brian Tierney, editors, “Nuclear Physics Network Requirements Workshop, August 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5518E

Eli Dart, Brian Tierney, editors, “Basic Energy Sciences Network Requirements Workshop, September 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4363E

Office of Basic Energy Sciences, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD — September 22 and 23, 2010

Participants and Contributors: Alan Biocca, LBNL (Advanced Light Source); Rich Carlson, DOE/SC/ASCR (Program Manager); Jackie Chen, SNL/CA (Chemistry/Combustion); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Jim Davenport, DOE/SC/BES (BES Program); Alexander Gaenko, Ames Lab (Chemistry); Paul Kent, ORNL (Materials Science, Simulations); Monica Lamm, Ames Lab (Computational Chemistry); Stephen Miller, ORNL (Spallation Neutron Source); Chris Mundy, PNNL (Chemical Physics); Thomas Ndousse, DOE/SC/ASCR (ASCR Program); Mark Pederson, DOE/SC/BES (BES Program); Amedeo Perazzo, SLAC (Linac Coherent Light Source); Razvan Popescu, BNL (National Synchrotron Light Source); Damian Rouson, SNL/CA (Chemistry/Combustion); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Bobby Sumpter, ORNL (Computer Science and Mathematics and Center for Nanophase Materials Sciences); Brian Tierney, ESnet (Networking); Cai-Zhuang Wang, Ames Lab (Computer Science/Simulations); Steve Whitelam, LBNL (Molecular Foundry); Jason Zurawski, Internet2 (Networking)

Eli Dart, Brian Tierney, editors, “Biological and Environmental Research Network Requirements Workshop, April 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4089E

Office of Biological and Environmental Research, DOE Office of Science; Energy Sciences Network; Rockville, MD — April 29 and 30, 2010

Participants and Contributors: Kiran Alapaty, DOE/SC/BER (Atmospheric System Research); Ben Allen, LANL (Bioinformatics); Greg Bell, ESnet (Networking); David Benton, GLBRC/University of Wisconsin (Informatics); Tom Brettin, ORNL (Bioinformatics); Shane Canon, NERSC (Data Systems); Rich Carlson, DOE/SC/ASCR (Network Research); Steve Cotter, ESnet (Networking); Silvia Crivelli, LBNL (JBEI); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Narayan Desai, ANL (Networking); Richard Egan, ANL (ARM); Jeff Flick, NOAA (Networking); Ken Goodwin, PSC/NLR (Networking); Susan Gregurick, DOE/SC/BER (Computational Biology); Susan Hicks, ORNL (Networking); Bill Johnston, ESnet (Networking); Bert de Jong, PNNL (EMSL/HPC); Kerstin Kleese van Dam, PNNL (Data Management); Miron Livny, University of Wisconsin (Open Science Grid); Victor Markowitz, LBNL/JGI (Genomics); Jim McGraw, LLNL (HPC/Climate); Raymond McCord, ORNL (ARM); Chris Oehmen, PNNL (Bioinformatics/ScalaBLAST); Kevin Regimbal, PNNL (Networking/HPC); Galen Shipman, ORNL (ESG/Climate); Gary Strand, NCAR (Climate); Brian Tierney, ESnet (Networking); Susan Turnbull, DOE/SC/ASCR (Collaboratories, Middleware); Dean Williams, LLNL (ESG/Climate); Jason Zurawski, Internet2 (Networking)

Editors: Eli Dart, ESnet; Brian Tierney, ESnet

“HEP (High Energy Physics) Network Requirements Workshop, August 2009 - Final Report”, ESnet Network Requirements Workshop, August 27, 2009, LBNL LBNL-3397E

Office of High Energy Physics, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD

Participants and Contributors: Jon Bakken, FNAL (LHC/CMS); Artur Barczyk, Caltech (LHC/Networking); Alan Blatecky, NSF (NSF Cyberinfrastructure); Amber Boehnlein, DOE/SC/HEP (HEP Program Office); Rich Carlson, Internet2 (Networking); Sergei Chekanov, ANL (LHC/ATLAS); Steve Cotter, ESnet (Networking); Les Cottrell, SLAC (Networking); Glen Crawford, DOE/SC/HEP (HEP Program Office); Matt Crawford, FNAL (Networking/Storage); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Michael Ernst, BNL (HEP/LHC/ATLAS); Ian Fisk, FNAL (LHC/CMS); Rob Gardner, University of Chicago (HEP/LHC/ATLAS); Bill Johnston, ESnet (Networking); Steve Kent, FNAL (Astroparticle); Stephan Lammel, FNAL (FNAL Experiments and Facilities); Stewart Loken, LBNL (HEP); Joe Metzger, ESnet (Networking); Richard Mount, SLAC (HEP); Thomas Ndousse-Fetter, DOE/SC/ASCR (Network Research); Harvey Newman, Caltech (HEP/LHC/Networking); Jennifer Schopf, NSF (NSF Cyberinfrastructure); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Alan Stone, DOE/SC/HEP (HEP Program Office); Brian Tierney, ESnet (Networking); Craig Tull, LBNL (Daya Bay); Jason Zurawski, Internet2 (Networking)

Gunter D., Leupolt M., Tierney B., Swany M. and Zurawski J., “A Framework for the Representation of Network Measurements”, LBNL Technical Report, 2004,