Eli Dart
Computer Systems Engineer
Science Engagement Group

Eli Dart is a network engineer in the ESnet Science Engagement Group, which seeks to use advanced networking to improve scientific productivity and science outcomes for the DOE science facilities, their users, and their collaborators. Eli is a primary advocate for the Science DMZ design pattern and works with facilities, laboratories, universities, science collaborations, and science programs to deploy data-intensive science infrastructure based on the Science DMZ model. Eli is also a key contributor to the ESnet network requirements program, which collects, synthesizes, and aggregates the networking needs of the science programs ESnet serves.

Eli has over 20 years of experience in network architecture, design, engineering, performance, and security in scientific and research environments. His primary professional interests are high-performance architectures and effective operational models for networks that support scientific missions, as well as building collaborations that help science projects make effective use of high-performance networks.

As a member of ESnet's Network Engineering Group, Eli was a primary contributor to the design and deployment of two iterations of the ESnet backbone network: ESnet4 and ESnet5. Prior to ESnet, Eli was a lead network engineer at NERSC, DOE's primary supercomputing facility, where he co-led a complete redesign of the high-performance network infrastructure there and several years of its successful operation. In addition, Eli spent 14 years, from 1997 through 2010, as a member of SCinet, the group of volunteers that builds and operates the network for the annual IEEE/ACM Supercomputing conference series. He served as Network Security Chair for SCinet for the 2000 and 2001 conferences and was a member of the SCinet routing group from 2001 through 2010. Eli holds a Bachelor of Science degree in Computer Science from the Oregon State University College of Engineering.

Journal Articles

W Bhimji, D Carder, E Dart, J Duarte, I Fisk, R Gardner, C Guok, B Jayatilaka, T Lehman, M Lin, C Maltzahn, S McKee, MS Neubauer, O Rind, O Shadura, NV Tran, P van Gemmeren, G Watts, BA Weaver, F Würthwein, “Snowmass 2021 Computational Frontier CompF4 Topical Group Report: Storage and Processing Resource Access”, Computing and Software for Big Science, vol. 7, April 2023.

The Snowmass 2021 CompF4 topical group’s scope is facilities R&D, where we consider “facilities” as the hardware and software infrastructure inside the data centers plus the networking between data centers, irrespective of who owns them and what policies are applied for using them. In other words, it includes commercial clouds, federally funded High Performance Computing (HPC) systems for all of science, and systems funded explicitly for a given experimental or theoretical program. However, we explicitly consider any data centers that are integrated into the data acquisition or trigger systems of the experiments to be out of scope here. Those systems tend to have requirements that are quite distinct from the data center functionality required for “offline” processing and storage.

Sean Peisert, William Barnett, Eli Dart, James Cuff, Robert L. Grossman, Edward Balas, Ari Berman, Anurag Shankar, Brian Tierney, “The Medical Science DMZ”, Journal of the American Medical Informatics Association, May 2, 2016.

Conference Papers

Brian Tierney, Eli Dart, Ezra Kissel, Eashan Adhikarla, “Exploring the BBRv2 Congestion Control Algorithm for use on Data Transfer Nodes”, IEEE Workshop on Innovating the Network for Data-Intensive Science (INDIS@SC 2021), St. Louis, MO, USA, November 15, 2021, IEEE, 2021, pp. 23-33.

Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, and Jason Zurawski, “The Science DMZ: A Network Design Pattern for Data-Intensive Science”, SC13: The International Conference for High Performance Computing, Networking, Storage and Analysis, Denver, CO, USA, ACM, November 19, 2013, DOI: 10.1145/2503210.2503245, LBNL 6366E. Best Paper Nominee.

The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, which together create an optimized network environment for science. We describe use cases from universities, supercomputing centers, and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.
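To make the loss sensitivity concrete, here is a minimal back-of-the-envelope sketch in Python (not taken from the paper) using the well-known Mathis et al. approximation for steady-state TCP throughput; the MSS, RTT, and loss values are illustrative assumptions.

    # Sketch only: the Mathis et al. model bounds steady-state TCP throughput
    # at roughly MSS / (RTT * sqrt(loss)). Parameter values are illustrative.
    def mathis_ceiling_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
        """Approximate single-stream TCP throughput ceiling in bits per second."""
        return (mss_bytes * 8) / (rtt_s * loss_rate ** 0.5)

    # A 1500-byte MSS path at 50 ms RTT: even rare loss dominates the ceiling.
    for loss in (1e-6, 1e-5, 1e-4, 1e-3):
        gbps = mathis_ceiling_bps(1500, 0.05, loss) / 1e9
        print(f"loss {loss:.0e}: ~{gbps:.3f} Gbps ceiling")

At these assumed numbers, a loss rate of one packet in ten thousand caps a 50 ms path at a few tens of megabits per second, which is why the Science DMZ keeps loss-inducing devices out of the data path for large transfers.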

 

William E. Johnston, Eli Dart, Michael Ernst, Brian Tierney, “Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC”, TERENA Networking Conference, June 3, 2013,

Book Chapters

William Johnston, Evangelos Chaniotakis, Eli Dart, Chin Guok, Joe Metzger, Brian Tierney, “The Evolution of Research and Education Networks and their Essential Role in Modern Science”, Trends in High Performance & Large Scale Computing, (November 1, 2008)

Published in: "Trends in High Performance & Large Scale Computing", Lucio Grandinetti and Gerhard Joubert, editors.

Presentation/Talks

Eli Dart, The Science DMZ, CDC OID/ITSO Science DMZ Workshop, April 15, 2015,

The Science DMZ and the CIO: Data Intensive Science and the Enterprise, RMCMOA Workshop, January 13, 2015,

Eli Dart, Brian Tierney, Raj Kettimuthu, Jason Zurawski, Achieving the Science DMZ, January 13, 2013,

Tutorial at TIP2013, Honolulu, HI

  • Part 1: Architecture and Security
  • Part 2: Data Transfer Nodes and Data Transfer Tools
  • Part 3: perfSONAR

 

 

Eli Dart, Network expectations, or what to tell your system administrator, ALS user group meeting tomography workshop, October 2012,

Eli Dart, Networks for Data Intensive Science Environments, BES Neutron and Photon Detector Workshop, August 2012,

Eli Dart, High Performance Networks to Enable and Enhance Scientific Productivity, WRNP 13, May 2012,

Eli Dart, Cyberinfrastructure for Data Intensive Science, Joint Techs: Internet2 Spring Member Meeting, April 2012,

Eli Dart, Network Impacts of Data Intensive Science, Ethernet Technology Summit, February 2012,

Eli Dart, Brent Draney, National Laboratory Success Stories, Joint Techs, January 24, 2012,

Reports from ESnet and National Laboratories that have successfully deployed methods to enhance their infrastructure support for data intensive science.

Joe Breen, Eli Dart, Eric Pouyoul, Brian Tierney, Achieving a Science "DMZ", Winter 2012 Joint Techs, Full day tutorial, January 22, 2012,

There are several aspects to building successful infrastructure to support data intensive science. The Science DMZ Model incorporates three key components into a cohesive whole: a high-performance network architecture designed for ease of use; well-configured systems for data transfer; and measurement hosts to provide visibility and rapid fault isolation. This tutorial will cover aspects of network architecture and network device configuration, the design and configuration of a Data Transfer Node, and the deployment of perfSONAR in the Science DMZ. Aspects of current deployments will also be discussed.
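As a companion to the measurement component described above, here is a minimal sketch (hypothetical hostname and threshold, not material from the tutorial) of the kind of active test a Science DMZ measurement host runs: a short iperf3 throughput test toward a Data Transfer Node, flagging results below an expected floor. In practice, perfSONAR schedules and archives this class of measurement automatically.

    # Illustration only: requires the iperf3 CLI on this host and an iperf3
    # server running on the target. Hostname and threshold are placeholders.
    import json
    import subprocess

    DTN = "dtn.example.org"     # hypothetical Data Transfer Node
    EXPECTED_GBPS = 5.0         # illustrative threshold, not a standard value

    result = subprocess.run(
        ["iperf3", "-c", DTN, "-t", "10", "-J"],   # 10-second test, JSON output
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9

    print(f"{DTN}: {gbps:.2f} Gbps")
    if gbps < EXPECTED_GBPS:
        print("Throughput below expected floor; check the path for packet loss.")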

Eli Dart, The Science DMZ, Winter 2011 Joint Techs, February 1, 2011,

Chris Tracy, Eli Dart, Science DMZs: Understanding their role in high-performance data transfers, CANS 2010, September 20, 2010,

Eli Dart, High Performance Data Transfer, Joint Techs, Summer 2010, July 15, 2010,

Reports

Eli Dart, Jason Zurawski, Carol Hawk, Benjamin Brown, Inder Monga, “ESnet Requirements Review Program Through the IRI Lens”, LBNL, October 16, 2023, LBNL 2001552

The Department of Energy (DOE) ensures America’s security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions. The DOE’s Office of Science (SC) delivers groundbreaking scientific discoveries and major scientific tools that transform our understanding of nature and advance the energy, economic, and national security of the United States. The SC’s programs advance DOE mission science across a wide range of disciplines and have developed the research infrastructure needed to remain at the forefront of scientific discovery.

The DOE SC’s world-class research infrastructure — exemplified by the 28 SC scientific user facilities — provides the research community with premier observational, experimental, computational, and network capabilities. Each user facility is designed to provide unique capabilities to advance core DOE mission science for its sponsor SC program and to stimulate a rich discovery and innovation ecosystem.

Research communities gather and flourish around each user facility, bringing together diverse perspectives. A hallmark of many facilities is the large population of students, postdoctoral researchers, and early-career scientists who contribute as full-fledged users. These facility staff and users collaborate over years to devise new approaches to utilizing the user facility’s core capabilities. The history of the SC user facilities has many examples of wildly inventive researchers challenging operational orthodoxy to pioneer new vistas of discovery; for example, the use of the synchrotron X-ray light sources for study of proteins and other large biological molecules. This continual reinvention of the practice of science — as users and staff forge novel approaches expressed in research workflows — unlocks new discoveries and propels scientific progress.

Within this research ecosystem, the high performance computing (HPC) and networking user facilities stewarded by SC’s Advanced Scientific Computing Research (ASCR) program play a dynamic cross-cutting role, enabling complex workflows demanding high performance data, networking, and computing solutions. The DOE SC’s three HPC user facilities and the Energy Sciences Network (ESnet) high-performance research network serve all of the SC’s programs as well as the global research community. Argonne Leadership Computing Facility (ALCF), the National Energy Research Scientific Computing Center (NERSC), and Oak Ridge Leadership Computing Facility (OLCF) conceive, build, and provide access to a range of supercomputing, advanced computing, and large-scale data-infrastructure platforms, while ESnet interconnects DOE SC research infrastructure and enables seamless exchange of scientific data. All four facilities operate testbeds to expand the frontiers of computing and networking research. Together, the ASCR facilities enterprise seeks to understand and meet the needs and requirements across SC and DOE domain science programs and priority efforts, highlighted by the formal requirements reviews (RRs) methodology.

In recent years, the research communities around the SC user facilities have begun experimenting with and demanding solutions integrated with HPC and data infrastructure. This rise of integrated-science approaches is documented in many community and high-level government reports. At the dawn of the era of exascale science and the acceleration of artificial intelligence (AI) innovation, there is a broad need for integrated computational, data, and networking solutions.

In response to these drivers, DOE has developed a vision for an Integrated Research Infrastructure (IRI): To empower researchers to meld DOE’s world-class research tools, infrastructure, and user facilities seamlessly and securely in novel ways to radically accelerate discovery and innovation.

The IRI vision is fundamentally about establishing new data-management and computational paradigms within which DOE SC user facilities and their research communities work together to improve existing capabilities and create new possibilities by building bridges across traditional silos. Implementation of IRI solutions will give researchers simple and powerful tools with which to implement multi-facility research data workflows.

In 2022, SC leadership directed the Advanced Scientific Computing Research (ASCR) program to conduct the Integrated Research Infrastructure Architecture Blueprint Activity (IRI ABA) to produce a reference framework to inform a coordinated, SC-wide strategy for IRI. This activity convened the SC science programs and more than 150 DOE national laboratory experts from all 28 SC user facilities across 13 national laboratories to consider the technological, policy, and sociological challenges to implementing IRI.

Through a series of cross-cutting sprint exercises facilitated by the IRI ABA leadership group and peer facilitators, participants produced an IRI Framework based on the IRI Vision and comprising:

  • IRI Science Patterns spanning DOE science domains;
  • IRI Practice Areas needed for implementation;
  • IRI blueprints that connect Patterns and Practice Areas;
  • Overarching principles for realizing the DOE-wide IRI ecosystem.

The resulting IRI framework and blueprints provide the conceptual foundations to move forward with organized, coordinated DOE implementation efforts. The next step is to identify urgencies and ripe areas for focused efforts that uplift multiple communities.

Upon completion of the IRI ABA framework, ESnet applied the IRI Science Patterns lens and undertook a meta-analysis of ESnet’s Requirements Reviews (RRs), the core strategic planning documents that animate the multiyear partnerships between ESnet and five of the DOE SC programs. Between 2019 and 2023, ESnet completed a new round of RRs with the following SC programs: Nuclear Physics (2019-20), High Energy Physics (2020-21), Fusion Energy Sciences (2021-22), Basic Energy Sciences (2021-22), and Biological and Environmental Research (2022-23). Together these ESnet RRs provide a rich trove of insights into opportunities for immediate IRI progress and investment.

Our meta-analysis of 74 high-priority case studies reveals that:

  • There are a significant number of research workflows spanning materials science, fusion energy, nuclear physics, and biological science that have a similar structure. Creation of common software components to improve these workflows’ performance and scalability will benefit researchers in all of these areas.
  • There is broad opportunity to accelerate scientific productivity and scientific output across DOE facilities by integrating them with each other and with high performance computing and networking.
  • The ESnet RRs’ blending of retrospective and prospective insight affirms that the IRI patterns are persistent across time and likely to persist into the future, offering value as a basis for analysis and strategic planning going forward.

 

Jason Zurawski, Eli Dart, Zach Harlan, Carol Hawk, John Hess, Justin Hnilo, John Macauley, Ramana Madupu, Ken Miller, Christopher Tracy, Andrew Wiedlea, “Biological and Environmental Research Network Requirements Review Final Report”, Report, September 11, 2023, LBNL-2001542

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between August 2022 and April 2023, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized an ESnet requirements review of BER-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BER ESS program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Andrew Wiedlea, “High Energy Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, December 22, 2022, LBNL-2001492

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

In April 2022, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for the review included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about the group’s relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

Jason Zurawski, Dale Carder, Matthias Graf, Carol Hawk, Aaron Holder, Dylan Jacob, Eliane Lessner, Ken Miller, Cody Rotermund, Thomas Russell, Athena Sefat, Andrew Wiedlea, “2022 Basic Energy Sciences Network Requirements Review Final Report”, Report, December 2, 2022, LBNL-2001490

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between March and September 2022, ESnet and the Office of Basic Energy Sciences (BES) of the DOE SC organized an ESnet requirements review of BES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Eli Dart, Ken Miller, Lauren Rotman, Andrew Wiedlea, “ARIES Network Requirements Review”, Report, November 28, 2022, LBNL-2001476

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

On May 1, 2021, ESnet and the DOE Office of Energy Efficiency and Renewable Energy (EERE) organized an ESnet requirements review of the ARIES (Advanced Research on Integrated Energy Systems) platform. Preparation for this event included identification of key stakeholders in the process: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents in order to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Ben Brown, Eli Dart, Carol Hawk, Saswata Hier-Majumder, Josh King, John Mandrekas, Ken Miller, William Miller, Lauren Rotman, Andrew Wiedlea, “2021 Fusion Energy Sciences Network Requirements Review”, May 23, 2022, LBNL 2001462

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

 

ESnet is widely regarded as a global leader in the research and education networking community. Throughout 2021, ESnet and the Office of Fusion Energy Sciences (FES) of the DOE SC organized an ESnet requirements review of FES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about their relationship to the FES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Lauren Rotman, Andrew Wiedlea, “2020 High Energy Physics Network Requirements Review Final Report”, ESnet Network Requirements Review, June 29, 2021, LBNL-2001398

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Throughout 2020, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for this event included identification of key stakeholders: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents about their relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

Jason Zurawski, Benjamin Brown, Eli Dart, Ken Miller, Gulshan Rai, Lauren Rotman, Paul Wefel, Andrew Wiedlea, Editors, “Nuclear Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, September 2, 2020, LBNL-2001381

Jason Zurawski, Eli Dart, Lauren Rotman, Paul Wefel, Editors, “Nuclear Physics Network Requirements Review 2019 - Final Report”, ESnet Network Requirements Review, May 8, 2019, LBNL-2001281

Julian Borrill, Eli Dart, Brooklin Gore, Salman Habib, Steven T. Myers, Peter Nugent, Don Petravick, Rollin Thomas, “Improving Data Mobility & Management for International Cosmology”, CrossConnects 2015 Workshop, October 2, 2015, LBNL 1001456

Eli Dart, Mary Hester, and Jason Zurawski, Editors, “Biological and Environmental Research Network Requirements Review 2015 - Final Report”, ESnet Network Requirements Review, September 18, 2015, LBNL 1004370

Eli Dart, Mary Hester, Jason Zurawski, “Advanced Scientific Computing Research Network Requirements Review - Final Report 2015”, ESnet Network Requirements Review, April 22, 2015, LBNL 1005790

Eli Dart, Mary Hester, Jason Zurawski, “Basic Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, September 2014, LBNL 6998E

Eli Dart, Mary Hester, Jason Zurawski, “Fusion Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, August 2014, LBNL 6975E

Eli Dart, Mary Hester, Jason Zurawski, Editors, “High Energy Physics and Nuclear Physics Network Requirements - Final Report”, ESnet Network Requirements Workshop, August 2013, LBNL 6642E

Eli Dart, Brian Tierney, Editors, “Biological and Environmental Research Network Requirements Workshop, November 2012 - Final Report”, November 29, 2012, LBNL-6395E

David Asner, Eli Dart, and Takanori Hara, “Belle-II Experiment Network Requirements”, October 2012, LBNL-6268E

The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab-1 of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015.
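For a rough sense of scale, the following sketch extrapolates only from the figures quoted above and assumes, purely for illustration, that data volume grows about linearly with integrated luminosity; it is not a projection taken from the report.

    # Rough scaling sketch based only on the figures quoted above; linear
    # scaling with integrated luminosity is an illustrative assumption.
    belle_data_pb = 2.0        # ~2 PB for ~1 ab^-1 recorded by Belle
    luminosity_factor = 100    # "two orders of magnitude" greater at SuperKEKB
    print(f"Comparable Belle II running would imply on the order of "
          f"{belle_data_pb * luminosity_factor:.0f} PB of data.")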

In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration’s work.

The requirements identified at the Belle II Experiment Requirements workshop are summarized in the Findings section, and are described in more detail in this report. KEK invited Belle II organizations to attend a follow-up meeting hosted by PNNL during SC12 in Salt Lake City on November 13, 2012. The notes from this meeting are in Appendix C.

Eli Dart, Brian Tierney, editors, “Advanced Scientific Computing Research Network Requirements Review, October 2012 - Final Report”, ESnet Network Requirements Review, October 4, 2012, LBNL-6109E

Eli Dart, “ESnet Requirements Workshops Summary for Sites”, ESCC Meeting, Clemson, SC, February 2, 2011,

Eli Dart, Lauren Rotman, Brian Tierney, editors, “Nuclear Physics Network Requirements Workshop, August 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL-5518E

Eli Dart, Brian Tierney, editors, “Fusion Energy Network Requirements Workshop, December 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL-5905E

Eli Dart, Brian Tierney, editors, “Biological and Environmental Research Network Requirements Workshop, April 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL-4089E

Office of Biological and Environmental Research, DOE Office of Science; Energy Sciences Network; Rockville, MD — April 29 and 30, 2010. LBNL-4089E.

Participants and Contributors: Kiran Alapaty, DOE/SC/BER (Atmospheric System Research); Ben Allen, LANL (Bioinformatics); Greg Bell, ESnet (Networking); David Benton, GLBRC/University of Wisconsin (Informatics); Tom Brettin, ORNL (Bioinformatics); Shane Canon, NERSC (Data Systems); Rich Carlson, DOE/SC/ASCR (Network Research); Steve Cotter, ESnet (Networking); Silvia Crivelli, LBNL (JBEI); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Narayan Desai, ANL (Networking); Richard Egan, ANL (ARM); Jeff Flick, NOAA (Networking); Ken Goodwin, PSC/NLR (Networking); Susan Gregurick, DOE/SC/BER (Computational Biology); Susan Hicks, ORNL (Networking); Bill Johnston, ESnet (Networking); Bert de Jong, PNNL (EMSL/HPC); Kerstin Kleese van Dam, PNNL (Data Management); Miron Livny, University of Wisconsin (Open Science Grid); Victor Markowitz, LBNL/JGI (Genomics); Jim McGraw, LLNL (HPC/Climate); Raymond McCord, ORNL (ARM); Chris Oehmen, PNNL (Bioinformatics/ScalaBLAST); Kevin Regimbal, PNNL (Networking/HPC); Galen Shipman, ORNL (ESG/Climate); Gary Strand, NCAR (Climate); Brian Tierney, ESnet (Networking); Susan Turnbull, DOE/SC/ASCR (Collaboratories, Middleware); Dean Williams, LLNL (ESG/Climate); Jason Zurawski, Internet2 (Networking)

Editors: Eli Dart, ESnet; Brian Tierney, ESnet

Eli Dart, Brian Tierney, editors, “Basic Energy Sciences Network Requirements Workshop, September 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL-4363E

Office of Basic Energy Sciences, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD — September 22 and 23, 2010

Participants and Contributors: Alan Biocca, LBNL (Advanced Light Source); Rich Carlson, DOE/SC/ASCR (Program Manager); Jackie Chen, SNL/CA (Chemistry/Combustion); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Jim Davenport, DOE/SC/BES (BES Program); Alexander Gaenko, Ames Lab (Chemistry); Paul Kent, ORNL (Materials Science, Simulations); Monica Lamm, Ames Lab (Computational Chemistry); Stephen Miller, ORNL (Spallation Neutron Source); Chris Mundy, PNNL (Chemical Physics); Thomas Ndousse, DOE/SC/ASCR (ASCR Program); Mark Pederson, DOE/SC/BES (BES Program); Amedeo Perazzo, SLAC (Linac Coherent Light Source); Razvan Popescu, BNL (National Synchrotron Light Source); Damian Rouson, SNL/CA (Chemistry/Combustion); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Bobby Sumpter, ORNL (Computer Science and Mathematics and Center for Nanophase Materials Sciences); Brian Tierney, ESnet (Networking); Cai-Zhuang Wang, Ames Lab (Computer Science/Simulations); Steve Whitelam, LBNL (Molecular Foundry); Jason Zurawski, Internet2 (Networking)

“HEP (High Energy Physics) Network Requirements Workshop, August 2009 - Final Report”, ESnet Network Requirements Workshop, August 27, 2009, LBNL-3397E

Office of High Energy Physics, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-3397E.

Participants and Contributors: Jon Bakken, FNAL (LHC/CMS); Artur Barczyk, Caltech (LHC/Networking); Alan Blatecky, NSF (NSF Cyberinfrastructure); Amber Boehnlein, DOE/SC/HEP (HEP Program Office); Rich Carlson, Internet2 (Networking); Sergei Chekanov, ANL (LHC/ATLAS); Steve Cotter, ESnet (Networking); Les Cottrell, SLAC (Networking); Glen Crawford, DOE/SC/HEP (HEP Program Office); Matt Crawford, FNAL (Networking/Storage); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Michael Ernst, BNL (HEP/LHC/ATLAS); Ian Fisk, FNAL (LHC/CMS); Rob Gardner, University of Chicago (HEP/LHC/ATLAS); Bill Johnston, ESnet (Networking); Steve Kent, FNAL (Astroparticle); Stephan Lammel, FNAL (FNAL Experiments and Facilities); Stewart Loken, LBNL (HEP); Joe Metzger, ESnet (Networking); Richard Mount, SLAC (HEP); Thomas Ndousse-Fetter, DOE/SC/ASCR (Network Research); Harvey Newman, Caltech (HEP/LHC/Networking); Jennifer Schopf, NSF (NSF Cyberinfrastructure); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Alan Stone, DOE/SC/HEP (HEP Program Office); Brian Tierney, ESnet (Networking); Craig Tull, LBNL (Daya Bay); Jason Zurawski, Internet2 (Networking)

 

“ASCR (Advanced Scientific Computing Research) Network Requirements Workshop, April 2009 - Final Report”, ESnet Networking Requirements Workshop, April 15, 2009, LBNL-2495E

Office of Advanced Scientific Computing Research, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-2495E.

Participants and Contributors: Bill Allcock, ANL (ALCF, GridFTP); Rich Carlson, Internet2 (Networking); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Brent Draney, NERSC (Networking and Security); Richard Gerber, NERSC (User Services); Mike Helm, ESnet (DOEGrids/PKI); Jason Hick, NERSC (Storage); Susan Hicks, ORNL (Networking); Scott Klasky, ORNL (OLCF Applications); Miron Livny, University of Wisconsin Madison (OSG); Barney Maccabe, ORNL (Computer Science); Colin Morgan, NOAA (Networking); Sue Morss, DOE/SC/ASCR (ASCR Program Office); Lucy Nowell, DOE/SC/ASCR (SciDAC); Don Petravick, FNAL (HEP Program Office); Jim Rogers, ORNL (OLCF); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Alex Sim, LBNL (Storage Middleware); Brian Tierney, ESnet (Networking); Susan Turnbull, DOE/SC/ASCR (Collaboratories/Middleware); Dean Williams, LLNL (ESG/Climate); Linda Winkler, ANL (Networking); Frank Wuerthwein, UC San Diego (OSG)

“NP (Nuclear Physics) Network Requirements Workshop, May 2008 - Final Report”, ESnet Network Requirements Workshop, May 6, 2008, LBNL-1289E

Nuclear Physics Program Office, DOE Office of Science; Energy Sciences Network; Bethesda, MD. LBNL-1289E.

Participants and Contributors: Rich Carlson, Internet2 (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Michael Ernst, BNL (RHIC); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Andy Kowalski, JLAB (Networking); Jerome Lauret, BNL (STAR at RHIC); Charles Maguire, Vanderbilt (LHC CMS Heavy Ion); Douglas Olson, LBNL (STAR at RHIC and ALICE at LHC); Martin Purschke, BNL (PHENIX at RHIC); Gulshan Rai, DOE/SC (NP Program Office); Brian Tierney, ESnet (Networking); Chip Watson, JLAB (CEBAF); Carla Vale, BNL (PHENIX at RHIC)

“FES (Fusion Energy Sciences) Network Requirements Workshop, March 2008 - Final Report”, ESnet Network Requirements Workshop, March 13, 2008, LBNL-644E

Fusion Energy Sciences Program Office, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-644E.

Participants and Contributors: Rich Carlson, Internet2 (Networking); Tom Casper, LLNL (Fusion – LLNL); Dan Ciarlette, ORNL (ITER); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Bill Dorland, University of Maryland (Fusion – Computation); Martin Greenwald, MIT (Fusion – Alcator C-Mod); Paul Henderson, PPPL (Fusion – PPPL Networking, PPPL); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); Ihor Holod, UC Irvine (Fusion – Computation, SciDAC); William Johnston, ESnet (Networking); Scott Klasky, ORNL (Fusion – Computation, SciDAC); John Mandrekas, DOE/SC (FES Program Office); Doug McCune, PPPL (Fusion – TRANSP user community, PPPL); Thomas NDousse, DOE/SC/ASCR (ASCR Program Office); Ravi Samtaney, PPPL (Fusion – Computation, SciDAC); David Schissel, General Atomics (Fusion – DIII-D, Collaboratories); Yukiko Sekine, DOE/SC/ASCR (ASCR Program Office); Sveta Shasharina, Tech-X Corporation (Fusion – Computation); Brian Tierney, LBNL (Networking)

“BER (Biological and Environmental Research) Network Requirements Workshop, July 2007 - Final Report”, ESnet Network Requirements Workshop, July 26, 2007,

Biological and Environmental Research Program Office, DOE Office of Science; Energy Sciences Network; Bethesda, MD – July 26 and 27, 2007. LBNL/PUB-988.

Participants and Contributors: Dave Bader, LLNL (Climate); Raymond Bair, ANL (Comp Bio); Anjuli Bamzai, DOE/SC BER; Paul Bayer, DOE/SC BER; David Bernholdt, ORNL (Earth System Grid); Lawrence Buja, NCAR (Climate); Alice Cialella, BNL (ARM Data); Eli Dart, ESnet (Networking); Eric Davis, LLNL (Climate); Bert DeJong, PNNL (EMSL); Dick Eagan, ANL (ARM); Yakov Golder, JGI (Comp Bio); Dave Goodwin, DOE/SC ASCR; Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Phil Jones, LANL (Climate); Raymond McCord, ORNL (ARM); Steve Meacham, NSF; George Michaels, PNNL (Comp Bio); Kevin Regimbal, PNNL (EMSL); Mike Sayre, NIH; Harris Shapiro, LBNL (JGI); Ellen Stechel, ASCAC; Brian Tierney, LBNL (Networking); Lee Tsengdar, NASA (Geosciences); Mike Wehner, LBNL (Climate); Trey White, ORNL (Climate)

“BES (Basic Energy Sciences) Network Requirements Workshop, June 2007 - Final Report”, ESnet Network Requirements Workshop, June 4, 2007, LBNL/PUB-981

Basic Energy Sciences Program Office, DOE Office of Science; Energy Sciences Network; Washington, DC – June 4 and 5, 2007. LBNL/PUB-981.

Participants and Contributors: Dohn Arms, ANL (Advanced Photon Source); Anjuli Bamzai, DOE/SC/BER (BER Program Office); Alan Biocca, LBNL (Advanced Light Source); Jackie Chen, SNL (Combustion Research Facility); Eli Dart, ESnet (Networking); Bert DeJong, PNNL (Chemistry); Paul Domagala, ANL (Computing and Information Systems); Yiping Feng, SLAC (LCLS/LUSI); David Goodwin, DOE/SC/ASCR (ASCR Program Office); Bruce Harmon, Ames Lab (Materials Science); Robert Harrison, UT/ORNL (Chemistry); Richard Hilderbrandt, DOE/SC/BES (BES Program Office); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Roger Klaffky, DOE/SC/BES (BES Program Office); Michael McGuigan, BNL (Center for Functional Nanomaterials); Stephen Miller, ORNL (Spallation Neutron Source); Richard Mount, SLAC (Linac Coherent Light Source); Jeff Neaton, LBNL (Molecular Foundry); Larry Rahn, SNL/BES (Combustion); Thomas Schulthess, ORNL (CNMS); Ken Sidorowicz, ANL (Advanced Photon Source); Ellen Stechel, SNL (ASCAC); Brian Tierney, LBNL (Networking); Linda Winkler, ANL (Networking); Zhijian Yin, BNL (National Synchrotron Light Source)

Eli Dart, editor, “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006”, ESnet Networking Requirements Workshop, February 21, 2006,

Update to the 2002 Office of Science Networking Requirements Workshop Report, February 21, 2006. LBNL report LBNL-61832.

Contributors: Paul Adams, LBNL (Advanced Light Source); Shane Canon, ORNL (NLCF); Steven Carter, ORNL (NLCF); Brent Draney, LBNL (NERSC); Martin Greenwald, MIT (Magnetic Fusion Energy); Jason Hodges, ORNL (Spallation Neutron Source); Jerome Lauret, BNL (Nuclear Physics); George Michaels, PNNL (Bioinformatics); Larry Rahn, SNL (Chemistry); David Schissel, GA (Magnetic Fusion Energy); Gary Strand, NCAR (Climate Science); Howard Walter, LBNL (NERSC); Michael Wehner, LBNL (Climate Science); Dean Williams, LLNL (Climate Science).