Publications, Presentations, and More

Journal Article

2024

Sion Kim, Ezra Kissel, Karel Matous, “Adaptive and Parallel Multiscale Framework for Modeling Cohesive Failure in Engineering Scale Systems”, Computer Methods in Applied Mechanics and Engineering (CMAME), September 1, 2024, 429,

Jason Zurawski, Eli Dart, Michael Halfmoon, Carol Hawk, Josh King, John Mandrekas, Ken Miller, Andrew Wiedlea, “Fusion Energy Sciences Network Requirements Review: Update”, Report, July 26, 2024, 36, LBNL-2001603

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

ESnet interconnects DOE national laboratories, user facilities, and major experiments so that scientists can use remote instruments and computing resources as well as share data with collaborators, transfer large datasets, and access distributed data repositories. ESnet is specifically built to provide a range of network services tailored to meet the unique requirements of the DOE’s data-intensive science.

In May 2023, the Energy Sciences Network (ESnet) and the Fusion Energy Sciences program (FES) of the DOE SC organized an interim ESnet requirements review of FES-supported activities to follow up on the work started during the 2021 FES Network Requirements Review. Preparation for these events included checking back with the key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare updates to its previously submitted case study documents, so that ESnet could update the understanding of any changes to the current, near-term, and long-term status, expectations, and processes that will support the science activities of the program.

2023

Garhan Attebury, Marian Babik, Dale Carder, Tim Chown, Andrew Hanushevsky, Bruno Hoeft, Andrew Lake, Michael Lambert, James Letts, Shawn McKee, Karl Newell, Tristan Sullivan, “Identifying and Understanding Scientific Network Flows”, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023), May 2023,

The High-Energy Physics (HEP) and Worldwide LHC Computing Grid (WLCG) communities have faced significant challenges in understanding their global network flows across the world’s research and education (R&E) networks. This article describes the status of the work carried out to tackle this challenge by the Research Networking Technical Working Group (RNTWG) and the Scientific Network Tags (Scitags) initiative, including the evolving framework and tools, as well as our plans to improve network visibility before the next WLCG Network Data Challenge in early 2024. The Scitags initiative is a long-term effort to improve the visibility and management of network traffic for data-intensive sciences. The efforts of the RNTWG and Scitags initiatives have created a set of tools, standards, and proof-of-concept demonstrators that show the feasibility of identifying the owner (community) and purpose (activity) of network traffic anywhere in the network.

W Bhimji, D Carder, E Dart, J Duarte, I Fisk, R Gardner, C Guok, B Jayatilaka, T Lehman, M Lin, C Maltzahn, S McKee, MS Neubauer, O Rind, O Shadura, NV Tran, P van Gemmeren, G Watts, BA Weaver, F Würthwein, “Snowmass 2021 Computational Frontier CompF4 Topical Group Report: Storage and Processing Resource Access”, Computing and Software for Big Science, April 2023, 7,

The Snowmass 2021 CompF4 topical group’s scope is facilities R&D, where we consider “facilities” as the hardware and software infrastructure inside the data centers plus the networking between data centers, irrespective of who owns them and what policies are applied for using them. In other words, it includes commercial clouds, federally funded High Performance Computing (HPC) systems for all of science, and systems funded explicitly for a given experimental or theoretical program. However, we explicitly consider any data centers that are integrated into the data acquisition or trigger systems of the experiments to be out of scope here. Those systems tend to have requirements that are quite distinct from the data center functionality required for “offline” processing and storage.

Se-young Yu, Qingyang Zeng, Jim Chen, Yan Chen, Joe Mambretti, “AIDTN: Towards a Real-Time AI Optimized DTN System With NVMeoF”, IEEE Transactions on Parallel and Distributed Systems, 2023,

2022

Yatish Kumar, Stacey Sheldon, Dale Carder, “Transport Layer Networking”, arXiv preprint, April 2022,

In this paper we focus on the invention of new network forwarding behaviors between Layer 4 and Layer 7 of the OSI network model. Our design goal is to propose no changes to L3, the IP network layer, thus maintaining 100% compatibility with the existing Internet. Small changes are made to L4, the transport layer, and a new design for a session layer (L5) is proposed. This new capability is intended to have minimal or no impact on the application layer, except for exposing the ability for L7 to select this new mode of data transfer or not. The invention of new networking technologies is frequently done in an academic setting; however, the design needs to be constrained by practical considerations for cost, operational feasibility, robustness, and scale. Our goal is to improve the production data infrastructure for HEP 24/7 on a global scale.

2021

Mariam Kiran, Scott Campbell, Fatema Bannat Wala, Nick Buraglio, Inder Monga, “Machine learning-based analysis of COVID-19 pandemic impact on US research networks”, ACM SIGCOMM Computer Communication Review, December 3, 2021,

This study explores how fallout from the changing public health policy around COVID-19 has changed how researchers access and process their science experiments. Using a combination of techniques from statistical analysis and machine learning, we conduct a retrospective analysis of historical network data for a period around the stay-at-home orders that took place in March 2020. Our analysis takes data from the entire ESnet infrastructure to explore DOE high-performance computing (HPC) resources at OLCF, ALCF, and NERSC, as well as user sites such as PNNL and JLAB. We look at detecting and quantifying changes in site activity using a combination of t-Distributed Stochastic Neighbor Embedding (t-SNE) and decision tree analysis. Our findings bring insights into the working patterns and impact on data volume movements, particularly during late-night hours and weekends.
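The t-SNE plus decision-tree combination described above can be illustrated with a minimal, hypothetical sketch; the synthetic per-site traffic features and labels below are invented stand-ins and are not the paper's actual ESnet dataset.

```python
# Minimal sketch (hypothetical data): embed per-site daily traffic features with t-SNE,
# then use a decision tree to see which features separate pre/post stay-at-home periods.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# 200 site-days x 4 invented features (e.g., bytes in, bytes out, flow count, night-hour share)
pre = rng.normal(loc=[5.0, 5.0, 3.0, 0.2], scale=0.5, size=(100, 4))
post = rng.normal(loc=[4.0, 6.0, 2.5, 0.4], scale=0.5, size=(100, 4))
X = np.vstack([pre, post])
y = np.array([0] * 100 + [1] * 100)  # 0 = before March 2020, 1 = after

# 2-D embedding for visual inspection of clusters in site activity
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Decision tree highlights which features drive the separation
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("embedding shape:", embedding.shape)
print("feature importances:", clf.feature_importances_)
```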

Se-young Yu, Jim Chen, Fei Yeh, Joe Mambretti, Xiao Wang, Anna Giannakou, Eric Pouyoul, Marc Lyonnais, “Analysis of NVMe over fabrics with SCinet DTN-as-a-Service”, Cluster Computing, 2021, 1--13,

2020

Inder Monga, Chin Guok, John MacAuley, Alex Sim, Harvey Newman, Justas Balcas, Phil DeMar, Linda Winkler, Tom Lehman, Xi Yang, “Software-Defined Network for End-to-end Networked Science at the Exascale”, Future Generation Computer Systems, April 13, 2020,

Abstract

Domain science applications and workflow processes are currently forced to view the network as an opaque infrastructure into which they inject data and hope that it emerges at the destination with an acceptable Quality of Experience. There is little ability for applications to interact with the network to exchange information, negotiate performance parameters, discover expected performance metrics, or receive status/troubleshooting information in real time. The work presented here is motivated by a vision for a new smart network and smart application ecosystem that will provide a more deterministic and interactive environment for domain science workflows. The Software-Defined Network for End-to-end Networked Science at Exascale (SENSE) system includes a model-based architecture, implementation, and deployment which enables automated end-to-end network service instantiation across administrative domains. An intent-based interface allows applications to express their high-level service requirements, and an intelligent orchestrator and resource control systems allow for custom tailoring of scalability and real-time responsiveness based on individual application and infrastructure operator requirements. This allows the science applications to manage the network as a first-class schedulable resource as is the current practice for instruments, compute, and storage systems. Deployment and experiments on production networks and testbeds have validated SENSE functions and performance. Emulation-based testing verified the scalability needed to support research and education infrastructures. Key contributions of this work include an architecture definition, reference implementation, and deployment. This provides the basis for further innovation of smart network services to accelerate scientific discovery in the era of big data, cloud computing, machine learning and artificial intelligence.
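As a rough illustration of the intent-style interaction described in the abstract, the sketch below shows how an application might state a high-level service request; the field names and the `submit_intent` helper are hypothetical and are not the actual SENSE API.

```python
# Hypothetical sketch of an intent-based service request (not the real SENSE interface).
import json

def submit_intent(intent: dict) -> str:
    """Stand-in for an orchestrator call: validate the intent and echo it back."""
    required = {"source", "destination", "bandwidth_gbps", "start", "end"}
    missing = required - intent.keys()
    if missing:
        raise ValueError(f"intent missing fields: {missing}")
    return json.dumps(intent, indent=2)

# The application states *what* it needs; the orchestrator decides *how* to build it.
request = {
    "source": "site-A:dtn01",            # hypothetical endpoint names
    "destination": "site-B:dtn07",
    "bandwidth_gbps": 40,
    "start": "2020-04-13T00:00:00Z",
    "end": "2020-04-13T06:00:00Z",
    "priority": "deadline-bound-transfer",
}
print(submit_intent(request))
```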

Qinwen Hu, Se-Young Yu, Muhammad Rizwan Asghar, “Analysing performance issues of open-source intrusion detection systems in high-speed networks”, Journal of Information Security and Applications, 2020, 51:102426,

2019

Marco Ruffini, Kasandra Pillay, Chongjin Xie, Lei Shi, Dale Smith, Inder Monga, Xinsheng Wang, and Jun Shan Wey, “Connected OFCity Challenge: Addressing the Digital Divide in the Developing World”, Journal of Optical Communications and Networking, June 20, 2019, 11:354-361,

Jonathan B. Ajo-Franklin, Shan Dou, Nathaniel J. Lindsey, Inder Monga, Chris Tracy, Michelle Robertson, Veronica Rodriguez Tribaldos, Craig Ulrich, Barry Freifeld, Thomas Daley and Xiaoye Li, “Distributed Acoustic Sensing Using Dark Fiber for Near-Surface Characterization and Broadband Seismic Event Detection”, Nature, February 4, 2019,

Mariam Kiran and Anshuman Chhabra, “Understanding flows in high-speed scientific networks: A Netflow data study”, Future Generation Computer Systems, February 1, 2019, 94:72-79,

2018

F Alali, N Hanford, E Pouyoul, R Kettimuthu, M Kiran, B Mack-Crane, “Calibers: A bandwidth calendaring paradigm for science workflows”, Future Generation Computer Systems, December 1, 2018, 89:736-745,

M Gribaudo, M Iacono, Mariam Kiran, “A performance modeling framework for lambda architecture based applications”, Future Generation Computer Systems, November 9, 2018, 86:1032-1041,

Syed Asif Raza, Wenji Wu, Qiming Lu, Liang Zhang, Sajith Sasidharan, Phil DeMar, Chin Guok, John Macauley, Eric Pouyoul, Jin Kim, Seo-Young Noh, “AmoebaNet: An SDN-enabled network service for big data science”, Journal of Network and Computer Applications, Elsevier, October 1, 2018, 119:70-82,

RK Shyamasundar, Prabhat Prabhat, Vipin Chaudhary, Ashwin Gumaste, Inder Monga, Vishwas Patil, Ankur Narang, “Computing for Science, Engineering and Society: Challenges, Requirement, and Strategic Roadmap”, Proceedings of the Indian National Science Academy, June 15, 2018,

Inder Monga, Prabhat, “Big-Data Science: Infrastructure Impact”, Proceedings of the Indian National Science Academy, June 15, 2018,

The nature of science is changing dramatically, from a single researcher at a university or national laboratory working with graduate students to distributed multi-researcher consortia, spanning universities and research labs, tackling large scientific problems. In addition, experimentalists and theorists are collaborating with each other by designing experiments to prove the proposed theories. The ‘Big Data’ being produced by these large experiments has to be verified against simulations run on High Performance Computing (HPC) resources.

The trends above are pointing towards

  1. Geographically dispersed experiments (and associated communities) that require data being moved across multiple sites. Appropriate mechanisms and tools need to be employed to move, store and archive datasets from such experiments.

  2. Convergence of simulation (requiring High Performance Computing) and Big Data Analytics (requiring advanced on-site data management techniques) into a small number of High Performance Computing centers. Such centers are key for consolidating software and hardware infrastructure efforts, and achieving broad impact across numerous scientific domains.

The trends indicate that for modern science and scientific discovery, infrastructure support for handling both large scientific data as well as high-performance computing is extremely important. In addition, given the distributed nature of research and big-team science, it is important to build infrastructure, both hardware and software, that enables sharing across institutions, researchers, students, industry and academia. This is the only way that a nation can maximize the research capabilities of its citizens while maximizing the use of its investments in computer, storage, network and experimental infrastructure.

This chapter introduces infrastructure requirements of High-Performance Computing and Networking with examples drawn from NERSC and ESnet, two large Department of Energy facilities at Lawrence Berkeley National Laboratory, CA, USA, that exemplify some of the qualities needed for future Research & Education infrastructure.

Ilya Baldin, Tilman Wolf, Inder Monga, Tom Lehman, “The Future of CISE Distributed Research Infrastructure”, ACM SIGCOMM Computer Communication Review, May 1, 2018,

Shared research infrastructure that is globally distributed and widely accessible has been a hallmark of the networking community. We present a vision for a future mid-scale distributed research infrastructure aimed at enabling new types of discoveries. The “lessons learned” from constructing and operating the Global Environment for Network Innovations (GENI) infrastructure are the basis for our attempt to project future concepts and solutions. Our aim is to engage the community to contribute new ideas and to inform funding agencies about future research directions.

Ralph Koning, Nick Buraglio, Cees de Laat, Paola Grosso, “CoreFlow: Enriching Bro security events using network traffic monitoring data”, Future Generation Comp. Syst., February 1, 2018, 79,

Attacks against network infrastructures can be detected by Intrusion Detection Systems (IDS). Still, the reaction to these events is often limited by the lack of the larger context in which they occurred. In this paper we present CoreFlow, a framework for the correlation and enrichment of IDS data with network flow information. CoreFlow ingests data from the Bro IDS and augments this with flow data from the devices in the network. By doing this, network providers are able to reconstruct more precisely the route followed by the malicious flows. This enables them to devise tailored countermeasures, e.g. blocking close to the source of the attack. We tested the initial CoreFlow prototype in the ESnet network, using inputs from 3 Bro systems and more than 50 routers.
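A minimal sketch of the correlation idea (an IDS alert joined against flow records by 5-tuple within a time window); the record fields and values are invented for illustration and do not reflect CoreFlow's actual schema.

```python
# Minimal sketch: enrich an IDS alert with matching flow records exported by routers.
# Field names and data are hypothetical, not CoreFlow's real schema.
from datetime import datetime, timedelta

alert = {"src": "198.51.100.7", "dst": "203.0.113.9", "dport": 443,
         "ts": datetime(2018, 1, 15, 12, 0, 5)}

flows = [  # flow records collected from network devices (invented)
    {"router": "r1", "src": "198.51.100.7", "dst": "203.0.113.9", "dport": 443,
     "ts": datetime(2018, 1, 15, 12, 0, 3), "bytes": 9_000_000},
    {"router": "r2", "src": "198.51.100.7", "dst": "203.0.113.9", "dport": 443,
     "ts": datetime(2018, 1, 15, 12, 0, 4), "bytes": 9_000_000},
    {"router": "r1", "src": "192.0.2.1", "dst": "203.0.113.9", "dport": 80,
     "ts": datetime(2018, 1, 15, 12, 0, 4), "bytes": 1_000},
]

def correlate(alert, flows, window=timedelta(seconds=30)):
    """Return flow records that match the alert's 5-tuple within a time window."""
    key = (alert["src"], alert["dst"], alert["dport"])
    return [f for f in flows
            if (f["src"], f["dst"], f["dport"]) == key
            and abs(f["ts"] - alert["ts"]) <= window]

matched = correlate(alert, flows)
print("routers on suspected path:", sorted({f["router"] for f in matched}))
```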

Liang Zhang, Wenji Wu, Phil DeMar, “mdtmFTP and its evaluation on ESNET SDN testbed”, Future Generation Computer Systems, Elsevier, February 1, 2018, 79:199-204,

Jose Leal D Neto, Se-Young Yu, Daniel F Macedo, Jose Marcos S Nogueira, Rami Langar, Stefano Secci, “ULOOF: A user level online offloading framework for mobile edge computing”, IEEE Transactions on Mobile Computing, 2018, 17:2660--2674,

2017

M. Gribaudo, M. Iacono, M. Kiran, “A performance modeling framework for lambda architecture based applications”, Future Generation Computer Systems, August 30, 2017,

M Kiran, E Pouyoul, A Mercian, B Tierney, C Guok, I Monga, “Enabling intent to configure scientific networks for high performance demands”, Future Generation Computer Systems, August 2, 2017,

Kim Roberts, Qunbi Zhuge, Inder Monga, Sebastien Gareau, and Charles Laperle, “Beyond 100 Gb/s: Capacity, Flexibility, and Network Optimization”, Journal of Optical Communication Network, April 1, 2017, Volume 9,

In this paper, we discuss building blocks that enable the exploitation of optical capacities beyond 100 Gb/s. Optical networks will benefit from more flexibility and agility in their network elements, especially from coherent transceivers. To achieve capacities of 400 Gb/s and more, coherent transceivers will operate at higher symbol rates. This will be made possible with higher bandwidth components using new electro-optic technologies implemented with indium phosphide and silicon photonics. Digital signal processing will benefit from new algorithms. Multi-dimensional modulation, of which some formats already exist in current flexible coherent transceivers, will provide improved tolerance to noise and fiber nonlinearities. Constellation shaping will further improve these tolerances while allowing a finer granularity in the selection of capacity. Frequency-division multiplexing will also provide improved tolerance to the nonlinear characteristics of fibers. Algorithms with reduced computational complexity will allow the implementation, at speed, of direct pre-compensation of nonlinear propagation effects. Advancement in forward error correction will shrink the performance gap with Shannon's limit. At the network control and management level, new tools are being developed to achieve a more efficient utilization of networks. This will also allow for network virtualization, orchestration, and management. Finally, FlexEthernet and FlexOTN will be put in place to allow network operators to optimize capacity in their optical transport networks without manual changes to the client hardware.

Ashwin Gumaste, Tamal Das, Kandarp Khandwala, and Inder Monga, “Network Hardware Virtualization for Application Provisioning in Core Networks”, IEEE Communications Magazine, February 1, 2017,

Service providers and vendors are moving toward a network virtualized core, whereby multiple applications would be treated on their own merit in programmable hardware. Such a network would have the advantage of being customized for user requirements and allow provisioning of next generation services that are built specifically to meet user needs. In this article, we articulate the impact of network virtualization on networks that provide customized services and how a provider’s business can grow with network virtualization. We outline a decision map that allows mapping of applications with technology that is supported in network-virtualization-oriented equipment. Analogies to the world of virtual machines and generic virtualization show that hardware supporting network virtualization will facilitate new customer needs while optimizing the provider network from the cost and performance perspectives. A key conclusion of the article is that growth would yield sizable revenue when providers plan ahead in terms of supporting network-virtualization-oriented technology in their networks. To be precise, providers have to incorporate into their growth plans network elements capable of new service deployments while protecting network neutrality. A simulation study validates our NV-induced model.

L. Zuo, M. Zhu, C. Wu and J. Zurawski, “Fault-tolerant Bandwidth Reservation Strategies for Data Transfers in High-performance Networks”, Computer Networks, February 1, 2017, 113:1-16,

Tatyana Eftonova, Mariam Kiran, Mike Stannett, “Long-term Macroeconomic Dynamics of Competition in the Russian Economy using Agent-based Modelling”, International Journal of System Dynamics Applications (IJSDA) 6 (1), 1-20, 2017, January 1, 2017,

2016

Mariam Kiran, Anthony Simons, “Testing Software Services in Cloud Ecosystems”, International Journal of Cloud Applications and Computing (IJCAC) 6 (1), 42-58 2016, July 1, 2016,

Sean Peisert, William Barnett, Eli Dart, James Cuff, Robert L Grossman, Edward Balas, Ari Berman, Anurag Shankar, Brian Tierney, “The Medical Science DMZ”, Journal of the American Medical Informatics Association, May 2, 2016,

Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal, Mehmet Balman, Eric Pouyoul, Brian Tierney, “Improving network performance on multicore systems: Impact of core affinities on high throughput flows”, Future Generation Computer Systems, Vol 56., March 1, 2016,

2015

Zhenzhen Yan, Chris Tracy, Malathi Veeraraghavan, Tian Jin, Zhengyang Liu, “A Network Management System for Handling Scientific Data Flows”, Journal of Network and Systems Management, October 11, 2015,

Ewa Deelman, Christopher Carothers, Anirban Mandal, Brian Tierney, Jeffrey S Vetter, Ilya Baldin, Claris Castillo, Gideon Juve, Dariusz Król, Vickie Lynch, Ben Mayer, Jeremy Meredith, Thomas Proffen, Paul Ruth, Rafael Ferreira da Silva, “PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows”, International Journal of High Performance Computing Applications, July 14, 2015,

M Kiran, G Katsaros, J Guitart, J L Prieto, “Methodology for Information Management and Data Assessment in Cloud Environments”, International Journal of Grid and High Performance Computing (IJGHPC), 6(4), 46-71, June 1, 2015,

M Kiran, “Modelling Cities as a Collection of TeraSystems–Computational Challenges in Multi-Agent Approach”, Procedia Computer Science 52, 974-979, 2015, June 1, 2015,

Peter Hinrich, P Grosso, Inder Monga, “Collaborative Research Using eScience Infrastructure and High Speed Networks”, Future Generation Computer Systems, April 2, 2015,

2014

K. Djemame, B Barnitzke, M Corrales, M Kiran, M Jiang, D Armstrong, N Forgo, I Nwankwo, “Legal issues in clouds: towards a risk inventory”, Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 371, 1983, The Royal Society, June 1, 2014,

2013

Z. Yan, M. Veeraraghavan, C. Tracy, C. Guok, “On How to Provision Virtual Circuits for Network-Redirected Large-Sized, High-Rate Flows”, International Journal on Advances in Internet Technology, vol. 6, no. 3 & 4, 2013, November 1, 2013,

M Holcombe, S Chin, S Cincotti, M Raberto, A Teglio, S Coakley, C Deissenberg, S vander Hoog, C Greenough, H Dawid, M Neugart, S Gemkow, P Harting, M Kiran, D Worth, “Large-scale modelling of economic systems”, Complex Systems 22 (2) 8 2013, June 1, 2013,

2012

M.Holcombe, S.Adra, M.Bicak, S.Chin, S.Coakley, A.I.Graham, J.Green, C.Greenough, D.Jackson, M.Kiran, S.MacNeil, A.Maleki-Dizaji, P.McMinn, M.Pogson, R.Poole, E.Qwarnstrom, F.Ratnieks, M.D.Rolfe, R.Smallwood, T.Sun and D.Worth, “Modelling complex biological systems using an agent-based approach,”, Integrative Biology, 2012, June 1, 2012,

M.Kiran, M.Bicak, S.Maleki-Dizaji, M.Holcombe, “FLAME: A Platform for High Performance Computing of Complex Systems, Applied for Three Case Studies”, Acta Physica Polonica B, Proceedings Supplement, DOI:10.5506/APhysPolBSupp.4.201, PACS numbers: 07.05.Tp, vol 4, no 2, 2011 (Polish Journal), January 1, 2012,

2011

Wenji Wu, Phil DeMar, Matt Crawford, “A transport-friendly NIC for multicore/multiprocessor systems”, IEEE Transactions on Parallel and Distributed Systems, July 14, 2011, 23:607-615,

Neal Charbonneau, Vinod M. Vokkarane, Chin Guok, Inder Monga, “Advance Reservation Frameworks in Hybrid IP-WDM Networks”, IEEE Communications Magazine, May 9, 2011, 59, Issue:132-139,

Tom Lehman, Xi Yang, Nasir Ghani, Feng Gu, Chin Guok, Inder Monga, and Brian Tierney, “Multilayer Networks: An Architecture Framework”, IEEE Communications Magazine, May 9, 2011,

Inder Monga, Chin Guok, William E. Johnston, and Brian Tierney, “Hybrid Networks: Lessons Learned and Future Challenges Based on ESnet4 Experience”, IEEE Communications Magazine, May 1, 2011,

Wenji Wu, Phil DeMar, Matt Crawford, “Why can some advanced Ethernet NICs cause packet reordering?”, IEEE Communications Letter, February 1, 2011, 15:253-255,

Ezra Kissel, Martin Swany, Aaron Brown, “Phoebus: A system for high throughput data movement”, J. Parallel Distributed Comput., 2011, 71:266--279,

2009

Wenji Wu, Phil DeMar, Matt Crawford, “Sorting reordered packets with interrupt coalescing”, Computer Networks, Elsevier, October 12, 2009, 53:2646-2662,

Mariam Kiran, Simon Coakley, Neil Walkinshaw, Phil McMinn, Mike Holcombe, “Validation and discovery from computational biology models”, Biosystems, September 1, 2009,

Swany M., Brown A., Zurawski J., “A General Encoding Framework for Representing Network Measurement and Topology Data”, Concurrency and Computation: Practice and Experience, 2009, 21:1069--1086,

2008

Chin P. Guok, Jason R. Lee, Karlo Berket, “Improving The Bulk Data Transfer Experience”, International Journal of Internet Protocol Technology 2008 - Vol. 3, No.1 pp. 46 - 53, January 1, 2008,

J. Zurawski, D Wang, “Fault-tolerance schemes for clusterheads in clustered mesh networks”, International Journal of Parallel, Emergent and Distributed Systems, 2008, 23:271--287,

2007

William E Johnston, “ESnet: Advanced Networking for Science”, SciDAC Review, July 1, 2007,

William Johnston, “The Advanced Networks and Services Underpinning Modern, Large-Scale Science”, SciDAC Review Paper, May 1, 2007,

“Measurements On Hybrid Dedicated Bandwidth Connections”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

Wenji Wu, Matt Crawford, Mark Bowden, “The performance analysis of Linux networking – packet receiving”, Computer Communications, Elsevier, March 8, 2007, Volume 3:1044-1057,

Wenji Wu, Matt Crawford, “Potential performance bottleneck in Linux TCP”, International Journal of Communication Systems, Wiley, February 1, 2007, 20:1263-1283,

Conference Paper

2024

Lloyd Brown, Emily Marx, Dev Bali, Emmanuel Amaro, Debnil Sur, Ezra Kissel, Inder Monga, Ethan Katz-Bassett, Arvind Krishnamurthy, James McCauley, Tejas Narechania, Aurojit Panda, Scott Shenker, “An Architecture For Edge Networking Services”, ACM SIGCOMM '24: Proceedings of the ACM SIGCOMM 2024 Conference, August 4, 2024, 645-660,

Kate Robinson, Jason Zurawski, Tom Costello, “Designing, Constructing, and Operating an IPv6 Network at SC23”, Practice and Experience in Advanced Research Computing (PEARC ’24), New York, NY, USA, ACM, July 25, 2024, 8,

Fatema Bannat Wala, Steven Bohacek, “Zone-Hopping: Sensitive Information Leakage Prevention in DNSSEC-NSEC”, The 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN’24), June 24, 2024,

Xi Yang, Ezra Kissel, Abdelilah Essiari, Liang Zhang, Tom Lehman, Inder Monga, Paul Ruth, Komal Theraja, Ilya Baldin, “FabFed: Tool-Based Network Federation for Testbed of Testbeds - Paradigm and Practice”, IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), May 20, 2024,

Marian Babik, Martin Bly, Nick Buraglio, Tim Chown, Dimitrios Christidis, Jiri Chudoba, Phil DeMar, José Flix Molina, Costin Grigoras, Bruno Hoeft, Hiro Ito, David Kelsey, Edoardo Martelli, Shawn McKee, Carmen Misa Moreira, Raja Nandakumar, Kars Ohrenberg, Francesco Prelz, Duncan Rand, Andrea Sciabà, Tim Skirvin, “Overcoming obstacles to IPv6 on WLCG”, 26th International Conference on Computing in High Energy and Nuclear Physics, May 6, 2024,

The transition of the Worldwide Large Hadron Collider Computing Grid (WLCG) storage services to dual-stack IPv6/IPv4 is almost complete; all Tier-1 and 94% of Tier-2 storage are IPv6 enabled. While most data transfers now use IPv6, a significant number of IPv4 transfers still occur even when both endpoints support IPv6. This paper presents the ongoing efforts of the HEPiX IPv6 working group to steer WLCG toward IPv6-only services by investigating and fixing the obstacles to the use of IPv6 and identifying cases where IPv4 is used when IPv6 is available. Removing IPv4 use is essential for the long-term agreed goal of IPv6-only access to resources within WLCG, thus eliminating the complexity and security concerns associated with dual-stack services. We present our achievements and ongoing challenges as we navigate the final stages of the transition from IPv4 to IPv6 within WLCG.

2023

Marc Koerner, “Vendor Agnostic Network Service Orchestration with Stacked NSO Services”, Proceedings of the 2023 - 19th International Conference on Network and Service Management (CNSM), Niagara Falls, ON, Canada, IEEE, November 28, 2023, 4,

The Energy Sciences Network (ESnet) is the Department of Energy's internal wide area network provider, delivering connectivity for all US national laboratories, including some satellite sites in Europe. The ESnet network is a highly reliable and high bandwidth network, which transports vast amounts of data between laboratories and supercomputing facilities. Thus, ESnet supplies the scientific community with the connectivity required for all sorts of data analytics and simulations. One of the major goals of the ESnet6 deployment was to have an orchestrated and fully automated network configuration management system. Hence, ESnet is leveraging tools like the Cisco Network Service Orchestrator (NSO) to deploy router configuration in a centralized and service-oriented fashion. After building the first iteration of services, which significantly optimized router configuration and deployment time, ESnet decided to take advantage of the knowledge accumulated during the first implementation and started to revise the NSO service architecture. The first prototype using the revised architecture is currently being implemented in the scope of our management-router configuration service. This paper elaborates on the advantages of a stacked service architecture based on minimum functional configurations in comparison to a design based on plain network service models.

Inder Monga, Erhan Saglamyurek, Ezra Kissel, Hartmut Haffner, Wenji Wu, “QUANT-NET: A testbed for quantum networking research over deployed fiber”, SIGCOMM QuNet'23, ACM, September 10, 2023, QuNet'23:31-37,

Zhongfen Deng, Kesheng Wu, Alex Sim, Inder Monga, Chin Guok, et al, “Analyzing Transatlantic Network Traffic over Scientific Data Caches”, 6th ACM International Workshop on ​System and Network Telemetry and Analysis, July 31, 2023,

Large scientific collaborations often share huge volumes of data around the world. Consequently a significant amount of network bandwidth is needed for data replication and data access. Users in the same region may possibly share resources as well as data, especially when they are working on related topics with similar datasets. In this work, we study the network traffic patterns and resource utilization for scientific data caches connecting European networks to the US. We explore the efficiency of resource utilization, especially for network traffic which consists mostly of transatlantic data transfers, and the potential for having more caching node deployments. Our study shows that these data caches reduced network traffic volume by 97% during the study period. This demonstrates that such caching nodes are effective in reducing wide-area network traffic.

Alex Sim, Ezra Kissel, Damian Hazen, Chin Guok, “Experiences in deploying in-network data caches”, 26th International Conference on Computing in High Energy & Nuclear Physics, May 11, 2023,

Caitlin Sim, Kesheng Wu, Alex Sim, Inder Monga, Chin Guok, et al, “Predicting Resource Utilization Trends with Southern California Petabyte Scale Cache”, 26th International Conference on Computing in High Energy & Nuclear Physics, May 8, 2023,

A large community of high-energy physicists shares data all around the world, making it necessary to ship a large number of files over wide-area networks. Regional disk caches such as the Southern California Petabyte Scale Cache have been deployed to reduce data access latency. We observe that about 94% of the requested data volume was served from this cache, without remote transfers, between Sep. 2022 and July 2023. In this paper, we show the predictability of the resource utilization by exploring the trends of recent cache usage. The time series based prediction is made with a machine learning approach, and the prediction errors are small relative to the variation in the input data. This work helps in understanding the characteristics of the resource utilization and in planning for additional deployments of caches in the future.
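As a rough illustration of the time-series prediction idea, the sketch below fits a simple linear trend to hypothetical daily cache-usage figures; the paper's actual machine-learning model and measurements are not reproduced here.

```python
# Sketch: forecast next-day cache usage from a short daily time series (invented numbers).
import numpy as np

daily_tb_served = np.array([42.0, 45.1, 44.3, 47.8, 49.2, 50.5, 52.0])  # hypothetical TB/day

days = np.arange(len(daily_tb_served))
slope, intercept = np.polyfit(days, daily_tb_served, deg=1)  # simple linear trend

next_day = len(daily_tb_served)
forecast = slope * next_day + intercept
residuals = daily_tb_served - (slope * days + intercept)
print(f"forecast for day {next_day}: {forecast:.1f} TB "
      f"(trend residual stddev {residuals.std():.2f} TB)")
```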

Marian Babik, Martin Bly, Nick Buraglio, Tim Chown, Dimitrios Christidis, Jiri Chudoba, Phil DeMar, José Flix Molina, Costin Grigoras, Bruno Hoeft, Hiro Ito, David Kelsey, Edoardo Martelli, Shawn McKee, Carmen Misa Moreira, Raja Nandakumar, Kars Ohrenberg, Francesco Prelz, Duncan Rand, Andrea Sciabà, Tim Skirvin, “Overcoming obstacles to IPv6 on WLCG”, CHEP2023, May 8, 2023,

The transition of the Worldwide Large Hadron Collider Computing Grid (WLCG) storage services to dual-stack IPv6/IPv4 is almost complete; all Tier-1 and 94% of Tier-2 storage are IPv6 enabled. While most data transfers now use IPv6, a significant number of IPv4 transfers still occur even when both endpoints support IPv6. This paper presents the ongoing efforts of the HEPiX IPv6 working group to steer WLCG toward IPv6-only services by investigating and fixing the obstacles to the use of IPv6 and identifying cases where IPv4 is used when IPv6 is available. Removing IPv4 use is essential for the long-term agreed goal of IPv6-only access to resources within WLCG, thus eliminating the complexity and security concerns associated with dual-stack services. We present our achievements and ongoing challenges as we navigate the final stages of the transition from IPv4 to IPv6 within WLCG.

Scott Campbell, Fatema Bannat Wala, Mariam Kiran, “Insights into DoH: Traffic Classification for DNS over HTTPS in an Encrypted Network”, NDSS Conference, 2023, February 27, 2023,

In the past few years there has been a growing desire to provide more built-in functionality to protect user communications from eavesdropping. An example of this is DNS over HTTPS (DoH), which can be used to protect user privacy and confidentiality and to guard against spoofing attacks. Since its first popularity in 2018 as used in browsers, there has been much further study to test the effectiveness of DoH in protection schemes and whether it is possible to detect the protocol over the web. Detecting DoH traffic among normal web traffic is also a major challenge for network admins to allow filtering of malicious traffic flows. In this paper, we investigate machine learning classification to study the detection of DoH traffic and further analyze the key feature characteristics in the protocol behavior to help researchers build credibility in DoH protocol detection. Our study reveals key features and statistical relationships among DoH test runs on the Alexa-recommended 100 most-used websites using three different DoH servers, showing up to 98% test accuracy in our built classifier.
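A hedged sketch of flow-feature classification in the spirit of the study; the four features, the random-forest model, and the synthetic data below are illustrative stand-ins rather than the paper's exact feature set or classifier.

```python
# Sketch: classify flows as DoH vs. ordinary HTTPS from simple flow features.
# Features and synthetic data are hypothetical; the paper's feature set differs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# invented features: mean packet size, packet-size variance, mean inter-arrival time, flow duration
doh = np.column_stack([rng.normal(120, 15, n), rng.normal(200, 40, n),
                       rng.normal(0.05, 0.01, n), rng.normal(30, 5, n)])
https = np.column_stack([rng.normal(900, 200, n), rng.normal(5000, 800, n),
                         rng.normal(0.02, 0.01, n), rng.normal(10, 4, n)])
X = np.vstack([doh, https])
y = np.array([1] * n + [0] * n)  # 1 = DoH, 0 = other HTTPS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```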

Caitlin Sim, Kesheng Wu, Alex Sim, Inder Monga, Chin Guok, “Effectiveness and predictability of in-network storage cache for Scientific Workflows”, IEEE International Conference on Computing, Networking and Communication, February 20, 2023,

Large scientific collaborations often have multiple scientists accessing the same set of files while doing different analyses, which creates repeated accesses to the large amounts of shared data located far away. These data accesses have long latency due to distance and occupy the limited bandwidth available over the wide-area network. To reduce the wide-area network traffic and the data access latency, regional data storage caches have been installed as a new networking service. To study the effectiveness of such a cache system in scientific applications, we examine the Southern California Petabyte Scale Cache for a high-energy physics experiment. By examining about 3TB of operational logs, we show that this cache removed 67.6% of file requests from the wide-area network and reduced the traffic volume on the wide-area network by 12.3TB (or 35.4%) on an average day. The reduction in the traffic volume (35.4%) is less than the reduction in file counts (67.6%) because the larger files are less likely to be reused. Due to this difference in data access patterns, the cache system has implemented a policy to avoid evicting smaller files when processing larger files. We also build a machine learning model to study the predictability of the cache behavior. Tests show that this model is able to accurately predict the cache accesses, cache misses, and network throughput, making the model useful for future studies on resource provisioning and planning.
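The effectiveness numbers above come from log analysis; the sketch below shows, under an invented log format, how a hit ratio by request count and by bytes might be computed, and why the two can differ when large files are reused less often.

```python
# Sketch: compute cache effectiveness from access records (invented log format).
records = [  # (file_size_gb, served_from_cache)
    (0.5, True), (0.5, True), (2.0, True), (150.0, False),
    (0.5, True), (75.0, False), (2.0, True), (0.5, True),
]

hits = [r for r in records if r[1]]
hit_ratio_by_requests = len(hits) / len(records)
bytes_total = sum(size for size, _ in records)
bytes_from_cache = sum(size for size, hit in records if hit)

print(f"requests served from cache: {hit_ratio_by_requests:.1%}")
print(f"WAN traffic volume avoided:  {bytes_from_cache / bytes_total:.1%}")
# As in the paper, the byte-level saving is lower than the request-level saving
# when large files are reused less often than small ones.
```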

2022

Ezra Kissel, “Janus: Lightweight Container Orchestration for High-Performance Data Sharing”, Fifth International Workshop on Systems and Network Telemetry and Analytics (SNTA 2022), June 30, 2022,

Yu-Kuen Lai, Se-Young Yu, Iek-Seng Chan, Bo-Hsun Huang, Che-Hao Chang, Jim Hao Chen, Joe Mambretti, “Sketch-based entropy estimation: a tabular interpolation approach using P4”, Proceedings of the 5th International Workshop on P4 in Europe, 2022, 57--60,

2021

Wenji Wu, Joaquin Chung, Gregory Kanter, Nikolai Lauk, Raju Valivarthi, Russell Ceballos, Cristian Pena, Neil Sinclair, Jordan Thomas, Ely Eastman, Si Xie, Rajkumar Kettimuthu, Prem Kumar, Panagiotis Spentzouris, Maria Spiropulu, “Illinois express quantum network for distributing and controlling entanglement on metro-scale”, IEEE/ACM International Workshop on Quantum Computing Software (QCS), IEEE/ACM, December 22, 2021,

Joaquin Chung, Gregory Kanter, Nikolai Lauk, Raju Valivarthi, Wenji Wu, Russell R. Ceballos, Cristián Peña, Neil Sinclair, Jordan Thomas, Si Xie, Rajkumar Kettimuthu, Prem Kumar, Panagiotis Spentzouris, Maria Spiropulu, “Illinois Express Quantum Network (IEQNET): metropolitan-scale experimental quantum networking over deployed optical fiber”, SPIE, Quantum Information Science, Sensing, and Computation XIII, SPIE, April 21, 2021,

Brian Tierney, Eli Dart, Ezra Kissel, Eashan Adhikarla, “Exploring the BBRv2 Congestion Control Algorithm for use on Data Transfer Nodes”, IEEE Workshop on Innovating the Network for Data-Intensive Science, INDIS@SC 2021, St. Louis, MO, USA, November 15, 2021, IEEE, 2021, 23--33,

2020

Wenji Wu, Liang Zhang, Qiming Lu, Phil DeMar, Robert Illingworth, Joe Mambretti, Se-Young Yu, Jim Hao Chen, Inder Monga, Xi Yang, Tom Lehman, Chin Guok, John MacAuley, “ROBIN (RuciO/BIgData Express/SENSE) A Next-Generation High-Performance Data Service Platform”, 2020 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), IEEE/ACM, December 31, 2020,

Wenji Wu, Liang Zhang, Qiming Lu, Phil DeMar, Robert Illingworth, Joe Mambretti, Se-young Yu, Jim Hao Chen, Inder Monga, Xi Yang, others, “ROBIN (RuciO/BIgData Express/SENSE) A Next-Generation High-Performance Data Service Platform”, 2020 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), 2020, 33--44,

2019

Verónica Rodríguez Tribaldos, Shan Dou, Nate Lindsey, Inder Monga, Chris Tracy, Jonathan Blair Ajo-Franklin, “Monitoring Aquifers Using Relative Seismic Velocity Changes Recorded with Fiber-optic DAS”, AGU Meeting, December 10, 2019,

Dipak Ghosal, Sambit Shukla, Alex Sim, Aditya V. Thakur, and Kesheng, Wu, “A Reinforcement Learning Based Network Scheduler For Deadline-Driven Data Transfers”, 2019 IEEE Global Communications Conference, December 9, 2019,

Sambit Shukla, Dipak Ghosal, Kesheng Wu, Alex Sim, and Matthew Farrens, “Co-optimizing Latency and Energy for IoT services using HMP servers in Fog Clusters.”, 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC), IEEE, August 15, 2019,

Qiming Lu, Liang Zhang, Sajith Sasidharan, Wenji Wu, Phil DeMar, Chin Guok, John Macauley, Inder Monga, Se-Young Yu, Jim Hao Chen, Joe Mambretti, Jin Kim, Seo-Young Noh, Xi Yang, Tom Lehman, Gary Liu, “BigData Express: Toward Schedulable, Predictable, and High-Performance Data Transfer”, 2018 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), IEEE/ACM, February 24, 2019,

Qiang Gong, Wenji Wu, Phil DeMar, “GoldenEye: stream-based network packet inspection using GPUs”, 2018 IEEE 43rd Conference on Local Computer Networks (LCN), IEEE, February 10, 2019,

Se-young Yu, Jim Chen, Fei Yeh, Joe Mambretti, Xiao Wang, Anna Giannakou, Eric Pouyoul, Marc Lyonnais, “SCinet DTN-as-a-service framework”, 2019 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), 2019, 1--8,

2018

Inder Monga, Chin Guok, John Macauley, Alex Sim, Harvey Newman, Justas Balcas, Phil DeMar, Linda Winkler, Xi Yang, Tom Lehman, “SDN for End-to-end Networked Science at the Exascale (SENSE)”, INDIS Workshop SC18, November 11, 2018,

The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project is building smart network services to accelerate scientific discovery in the era of ‘big data’ driven by Exascale, cloud computing, machine learning and AI. The project’s architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive ‘intent’ based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices. The significance of these capabilities is the ability for science applications to manage the network as a first-class schedulable resource akin to instruments, compute, and storage, to enable well-defined and highly tuned complex workflows that require close coupling of resources spread across a vast geographic footprint such as those used in science domains like high-energy physics and basic energy sciences.

Amel Bennaceur, Ampaeli Cano, Lilia Georgieva, Mariam Kiran, Maria Salama, Poonam Yadav, “Issues in Gender Diversity and Equality in the UK”, Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, ACM, July 13, 2018,

Paul Ruth, Mert Cevik, Cong Wang, Yuanjun Yao, Qiang Cao, Rubens Farias, Jeff Chase, Victor Orlikowski, Nick Buraglio, “Toward Live Inter-Domain Network Services on the ExoGENI Testbed”, 2018 IEEE INFOCOM, IEEE, April 15, 2018,

This paper introduces ExoPlex, a framework to improve the QoS of live (real) experiments on the ExoGENI federated testbed. The authors make the case for implementing the abstraction of network service providers (NSPs) as a way of having experimenters specify the performance characteristics they expect from the platform (at the testbed level). An example tenant using this version of ExoGENI enhanced with NSP capabilities is presented, and experimental results show the effectiveness of the approach.

Qiming Lu, Liang Zhang, Sajith Sasidharan, Wenji Wu, Phil DeMar, Chin Guok, John Macauley, Inder Monga, Se-young Yu, Jim Hao Chen, others, “Bigdata express: Toward schedulable, predictable, and high-performance data transfer”, 2018 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), January 1, 2018, 75--84,

Se-young Yu, Jim Chen, Joe Mambretti, Fei Yeh, “Analysis of CPU pinning and storage configuration in 100 Gbps network data transfer”, 2018 IEEE/ACM Innovating the Network for Data-Intensive Science (INDIS), 2018, 64--74,

2017

Liang Zhang, Phil Demar, Bockjoo Kim, Wenji Wu, “MDTM: Optimizing Data Transfer Using Multicore-Aware I/O Scheduling”, 2017 IEEE 42nd Conference on Local Computer Networks (LCN), IEEE, September 12, 2017,

Bulk data transfer is facing significant challenges in the coming era of big data. There are multiple performance bottlenecks along the end-to-end path from the source to destination storage system. The limitations of current generation data transfer tools themselves can have a significant impact on end-to-end data transfer rates. In this paper, we identify the issues that lead to underperformance of these tools, and present a new data transfer tool with an innovative I/O scheduler called MDTM. The MDTM scheduler exploits underlying multicore layouts to optimize throughput by reducing delay and contention for I/O reading and writing operations. With our evaluations, we show how MDTM successfully avoids NUMA-based congestion and significantly improves end-to-end data transfer rates across high-speed wide area networks.
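A minimal sketch of the core-affinity idea (binding reader and sender threads to cores near the relevant NUMA node); `os.sched_setaffinity` is a real Linux-only Python call, but the core layout and worker logic here are assumed for illustration and are not MDTM's implementation.

```python
# Sketch: pin data-transfer worker threads to cores near the NIC or storage NUMA node.
# Core IDs below are hypothetical; os.sched_setaffinity is Linux-only.
import os
import threading

NIC_NUMA_CORES = {0, 1, 2, 3}      # cores assumed local to the NIC
STORAGE_NUMA_CORES = {4, 5, 6, 7}  # cores assumed local to the storage controller

def pinned_worker(name: str, cores: set):
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cores)  # restrict this thread's scheduling to `cores`
    # ... the real read/send loop would run here; omitted in this sketch ...
    print(f"{name} restricted to cores {sorted(cores)}")

threads = [
    threading.Thread(target=pinned_worker, args=("network-sender", NIC_NUMA_CORES)),
    threading.Thread(target=pinned_worker, args=("disk-reader", STORAGE_NUMA_CORES)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```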

S Khan, T Yairi, M Kiran, “Towards a Cloud-based Machine Learning for Health Monitoring and Fault Diagnosis”, Asia Pacific Conference of the Prognostics and Health Management Society 2017, August 1, 2017,

A Mercian, M Kiran, E Pouyoul, B Tierney, I Monga, “INDIRA:‘Application Intent’ network assistant to configure SDN-based high performance scientific networks”, Optical Fiber Communication Conference, July 1, 2017,

Alessandro Zanni, Se-Young Yu, Paolo Bellavista, Rami Langar, Stefano Secci, “Automated selection of offloadable tasks for mobile computation offloading in edge computing”, 2017 13th international conference on network and service management (CNSM), 2017, 1--5,

Alessandro Zanni, Se-young Yu, Stefano Secci, Rami Langar, Paolo Bellavista, Daniel F Macedo, “Automated offloading of android applications for computation/energy optimizations”, 2017 IEEE conference on computer communications workshops (INFOCOM WKSHPS), 2017, 990--991,

2016

M Usman, A Iqbal, M Kiran, “A Bandwidth Friendly Architecture for Cloud Gaming”, 31st International Conference on Information Networking (ICOIN 2017), December 1, 2016,

M Kiran, E Pouyoul, A Mercian, B Tierney, C Guok, I Monga, “Enabling Intent to Configure Scientific Networks for High Performance Demands”, 3nd International Workshop on Innovating the Network for Data Intensive Science (INDIS) 2016, SC16., November 10, 2016,

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at the GM/CA Macromolecular Crystallography Beamlines at the Advanced Photon Source”, The 11th New Opportunities for Better User Group Software (NOBUGS) Conference, Copenhagen Denmark, October 1, 2016,

B Mohammed, M Kiran, IU Awan, KM Maiyama, “Optimising Fault Tolerance in Real-Time Cloud Computing IaaS Environment”, Future Internet of Things and Cloud (FiCloud), 2016 IEEE 4th International, 2016, September 15, 2016,

S. Stepanov, O. Makarov, M. Hilgart, S.B. Pothineni, J. Zurawski, J.L. Smith, R.F. Fischetti, “Integration of Fast Detectors into Beamline Controls at GM/CA@APS: Pilatus3 6M and Eiger 16M”, 12th International Conference on Biology and Synchrotron Radiation (BSR-16), Palo Alto CA, August 1, 2016,

Alberto Gonzalez, Jason Leigh, Sean Peisert, Brian Tierney, Andrew Lee, and Jennifer M. Schopf, “NETSAGE: Open Privacy-Aware Network Measurement, Analysis and Visualization Service”, TNC16 Networking Conference, June 15, 2016,

Anirban Mandal, Paul Ruth, Ilya Baldin, Dariusz Krol, Gideon Juve, Rajiv Mayani, Rafael Ferreira da Silva, Ewa Deelman, Jeremy Meredith, Jeffrey Vetter, Vickie Lynch, Ben Mayer, James Wynne III, Mark Blanco, Chris Carothers, Justin LaPre, Brian Tierney, “Toward an End-to-end Framework for Modeling, Monitoring and Anomaly Detection for Scientific Workflows”, Workshop on Large-Scale Parallel Processing (LSPP), in conjunction with 30th IEEE International Parallel & Distributed Processing Symposium (IPDPS), May 23, 2016,

Se-young Yu, Aniket Mahanti, Mingwei Gong, “Benchmarking ISPs in New Zealand”, 2016 IEEE 35th International Performance Computing and Communications Conference (IPCCC), 2016, 1--7,

2015

Nathan Hanford, Brian Tierney, Dipak Ghosal, “Optimizing Data Transfer Nodes using Packet Pacing”, Second Workshop on Innovating the Network for Data-Intensive Science, November 16, 2015,

Paper available from SIGHPC website as well.

M Kiran, “Women in HPC: Changing the Face of HPC”, SC15: HPC transforms, 2015, Austin Texas, November 15, 2015,

M Kiran, “Multiple platforms: Issues of porting Agent-Based Simulation from Grids to Graphics cards”, Workshop on Portability Among HPC Architectures for Scientific Applications, SC15: HPC transforms, 2015, Austin Texas., November 15, 2015,

P Yadav, M Kiran, A Bennaceur, L Georgieva, M Salama and A E Cano, “Jack of all Trades versus Master of one”, Grace Hopper 2015 Conference, November, 2015, November 1, 2015,

Mariam Kiran, Peter Murphy, Inder Monga, Jon Dugan, Sartaj Baveja, “Lambda Architecture for Cost-effective Batch and Speed Big Data processing”, First Workshop on Data-Centric Infrastructure for Big Data Science (DIBS), October 29, 2015,

This paper presents an implementation of the lambda architecture design pattern to construct a data-handling backend on Amazon EC2, providing high throughput, dense and intense data demand delivered as services, while minimizing the cost of network maintenance. This paper combines ideas from database management, cost models, query management and cloud computing to present a general architecture that could be applied in any given scenario where affordable online data processing of Big Datasets is needed. The results are presented with a case study of processing router sensor data on the current ESnet network as a working example of the approach. The results showcase a reduction in cost and argue the benefits of performing online analysis and anomaly detection for sensor data.
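A toy sketch of the lambda-architecture split the paper builds on: a batch layer periodically recomputes complete views while a speed layer keeps a low-latency incremental view, and queries merge the two. The data and queries are invented, and no AWS services are used here.

```python
# Toy lambda-architecture sketch (invented data; batch and speed views merged at query time).
from collections import defaultdict

master_dataset = [("router1", 100), ("router2", 250), ("router1", 300)]  # historical records
recent_stream = [("router1", 20), ("router3", 5)]                        # not yet in batch views

def batch_view(dataset):
    """Batch layer: periodically recompute a complete aggregate from all history."""
    view = defaultdict(int)
    for router, byte_count in dataset:
        view[router] += byte_count
    return view

def speed_view(stream):
    """Speed layer: keep a low-latency incremental aggregate of recent data only."""
    view = defaultdict(int)
    for router, byte_count in stream:
        view[router] += byte_count
    return view

def query(router):
    """Serving layer: merge batch and speed views to answer a query."""
    return batch_view(master_dataset)[router] + speed_view(recent_stream)[router]

print("router1 total bytes:", query("router1"))  # 100 + 300 + 20 = 420
```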

M Kiran, “Platform dependency and cloud use for ABM, Satellite Workshop, Computational Transparency in Modeling Complex Systems,”, Conference on Complex Systems, Arizona, USA, 2015., September 5, 2015,

P Yadav, M Kiran, A Bennaceur, L Georgieva, M Salama and A E Cano, “Impact of Gender Diversity and Equality Initiatives”, WomENcourage 2015 Conference, Uppsala, Sweden, October, 2015, September 1, 2015,

M Kiran, S Konur, M Burkitt, “PlatOpen Platform Dependency for Open ABM Complex Model Simulations, Satellite Workshop,”, Conference on Complex Systems, Arizona, USA, 2015., September 1, 2015,

Mariam Kiran, Kabiru Maiyama, Haroon Mir, Bashir Mohammad, Ashraf Al Oun, “Agent-Based Modelling as a Service on Amazon EC2: Opportunities and Challenges”, Utility and Cloud Computing (UCC), 2015 IEEE/ACM 8th International Conference on, September 1, 2015,

Ranjana Addanki, Sourav Maji, Malathi Veeraraghavan, Chris Tracy, “A measurement-based study of big-data movement”, 2015 European Conference on Networks and Communications (EuCNC), July 29, 2015,

S Konur, M Kiran, M Gheorghe, M Burkitt, F Ipate, “Agent-based high-performance simulation of biological systems on the GPU”, High Performance Computing and Communications, IEEE, 2015, May 1, 2015,

Shawn McKee, Marian Babik, Simone Campana, Tony Wildish, Joel Closier, Costin Grigoras, Ilija Vukotic, Michail Salichos, Kaushik De, Vincent Garonne, Jorge Alberto Diaz Cruz, Alessandra Forti, Christopher John Walker, Duncan Rand, Alessandro De Salvo, Enrico Mazzoni, Ian Gable, Frederique Chollet, Hsin Yen Chen, Ulf Bobson Severin Tigerstedt, Guenter Duckeck, Andreas Petzold, Fernando Lopez Munoz, Josep Flix, John Shade, Michael O'Connor, Volodymyr Kotlyar, Bruno Heinrich Hoeft, Jason Zurawski, “Integrating network and transfer metrics to optimize transfer efficiency and experiment workflows”, 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa Japan, April 13, 2015,

A Al-Ou’n, M Kiran, DD Kouvatsos, “Using Agent-Based VM Placement Policy,”, Future Internet of Things and Cloud (FiCloud), Rome, Italy, August, 2015, April 1, 2015,

B Mohammed, M Kiran, “Analysis of Cloud Test Beds Using OpenSource Solutions,”, Future Internet of Things and Cloud (FiCloud), Rome, Italy, August, 2015, April 1, 2015,

Adrian Lara, Byrav Ramamurthy, Eric Pouyoul, Inder Monga, “WAN Virtualization and Dynamic End-to-End Bandwidth Provisioning Using SDN”, Optical Fiber Communication Conference 2015, March 22, 2015,

We evaluate a WAN-virtualization framework in terms of delay and scalability and demonstrate that adding a virtual layer between the physical topology and the end-user brings significant advantages and tolerable delays.

Se-young Yu, Nevil Brownlee, Aniket Mahanti, “Characterizing performance and fairness of big data transfer protocols on long-haul networks”, 2015 IEEE 40th Conference on Local Computer Networks (LCN), 2015, 213--216,

Se-young Yu, Nevil Brownlee, Aniket Mahanti, “Comparative analysis of big data transfer protocols in an international high-speed network”, 2015 IEEE 34th International Performance Computing and Communications Conference (IPCCC), 2015, 1--9,

2014

Nathan Hanford, Vishal Ahuja, Matthew Farrens, Dipak Ghosal, Mehmet Balman, Eric Pouyoul, Brian Tierney, “Analysis of the effect of core affinity on high-throughput flows”, Proceedings of the Fourth International Workshop on Network-Aware Data Management (NDM '14), November 16, 2014,

Wenji Wu, Phil DeMar, “Wirecap: a novel packet capture engine for commodity NICs in high-speed networks”, ACM IMC'14, ACM, November 5, 2014, IMC'14:395-406,

Karel van der Veldt, Inder Monga, Jon Dugan, Cees de Laat, Paola Grosso, “Carbon-aware path provisioning for NRENs”, International Green Computing Conference, November 3, 2014,

National Research and Education Networks (NRENs) are becoming keener to provide information on the energy consumption of their equipment. However, only a few NRENs are trying to use the available information to reduce power consumption and/or carbon footprint. We set out to study the impact that deploying energy-aware networking devices may have in terms of CO2 emissions, taking the ESnet network as a use case. We defined a model that can be used to select paths that lead to a lower impact on the CO2 footprint of the network. We implemented a simulation of the ESnet network using our model to investigate the CO2 footprint under different traffic conditions. Our results suggest that NRENs such as ESnet could reduce their network’s environmental impact if they were to deploy energy-aware hardware combined with path setup tailored to reduction of carbon footprint. This could be achieved by modification of the current path provisioning systems used in the NREN community.
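A minimal sketch of CO2-aware path selection under stated assumptions: each link carries an emissions weight and a shortest path is computed over those weights instead of latency or hop count. The topology and per-link values are invented and do not come from the paper's ESnet model.

```python
# Sketch: pick the path with the lowest estimated CO2 cost (invented topology and weights).
import heapq

# graph[node] = list of (neighbor, grams_CO2_per_GB); hypothetical per-link emission factors
graph = {
    "A": [("B", 12.0), ("C", 4.0)],
    "B": [("D", 3.0)],
    "C": [("D", 6.0)],
    "D": [],
}

def lowest_co2_path(graph, src, dst):
    """Dijkstra over per-link CO2 weights instead of latency or hop count."""
    heap = [(0.0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, co2 in graph[node]:
            heapq.heappush(heap, (cost + co2, nxt, path + [nxt]))
    return None

print(lowest_co2_path(graph, "A", "D"))  # -> (10.0, ['A', 'C', 'D'])
```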

Mariam Kiran, Anthony JH Simons, “Model-Based Testing for Composite Web Services in Cloud Brokerage Scenarios”, Advances in Service-Oriented and Cloud Computing, ESOCC, 2014, September 1, 2014,

M Kiran, A Friesen, A J H Simons and W K R Schwach, “Model-based Testing in Cloud Brokerage Scenarios,”, Proc. 1st. Int. Workshop on Cloud Service Brokerage. Service-Oriented Computing, ICSOC 2013 Workshops, LNCS 8377, 2014, September 1, 2014,

Henrique Rodriguez, Inder Monga, Abhinava Sadasivarao, Sharfuddin Sayed, Chin Guok, Eric Pouyoul, Chris Liou, Tajana Rosing, “Traffic Optimization in Multi-Layered WANs using SDN”, 22nd Annual Symposium on High-Performance Interconnects, Best Student Paper Award, August 27, 2014,

Wide area networks (WAN) forward traffic through a mix of packet and optical data planes, composed of a variety of devices from different vendors. Multiple forwarding technologies and encapsulation methods are used for each data plane (e.g. IP, MPLS, ATM, SONET, Wavelength Switching). Despite the standards defined, the control planes of these devices are usually not interoperable, and different technologies are used to manage each forwarding segment independently (e.g. OpenFlow, TL-1, GMPLS). The result is a lack of coordination between layers and inefficient resource usage. In this paper we discuss the design and implementation of a system that uses unmodified OpenFlow to optimize network utilization across layers, enabling practical bandwidth virtualization. We discuss strategies for scalable traffic monitoring and for minimizing losses on route updates across layers. We explore two use cases that benefit from multi-layer bandwidth on demand provisioning. A prototype of the system was built using a traditional circuit reservation application and an unmodified SDN controller, and its evaluation was performed on a multi-vendor testbed.

http://blog.infinera.com/2014/09/05/henrique-rodrigues-wins-best-student-paper-at-ieee-hot-interconnects-for-infinerabrocadeesnet-multi-layer-sdn-demo/

http://esnetupdates.wordpress.com/2014/09/05/esnet-student-assistant-henrique-rodrigues-wins-best-student-paper-award-at-hot-interconnects/
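As a rough, hypothetical sketch of the cross-layer monitoring-and-rerouting loop the abstract above describes: poll packet-layer link utilization and act when a link is congested while optical capacity is still available. The functions below are placeholders and not an actual OpenFlow or controller API.

```python
# Sketch of a cross-layer optimization loop (placeholder functions, no real OpenFlow calls).

CONGESTION_THRESHOLD = 0.8  # fraction of link capacity (assumed policy)

def get_link_utilization():
    """Placeholder for per-port counters polled from the SDN controller."""
    return {("r1", "r2"): 0.92, ("r1", "r3"): 0.35}  # invented readings

def optical_capacity_available(link):
    """Placeholder for a query to the optical-layer management system."""
    return True

def add_capacity_or_reroute(link):
    """Placeholder action: shift flows to a parallel path or request more optical bandwidth."""
    print(f"link {link}: requesting additional optical bandwidth / rerouting flows")

for link, utilization in get_link_utilization().items():
    if utilization > CONGESTION_THRESHOLD and optical_capacity_available(link):
        add_capacity_or_reroute(link)
```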

Malathi Veeraraghavan, Inder Monga, “Broadening the scope of optical circuit networks”, International Conference On Optical Network Design and Modeling, Stockholm, Sweden, May 22, 2014,

Advances in optical communications and switching technologies are enabling energy-efficient, flexible, higher-utilization network operations. To take full advantage of these capabilities, the scope of optical circuit networks can be increased in both the vertical and horizontal directions. In the vertical direction, some of the existing Internet applications, transport-layer protocols, and application programming interfaces need to be redesigned and new ones invented to leverage the high-bandwidth, low-latency capabilities of optical circuit networks. In the horizontal direction, inter-domain control and management protocols are required to create a global-scale interconnection of optical circuit-switched networks.

Se-young Yu, Nevil Brownlee, Aniket Mahanti, “Performance and fairness issues in big data transfers”, Proceedings of the 2014 CoNEXT on Student Workshop, 2014, 9--11,

2013

Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, and Jason Zurawski, “The Science DMZ: A Network Design Pattern for Data-Intensive Science”, SC13: The International Conference for High Performance Computing, Networking, Storage and Analysis, Best Paper Nominee. Denver CO, USA, ACM. DOI:10.1145/2503210.2503245, November 19, 2013, LBNL 6366E.

The ever-increasing scale of scientific data has become a significant challenge for researchers that rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance, and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, that create an optimized network environment for science. We describe use cases from universities, supercomputing centers and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow, and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.

Nathan Hanford, Vishal Ahuja, Mehmet Balman, Matthew Farrens, Dipak Ghosal, Eric Pouyoul and Brian Tierney, “Characterizing the Impact of End-System Affinities On the End-to-End Performance of High-Speed Flows”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13, November 17, 2013,

Ezra Kissel, Martin Swany, Brian Tierney and Eric Pouyoul, “Efficient Wide Area Data Transfer Protocols for 100 Gbps Networks and Beyond”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13:, November 17, 2013,

Campana S., Bonacorsi D., Brown A., Capone E., De Girolamo D., Fernandez Casani A., Flix Molina J., Forti A., Gable I., Gutsche O., Hesnaux A., Liu L., Lopez Munoz L., Magini N., McKee S., Mohammad K., Rand D., Reale M., Roiser S., Zielinski M., and Zurawski J., “Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology”, 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), October 2013,

Jason Zurawski, Sowmya Balasubramanian, Aaron Brown, Ezra Kissel, Andrew Lake, Martin Swany, Brian Tierney, Matt Zekauskas, “perfSONAR: On-board Diagnostics for Big Data”, 1st Workshop on Big Data and Science: Infrastructure and Services Co-located with IEEE International Conference on Big Data 2013 (IEEE BigData 2013), October 6, 2013,

U Khan, M Oriol, M Kiran, “Threat methodology for securing scalable video in the Cloud”, Internet Technology and Secured Transactions (ICITST), 2013 8th Int. Conf., Pages 428-436, IEEE, 2013, September 1, 2013,

Abhinava Sadasivarao, Sharfuddin Syed, Chris Liou, Ping Pan, Andrew Lake, Chin Guok, Inder Monga, “Open Transport Switch - A Software Defined Networking Architecture for Transport Networks”, August 17, 2013,

There have been many proposals to unify the control and management of packet and circuit networks, but none have been deployed widely. In this paper, we propose a simple programmable architecture that abstracts a core transport node into a programmable virtual switch, which meshes well with the software-defined networking paradigm while leveraging the OpenFlow protocol for control. A demonstration use case of an OpenFlow-enabled optical virtual switch implementation managing a small optical transport network for big-data applications is described. With appropriate extensions to OpenFlow, we discuss how the programmability and flexibility SDN brings to packet-optical backbone networks can be instrumental in solving some of the complex multi-vendor, multi-layer, multi-domain issues service providers face today.

Baris Aksanli, Jagannathan Venkatesh, Tajana Rosing, Inder Monga, “A Comprehensive Approach to Reduce the Energy Cost of Network of Datacenters”, International Symposium on Computers and Communications, Best Student Paper award, July 7, 2013,

Several studies have proposed job migration over the wide area network (WAN) to reduce the energy of networks of datacenters by taking advantage of different electricity prices and load demands. Each study focuses on only a small subset of network parameters, and thus their results may have large errors. For example, datacenters usually have long-term power contracts instead of paying market prices. However, previous work neglects these contracts, thus overestimating the energy savings by 2.3x. We present a comprehensive approach to minimize the energy cost of networks of datacenters by modeling performance of the workloads, power contracts, local renewable energy sources, different routing options for the WAN, and future router technologies. Our method can reduce the energy cost of datacenters by up to 28%, while reducing the error in the energy cost estimation by 2.6x.
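
As a toy illustration of why ignoring power contracts can inflate estimated savings (this is not the paper's model, and all prices and volumes below are hypothetical), compare a market-price-only estimate with one that treats already-contracted energy as a sunk cost:

```python
# Hypothetical numbers, for illustration only: if a datacenter has already committed
# to a long-term power contract, migrating load away does not avoid the contracted
# cost, so market-price-only models overstate the savings.

contracted_mwh = 10.0          # energy already paid for under contract (MWh)
market_price_here = 90.0       # $/MWh spot price at the local site
market_price_remote = 40.0     # $/MWh spot price at a remote site
migrated_mwh = 6.0             # load we consider moving to the remote site

# Market-price-only model: savings look large.
naive_savings = migrated_mwh * (market_price_here - market_price_remote)

# Contract-aware model: the first `contracted_mwh` at home are a sunk cost, so moving
# load that would have run under the contract avoids no local spend, while the remote
# site's energy must still be paid for.
local_mwh_avoided_beyond_contract = max(0.0, migrated_mwh - contracted_mwh)
contract_aware_savings = (
    local_mwh_avoided_beyond_contract * market_price_here
    - migrated_mwh * market_price_remote
)

print(f"naive savings estimate:          ${naive_savings:.2f}")
print(f"contract-aware savings estimate: ${contract_aware_savings:.2f}")
```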

William E. Johnston, Eli Dart, Michael Ernst, Brian Tierney, “Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC”, TERENA Networking Conference, June 3, 2013,

Z. Yan, M. Veeraraghavan, C. Tracy, and C. Guok, “On how to Provision Quality of Service (QoS) for Large Dataset Transfers”, Proceedings of the Sixth International Conference on Communication Theory, Reliability, and Quality of Service, April 21, 2013,

Andrew Luxton-Reilly, Paul Denny, Diana Kirk, Ewan Tempero, Se-Young Yu, “On the differences between correct student solutions”, Proceedings of the 18th ACM conference on Innovation and technology in computer science education, 2013, 177--182,

Se-young Yu, Nevil Brownlee, Aniket Mahanti, “Comparative performance analysis of high-speed transfer protocols for big data”, 38th Annual IEEE Conference on Local Computer Networks, 2013, 292--295,

2012

Yufei Ren, Tan Li, Dantong Yu, Shudong Jin, Thomas Robertazzi, Brian L. Tierney, Eric Pouyoul, “Protocols for Wide-Area Data-intensive Applications: Design and Performance Issues”, Proceedings of IEEE Supercomputing 2012, November 12, 2012,

Providing high-speed data transfer is vital to various data-intensive applications. While there have been remarkable technology advances to provide ultra-high-speed network bandwidth, existing protocols and applications may not be able to fully utilize the bare-metal bandwidth due to their inefficient design. We identify that the same problem remains in the field of Remote Direct Memory Access (RDMA) networks. RDMA offloads TCP/IP protocols to hardware devices. However, its benefits have not been fully exploited due to the lack of efficient software and application protocols, in particular in wide-area networks. In this paper, we address the design choices to develop such protocols. We describe a protocol implemented as part of a communication middleware. The protocol has its own flow control, connection management, and task synchronization, and it maximizes the parallelism of RDMA operations. We demonstrate its performance benefit on various local and wide-area testbeds, including the DOE ANI testbed with RoCE links and InfiniBand links.

Gunter D., Kettimuthu R., Kissel E., Swany M., Yi J., Zurawski J., “Exploiting Network Parallelism for Improving Data Transfer Performance”, IEEE/ACM Annual SuperComputing Conference (SC12) Companion Volume, Salt Lake City Utah, USA, November 2012,

Wu, Q., Yun, D., Zhu, M., Brown, P., and Zurawski, J., “A Workflow-based Network Advisor for Data Movement with End-to-end Performance Optimization”, The Seventh Workshop on Workflows in Support of Large-Scale Science (WORKS12), Salt Lake City Utah, USA, November 2012,

Inder Monga, Eric Pouyoul, Chin Guok, “Software Defined Networking for big-data science (paper)”, SuperComputing 2012, November 11, 2012,

University campuses, supercomputer centers, and R&E networks are challenged to architect, build, and support IT infrastructure to deal effectively with the data deluge facing most science disciplines. Hybrid network architecture, multi-domain bandwidth reservations, performance monitoring, and GLIF Open Lightpath Exchanges (GOLE) are examples of network architectures that have been proposed, championed, and implemented successfully to meet the needs of science. Most recently, the Science DMZ, a campus design pattern that bypasses traditional performance hotspots in typical campus network implementations, has been gaining momentum. In this paper and corresponding demonstration, we build upon the SC11 SCinet Research Sandbox demonstrator with software-defined networking to explore new architectural approaches. A virtual switch network abstraction is explored that, when combined with software-defined networking concepts, provides science users a simple, adaptable network framework to meet their upcoming application requirements.

Brian Tierney, Ezra Kissel, Martin Swany, Eric Pouyoul, “Efficient Data Transfer Protocols for Big Data”, Proceedings of the 8th International Conference on eScience, IEEE, October 9, 2012,

Data set sizes are growing exponentially, so it is important to use data movement protocols that are the most efficient available. Most data movement tools today rely on TCP over sockets, which limits flows to around 20Gbps on today’s hardware. RDMA over Converged Ethernet (RoCE) is a promising new technology for high-performance network data movement with minimal CPU impact over circuit-based infrastructures. We compare the performance of TCP, UDP, UDT, and RoCE over high latency 10Gbps and 40Gbps network paths, and show that RoCE-based data transfers can fill a 40Gbps path using much less CPU than other protocols. We also show that the Linux zero-copy system calls can improve TCP performance considerably, especially on current Intel “Sandy Bridge”-based PCI Express 3.0 (Gen3) hosts.
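
For readers who want to experiment with the zero-copy behavior mentioned above, the following minimal sketch sends a file over TCP using Python's socket.sendfile(), which on Linux is backed by the sendfile() zero-copy system call. The host name, port, file name, and buffer size are placeholders; this is not the benchmarking harness used in the paper.

```python
import socket

DEST = ("receiver.example.org", 5001)   # hypothetical receiver
PATH = "dataset.bin"                    # hypothetical large file

def send_file_zero_copy(dest, path, sndbuf_bytes=32 * 1024 * 1024):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request large socket buffers before connecting; on high bandwidth-delay-product
    # paths this matters, and the effective size is capped by net.core.wmem_max.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf_bytes)
    with sock:
        sock.connect(dest)
        with open(path, "rb") as f:
            # socket.sendfile() uses os.sendfile() on Linux, avoiding the extra
            # user-space copy that read()/send() loops would incur.
            return sock.sendfile(f)

if __name__ == "__main__":
    print(f"sent {send_file_zero_copy(DEST, PATH)} bytes")
```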

T Kirkham, K Djemame, M Kiran, M Jiang, G Vafiadis, A Evangelinou, “Risk based SLA management in clouds: A legal perspective”, Internet Technology and Secured Transactions, 2012, September 1, 2012,

M.Kiran, A.U.Khan, M.Jiang, K.Djemame, M.Oriol, M.Corrales, “Managing Security Threats in Clouds”, Digital Research 2012, September 1, 2012,

A.U.Khan, M.Kiran, M.Oriol, M.Jiang, K.Djemame, “Security risks and their management in cloud computing”, CloudCom 2012: 121-128, September 1, 2012,

T.Kirkham, D.Armstrong, K.Djemame, M.Corrales, M.Kiran, I.Nwankwo, M.Jiang, N.Forgo, “Assuring Data Privacy in Cloud Transformations”, In: TrustCom, 2012, September 1, 2012,

Mehmet Balman, Eric Pouyoul, Yushu Yao, E. Wes Bethel, Burlen Loring, Prabhat, John Shalf, Alex Sim, Brian L. Tierney, “Experiences with 100Gbps Network Applications”, The Fifth International Workshop on Data Intensive Distributed Computing (DIDC 2012), June 20, 2012,

100Gbps networking has finally arrived, and many research and educational institutions have begun to deploy 100Gbps routers and services. ESnet and Internet2 worked together to make 100Gbps networks available to researchers at the Supercomputing 2011 conference in Seattle, Washington. In this paper, we describe two of the first applications to take advantage of this network. We demonstrate a visualization application that enables remotely located scientists to gain insights from large datasets. We also demonstrate climate data movement and analysis over the 100Gbps network. We describe a number of application design issues and host tuning strategies necessary for enabling applications to scale to 100Gbps rates.

Jon Dugan, Gopal Vaswani, Gregory Bell, Inder Monga, “The MyESnet Portal: Making the Network Visible”, TERENA 2012 Conference, May 22, 2012,

ESnet provides a platform for moving large data sets and accelerating worldwide scientific collaboration. It provides high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions, enabling them to collaborate on some of the world's most important scientific challenges including renewable energy sources, climate science, and the origins of the universe.

ESnet has embarked on a major project to provide substantial visibility into the inner-workings of the network by aggregating diverse data sources, exposing them via web services, and visualizing them with user-centered interfaces. The portal’s strategy is driven by understanding the needs and requirements of ESnet’s user community and carefully providing interfaces to the data to meet those needs. The 'MyESnet Portal' allows users to monitor, troubleshoot, and understand the real time operations of the network and its associated services.

This paper will describe the MyESnet portal and the process of developing it. The data for the portal comes from a wide variety of sources: homegrown systems, commercial products, and even peer networks. Some visualizations from the portal are presented, highlighting interesting and unusual cases such as power consumption and flow data. Developing effective user interfaces is an iterative process: when a new feature is released, users are both interviewed and observed using the site. This process yielded valuable insights into what is important to users and what other features and services they may want. Open source tools were used to build the portal, and the pros and cons of these tools are discussed.

Zurawski J., Ball R., Barczyk A., Binkley M., Boote J., Boyd E., Brown A., Brown R., Lehman T., McKee S., Meekhof B., Mughal A., Newman H., Rozsa S., Sheldon P., Tackett A., Voicu R., Wolff S., and Yang X., “The DYNES Instrument: A Description and Overview”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

McKee S., Lake A., Laurens P., Severini H., Wlodek T., Wolff S., and Zurawski J., “Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

Baris Aksanli, Tajana Rosing, Inder Monga, “Benefits of Green Energy and Proportionality in High Speed Wide Area Networks Connecting Data Centers”, Design, Automation and Test in Europe (DATE), March 5, 2012,

Many companies deploy multiple data centers across the globe to satisfy the dramatically increased computational demand. Wide area connectivity between such geographically distributed data centers plays an important role in ensuring both the quality of service and, as bandwidths increase to 100Gbps and beyond, an efficient way to dynamically distribute the computation. The energy cost of data transmission is dominated by the router power consumption, which is unfortunately not energy proportional. In this paper we not only quantify the performance benefits of leveraging the network to run more jobs, but also analyze its energy impact. We compare the benefits of redesigning routers to be more energy efficient to those obtained by leveraging locally available green energy as a complement to the brown energy supply. Furthermore, we design novel green-energy-aware routing policies for wide area traffic and compare them to a state-of-the-art shortest-path routing algorithm. Our results indicate that using energy-proportional routers powered in part by green energy, along with our new routing algorithm, results in a 10x improvement in per-router energy efficiency with a 36% average increase in the number of jobs completed.

Brian Tierney, Ezra Kissel, Martin Swany, Eric Pouyoul, “Efficient data transfer protocols for big data”, 8th IEEE International Conference on E-Science, e-Science 2012, Chicago, IL, USA, October 8-12, 2012, IEEE Computer Society, January 2012, 1--9,

2011

Zurawski J., Boyd E., Lehman T., McKee S., Mughal A., Newman H., Sheldon P, Wolff S., and Yang X., “Scientific data movement enabled by the DYNES instrument”, Proceedings of the first international workshop on Network-aware data management (NDM ’11), Seattle WA, USA, November 2011,

K.Djemame, D.Armstrong, M.Kiran, M.Jiang, “A Risk Assessment Framework and Software Toolkit for Cloud Service Ecosystems”, Cloud Computing 2011, The Second International Conference on Cloud Computing, Grids and Virtualization, pp. 119-126, ISBN: 978-1-61208-153-3, Italy, September 1, 2011,

S.F.Adra, M.Kiran, P.McMinn, N.Walkinshaw, “A multiobjective optimisation approach for the dynamic inference and refinement of agent-based model specifications”, IEEE Congress on Evolutionary Computation 2011: 2237-2244, New Orleans, USA, January 2, 2011,

M.Kiran, M.Jiang, D.Armstrong and K.Djemame, “Towards a Service Life Cycle-based Methodology for Risk Assessment in Cloud Computing”, CGC 2011, International Conference on Cloud and Green Computing, December, Australia, Proceedings DASC 2011: 449-456, January 2, 2011,

2010

M.Kiran, P.Richmond, M.Holcombe, L.S.Chin, D.Worth and C.Greenough, “FLAME: Simulating Large Populations of Agents on Parallel Hardware Architectures”, AAMAS 2010: 1633-1636, Toronto, Canada, June 1, 2010,

Swany M., Portnoi M., Zurawski J., “Information services algorithm to heuristically summarize IP addresses for distributed, hierarchical directory”, 11th IEEE/ACM International Conference on Grid Computing (Grid2010), 2010,

2009

Tierney B., Metzger J., Boote J., Brown A., Zekauskas M., Zurawski J., Swany M., Grigoriev M., “perfSONAR: Instantiating a Global Network Measurement Framework”, 4th Workshop on Real Overlays and Distributed Systems (ROADS’09) Co-located with the 22nd ACM Symposium on Operating Systems Principles (SOSP), January 1, 2009, LBNL-1452

Grigoriev M., Boote J., Boyd E., Brown A., Metzger J., DeMar P., Swany M., Tierney B., Zekauskas M., Zurawski J., “Deploying distributed network monitoring mesh for LHC Tier-1 and Tier-2 sites”, 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), January 1, 2009,

2008

A. Baranovski, K. Beattie, S. Bharathi, J. Boverhof, J. Bresnahan, A. Chervenak, I. Foster, T. Freeman, D. Gunter, K. Keahey, C. Kesselman, R. Kettimuthu, N. Leroy, M. Link, M. Livny, R. Madduri, G. Oleynik, L. Pearlman, R. Schuler, and B. Tierney, “Enabling Petascale Science: Data Management, Troubleshooting and Scalable Science Services”, Proceedings of SciDAC 2008, July 1, 2008,

2007

Dan Gunter, Brian L. Tierney, Aaron Brown, Martin Swany, John Bresnahan, Jennifer M. Schopf, “Log Summarization and Anomaly Detection for Troubleshooting Distributed Systems”, Proceedings of the 8th IEEE/ACM International Conference on Grid Computing, September 19, 2007,

Matthias Vallentin, Robin Sommer, Jason Lee, Craig Leres, Vern Paxson, Brian Tierney, “The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware”, Proceedings of the Symposium on Recent Advances in Intrusion Detection (RAID), September 10, 2007,

Tom Lehman, Xi Yang, Chin P. Guok, Nageswara S. V. Rao, Andy Lake, John Vollbrecht, Nasir Ghani, “Control Plane Architecture and Design Considerations for Multi-Service, Multi-Layer, Multi-Domain Hybrid Networks”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

2006

Chin Guok, David Robertson, Mary Thompson, Jason Lee, Brian Tierney and William Johnston, “Intra and Interdomain Circuit Provisioning Using the OSCARS Reservation System”, Third International Conference on Broadband Communications, Networks, and Systems, IEEE/ICST, October 1, 2006,

Zurawski, J., Swany M., and Gunter D., “A Scalable Framework for Representation and Exchange of Network Measurements”, 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TridentCom 2006), Barcelona, Spain, 2006,

2005

R. Pang, M. Allman, M. Bennett, J. Lee, V. Paxson, B. Tierney, “A First Look at Modern Enterprise Traffic”, Internet Measurement Conference 2005 (IMC-2005), October 19, 2005,

Hanemann A., Boote J. , Boyd E., Durand J., Kudarimoti L., Lapacz R., Swany M., Trocha S., and Zurawski J., “PerfSONAR: A Service-Oriented Architecture for Multi-Domain Network Monitoring”, International Conference on Service Oriented Computing (ICSOC 2005), Amsterdam, The Netherlands, 2005,

Wang D., Zurawski J., “Fault-Tolerance Schemes for Hierarchical Mesh Networks”, The 6th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT 2005), Dalian, China, 2005,

Zurawski J., Swany M., Beck M. and Ding Y., “Logistical Multicast for Data Distribution”, Proceedings of CCGrid, Workshop on Grids and Advanced Networks 2005 (GAN05), Cardiff, Wales, 2005,

Daniel K. Gunter, Keith R. Jackson, David E. Konerding, Jason R. Lee and Brian L. Tierney, “Essential Grid Workflow Monitoring Elements”, The 2005 International Conference on Grid Computing and Applications (GCA'05), 2005,

2003

D. Gunter, B. Tierney, C. E. Tull, V. Virmani, “On-Demand Grid Application Tuning and Debugging with the NetLogger Activation Service”, 4th International Workshop on Grid Computing (Grid2003), November 17, 2003,

D. Agarwal, J. M. González, G. Jin, B. Tierney, “An Infrastructure for Passive Network Monitoring of Application Data Streams”, 2003 Passive and Active Measurement Workshop, April 14, 2003,

Dan Gunter, Brian Tierney, “NetLogger: A Toolkit for Distributed System Performance Tuning and Debugging”, Proceedings of The 8th IFIP/IEEE International Symposium on Integrated Network Management, March 24, 2003,

2002

A Chervenak, E. Deelman, I. Foster, A. Iamnitchi, C. Kesselman, W. Hoschek, P. Kunszt, M. Ripeanu, H. Stockinger, K. Stockinger, B. Tierney, “Giggle: A Framework for Constructing Scalable Replica Location Services”, Proceeding of IEEE Supercomputing 2002 Conference, November 15, 2002,

J. Lee, D. Gunter, M. Stoufer, B. Tierney, “Monitoring Data Archives for Grid Environments”, Proceeding of IEEE Supercomputing 2002 Conference, November 15, 2002,

T. Dunigan, M. Mathis and B. Tierney, “A TCP Tuning Daemon”, Proceeding of IEEE Supercomputing 2002 Conference, November 10, 2002,

D. Gunter, B. Tierney, K. Jackson, J. Lee, M. Stoufer,, “Dynamic Monitoring of High-Performance Distributed Applications”, Proceedings of the 11th IEEE Symposium on High Performance Distributed Computing (HPDC-11), July 10, 2002,

2001

Brian L. Tierney, Joseph B. Evans, Dan Gunter, Jason Lee, Martin Stoufer, “Enabling Network-Aware Applications”, Proceedings of the 10th IEEE Symposium on High Performance Distributed Computing (HPDC-10), August 15, 2001,

2000

B. Tierney, B. Crowley, D. Gunter, M. Holding, J. Lee, M. Thompson, “A Monitoring Sensor Management System for Grid Environments”, Proceedings of the IEEE High Performance Distributed Computing conference ( HPDC-9 ), August 10, 2000,

1999

Tierney, B., Lee, J., Crowley, B., Holding, M., Hylton, J., Drake, F., “A Network-Aware Distributed Storage Cache for Data Intensive Environments”, Proceedings of IEEE High Performance Distributed Computing conference ( HPDC-8 ), August 15, 1999,

1998

Tierney, B., W. Johnston, B. Crowley, G. Hoo, C. Brooks, D. Gunter, “The NetLogger Methodology for High Performance Distributed Systems Performance Analysis”, Proceedings of IEEE High Performance Distributed Computing conference (HPDC-7), July 12, 1998,


Book

2017

Mariam Kiran, X-Machines for Agent-Based Modeling: FLAME Perspectives, (January 30, 2017)

Book Chapter

2016

Baris Aksanli, Jagannath Venkatesh, Inder Monga, Tajana Rosing, “Renewable Energy Prediction for Improved Utilization and Efficiency in Data Centers and Backbone Networks”, ( May 30, 2016)

The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

Se-Young Yu, Nevil Brownlee, Aniket Mahanti, “Performance Evaluation of Protocols for Big Data Transfers”, Big Data: Storage, Sharing, and Security; Auerbach Publications: New York, NY, USA, ( 2016) Pages: 43

2015

Mariam Kiran, “What is Modelling and Simulation: An introduction”, Encyclopedia of Computer Science and Technology, ( December 24, 2015)

Mariam Kiran, “Legal Issues Surrounding Connected Government Services: A Closer Look at G-Clouds”, Cloud Computing Technologies for Connected Government, ( October 24, 2015)

M Kiran, “A methodology for Cloud Security Risks Management”, Cloud Computing, ( October 20, 2015)

2008

William Johnston, Evangelos Chaniotakis, Eli Dart, Chin Guok, Joe Metzger, Brian Tierney, “The Evolution of Research and Education Networks and their Essential Role in Modern Science”, Trends in High Performance & Large Scale Computing, ( November 1, 2008)

Published in: "Trends in High Performance & Large Scale Computing" Lucio Grandinetti and Gerhard Joubert, Editors

William Johnston, Joe Metzger, Mike O'Connor, Michael Collins, Joseph Burrescia, Eli Dart, Jim Gagliardi, Chin Guok, Kevin Oberman, “Network Communication as a Service-Oriented Capability”, High Performance Computing and Grids in Action, Volume 16, Advances in Parallel Computing, ( March 1, 2008)

In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as “services” – they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service – it is provided, in general, as a “best effort” capability with no guarantees and only statistical predictability.

In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science – e.g., the multi-disciplinary simulation of next-generation climate modeling and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) – networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures.

In this paper we describe ESnet's approach to each of these – an approach that is part of an international community effort to have intra-distributed system communication be based on a service-oriented capability.

Presentation/Talk

2023

Marc Koerner, Chris Cummings, Network Orchestration at ESnet, Cisco Automation Developer Days, December 4, 2023,

ESnet is the DOE’s internal research network WAN provider. The ESnet network is designed as a high-bandwidth, jumbo-frame-optimized network, capable of transporting vast amounts of data between US national laboratories and supercomputing facilities. Thus, ESnet delivers end-to-end connectivity for the scientific community for all sorts of data analytics and simulations. One of the major goals for our latest network generation (ESnet6) was to have a fully orchestrated and automated configuration management system. Therefore, ESnet is leveraging tools like the Cisco Network Services Orchestrator (NSO) to deploy router configuration in a centralized and service-oriented fashion. This talk will provide an overview of and insights into the current state of our NSO services, as well as how NSO is utilized in our network orchestration stack.

Chris Cummings, Benefits of IPv6 For Software Development, 2023 UK IPv6 Council Annual Meeting, November 21, 2023,

Chris Cummings, Dan Kelcher, Jeff Kala, Challenges to Network Automation Adoption, AutoCon0, November 13, 2023,

Network Orchestration is a defining factor in next generation networks, enabling operators to deliver more consistent and reliable services. Using the collaboratively developed Workflow Orchestrator and other commercial and open source tools, ESnet has been able to successfully Orchestrate and Automate network configuration deployment for large swaths of the ESnet6 network. This approach has enabled rapid deployment of new network services, as well as ensuring that configuration standards are well enforced when deploying network services. During this talk, we will provide a brief history of automation at ESnet, Introduce The Workflow Orchestrator, dive into what our goals were for orchestration and automation in the ESnet6 project, describe the technology and process that we used to meet those goals, and then provide a live demonstration of ESnet’s orchestration tooling in action. Finally, we will discuss the lessons we learned along the way while developing this tooling, providing time for Q&A.

Bruce A. Mah, 1:1 Packet Sampling, 12th SIG-NGN Meeting, October 26, 2023,

Adam Slagell, NSF Cybersecurity Summit Opening & Welcome, October 24, 2023,

Welcome to open the summit, history of ESnet, and an introduction to the SAFER workshop.

Jason Zurawski, Kenneth Miller, EPOC Office Hours, The 2023 NSF Campus Cyberinfrastructure PI Workshop, September 28, 2023,

Chris Cummings, Ryan Vredenburg, From Zero to Orchestrated Workshop, TechEX23, September 22, 2023,

Getting started with Network Orchestration is a daunting task that requires a lot of forethought and domain knowledge. Join this interactive full-day technical workshop to benefit from ESnet and SURF network and software engineers who have already gone through this process and are ready to share their knowledge.

A development environment will be provided; you just need to bring a laptop with a working docker setup and an IDE (preferably PyCharm or VSCode). The workshop will begin with introductions to product and workflow modeling with the Workflow Orchestrator and then move to interactive development sessions, finally ending with an open discussion around tailoring the orchestrator to your use-cases.

Dale W. Carder, Minding our MANRS at ESnet, Internet2 Tech Exchange 2023, September 21, 2023,

Dan Doyle, Empowering Measurement Users at ESnet, September 21, 2023,

Come hear how ESnet has worked to put more of its portfolio of network measurement collection and analysis capabilities directly – and securely – into the hands of users. These services and APIs are built around ESnet’s Stardust measurement platform and allow users not only to view data about the network, but also to control how data is visualized and reported on. Learn about how these themes also extend into some of the open source and community efforts that ESnet is involved in, with technologies and patterns inspired from the successes in Stardust.

Being more of a philosophical approach rather than a single technology or implementation strategy, we will be touching on several topics over the course of this talk. Each topic will focus on where this need came from, our approach to delivering on the need, lessons learned along the way, and where we see it growing in the future.

– The need to collect a variety of data and metadata from different sources using different protocols, as well as the ability to react quickly to changes in user needs
– The growing need at ESnet for custom network maps targeting a range of different end users and goals, such as engineering diagrams, outreach maps, or marketing and overmap maps
– ESnet’s use of Grafana Enterprise to provide federated authentication and filtered views of network data for external users
– Community benefit in Netsage and perfSONAR from open sourced data collection pipelines and storage based on lessons learned from Stardust

Central to all of these topics is the idea of Zero Trust. As we work to make these services more multi tenant capable and available to a broader segment of users, we must be able to consistently authenticate and authorize these users to ensure that they are only able to see and make changes to appropriate data. We will touch on some of the current and future efforts in this space as well.

By providing users with increased ability to see and manipulate information about the network, we hope to optimize many workflows by removing dependencies on developer or network engineer cycles while simultaneously freeing those resources up to work on other high impact items.

Dale W. Carder, R&E Upgrades for HL-LHC, Internet2 Tech Exchange 2023, September 20, 2023,

Jason Zurawski, Jennifer Schopf, Doug Southworth, Using NetSage to Support ACCESS, Internet2 Technology Exchange 2023, September 20, 2023,

Scott Campbell, Nick Buraglio, Mariam Kiran, Hecate Update, Internet 2 TechEX, September 20, 2023,

Overview Slide:

• Project Overview: What is Hecate about?
• Project Update: What have we done?
• Planned research agenda: How are we planning on reaching our objective?


I will focus on where we are in terms of providing a deliverable, or, more realistically, how we can best position ourselves to deeply understand the details of proposed vendor solutions.

Chris Cummings, Intro to The Workflow Orchestrator, TechEX23, September 19, 2023,

Network Orchestration is a defining factor in next generation networks, enabling operators to deliver more consistent and reliable services. Using the collaboratively developed Workflow Orchestrator and other commercial and open source tools, ESnet has been able to successfully Orchestrate and Automate network configuration deployment for large swaths of the ESnet6 network. This approach has enabled rapid deployment of new network services, as well as ensuring that configuration standards are well enforced when deploying network services.

During this talk, we will provide a brief history of automation at ESnet, Introduce The Workflow Orchestrator, dive into what our goals were for orchestration and automation in the ESnet6 project, describe the technology and process that we used to meet those goals, and then provide a live demonstration of ESnet’s orchestration tooling in action. Finally, we will discuss the lessons we learned along the way while developing this tooling, providing time for Q&A.

Marc Koerner, ESnet’s Orchestration Perspective, Internet2 Tech Exchange 2023, September 18, 2023,

ESnet and Internet2 are using the Cisco Network Service Orchestrator (NSO) to automate and orchestrate network configuration by leveraging principles of intent-based networking and vendor-agnostic service abstraction. This talk will give a brief overview of ESnet’s and Internet2’s NSO service architecture, the lessons learned, and the impacts on the overall software development process. ESnet will present a more granular umbrella service redesign, as well as the resulting strategy for the NSO service refactoring within ESnet’s network orchestration stack. Internet2 will present its current architecture and how it is leveraging NSO to support the upcoming Insight Console Virtual Networks.

Chris Cummings, Sean Cummings, DPDK as an Offload Engine for P4 SmartNIC Applications, DPDK Summit 2023, September 12, 2023,

P4 has taken off as a powerful language for high-performance network applications, however, it is a limited language by design. Due to these limits, many P4 applications require a “slow” path for more complex packet manipulation. In this presentation we explore the use of DPDK as a component of P4 applications in conjunction with the ESnet SmartNIC platform. Drawing from our experience building a P4-based SIIT-DC NAT64 translator on FPGAs, this presentation explores how DPDK can be leveraged to offload complex and variable-length packet manipulation functions from the P4 datapath to a general-purpose CPU. While P4 offers the tools to quickly develop Legacy IP to IPv6 translations that perform at 100Gbps line rates, we encountered challenges when dealing with complex packet translations like ICMP responses. Accordingly, we punt those more intricate, but less frequently used, translations to a DPDK side-car application. Join us as we delve into the architecture of our NAT64 translation application and demonstrate the development flow used for integrating these tools together.
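
As a loose illustration of the fast-path/slow-path split described above (not the actual P4 or DPDK code), the sketch below classifies IPv4 packets: simple TCP/UDP headers stay on the fast path, while fragments, packets with IP options, and ICMP are punted to a slow-path handler, analogous to offloading complex translations to a DPDK side-car. Protocol numbers are real; everything else is illustrative.

```python
IPPROTO_ICMP = 1
IPPROTO_TCP = 6
IPPROTO_UDP = 17

def needs_slow_path(ipv4_header: bytes) -> bool:
    """Return True if this IPv4 packet should be punted to the slow path."""
    if len(ipv4_header) < 20:
        return True                 # runt packet: let the slow path decide
    ihl_words = ipv4_header[0] & 0x0F
    if ihl_words > 5:
        return True                 # IP options present: variable-length rewrite
    flags_frag = int.from_bytes(ipv4_header[6:8], "big")
    if flags_frag & 0x1FFF or flags_frag & 0x2000:
        return True                 # fragments need reassembly-aware handling
    protocol = ipv4_header[9]
    if protocol == IPPROTO_ICMP:
        return True                 # ICMP needs embedded-packet translation
    return protocol not in (IPPROTO_TCP, IPPROTO_UDP)

def handle(packet: bytes) -> str:
    """Toy dispatcher: route a packet to the fast path or punt it."""
    return "slow path (punt to side-car)" if needs_slow_path(packet) else "fast path"
```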

Jason Zurawski, Science Requirements to Support NOAA & NIST, Alaska Region Technology Interchange Consortium (ARTIC), September 11, 2023,

Jason Zurawski, Kenneth Miller, EPOC Report on NIST Science Drivers, NIST, August 28, 2023,

Kevin Barba, Marc Koerner, Anomaly Detection For Network Monitoring and Outage Risk Mitigation, CONCISE Project 2023, August 1, 2023,

Dale W. Carder, Tim Chown, Shawn McKee, Marian Babik, Use of the IPv6 Flow Label for WLCG Packet Marking, IETF 117, July 25, 2023,

Jason Zurawski, Jennifer Schopf, Nathaniel Mendoza, Doug Southworth, EPOC Support for Cyberinfrastructure and Data Movement, PEARC 2023, July 17, 2023,

Dan Doyle, Andrew Lake, Katrina Turner, How ESnet built Grafana plugins to visualize network data, GrafanaCON 2023, June 13, 2023,

In this session, Software Engineers Katrina Turner, Andy Lake, and Dan Doyle will delve into why ESnet needed these new visualizations, how they went about building them, and how others in the community can use them for their own purposes (networking and otherwise). Having recently deployed Grafana Enterprise, the team will touch on its future work, including utilizing the new authorization and team features to give ESnet users customized login and dashboard experiences on a single instance.

Chris Cummings, Simone Spinelli, Planning and Development in R&E Networks: Automation and Orchestration, TNC23, June 9, 2023,

Side meeting at TNC to provide updates and discuss potential collaboration opportunities in intercontinental connectivity between R&E networks and automation/orchestration.

Chin Guok, ESnet's In-Network Caching Pilot, The Network Conference 2023 (TNC'23), June 5, 2023,

Chris Cummings, Hans Trompert, Peter Boers, Nehemya McCarter-Ribakoff, From Zero to Orchestrated—A Workflow Orchestrator Workshop, TNC23, June 5, 2023,

Getting started with Network Orchestration is a daunting task that requires a lot of forethought and domain knowledge. Join this interactive full-day technical workshop to benefit from ESnet and SURF network and software engineers who have already gone through this process and are ready to share their knowledge. A remote development environment will be provided; you just need to bring a laptop with an SSH client and an IDE (preferably PyCharm or VSCode). The workshop will begin with introductions to product and workflow modeling with The Orchestrator and then move to interactive development sessions, finally ending with an open discussion around tailoring the orchestrator to your use-cases.

Jason Zurawski, Kenneth Miller, "Science DMZ Architecture", "Data Transfer Hardware", "Science DMZ Security Policy", "perfSONAR / Measurement", and "NetSage Network Visibility", Cyberinfrastructure for Research Data Management Workshop, May 23, 2023,

Jason Zurawski, Jennifer Schopf, Engagement and Performance Operations Center (EPOC) Support to Share and Collaborate, Internet2 Community Exchange 2023, May 10, 2023,

Ezra Kissel, Chin Guok, Alex Sim, Experiences in deploying in-network data caches, 26th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2023), May 8, 2023,

Jason Zurawski, Kenneth Miller, Fasterdata DTN Framework, Globus World 2023, April 26, 2023,

Chin Guok, ESnet In-Network Caching Pilot, LHCOPN-LHCONE Meeting #50, 2023, April 18, 2023,

Jason Zurawski, ESnet & EPOC: Support for Remote Scientific Use Cases, NWAVE Stakeholder and Science Engagement Summit, March 29, 2023,

Jason Zurawski, Kenneth Miller, "Introduction to Science DMZ, Science Data Movement", "BGP Essentials", "Advanced BGP Concepts", and "Network Monitoring and perfSONAR", UCF / FLR Workshop on Networking Topics, February 16, 2023,

Jason Zurawski, Jennifer Schopf, NetSage and the NRP/CENIC, National Research Platform 2023 Conference (4NRP), February 8, 2023,

Jason Zurawski, Jennifer Schopf, EPOC and NetSage for WestNet, WestNet Winter 2023 Member Meeting, January 23, 2023,

2022

Dale W. Carder, Innovating on Control Planes, 10th SIG-NGN Meeting, November 29, 2022,

Chris Cummings, A Real-World Approach to Intent-based Networking and Service Orchestration, CHINOG 10, October 6, 2022,

Chris Cummings, Garrett Stewart, Robert Kwon, Karl Newell, Network Orchestration and Automation at Internet2, CENIC, and ESnet, CENIC 22, September 28, 2022,

Join leaders from Internet2, ESnet, and CENIC as they discuss network orchestration. Network orchestration is a defining factor in next-generation networks, enabling operators to deliver more consistent and reliable services. ESnet has leveraged a combination of internally developed tools, open-source software, and commercial software to orchestrate and automate network configuration deployment. This approach has enabled rapid deployment of new network services and ensured that configuration standards are well enforced when deploying network services. Karl Newell will describe Internet2's work in this area, including their work with Cisco NSO, the deployment of 400G networks, and the platforms they are actively using. Robert Kwon will detail the efforts CENIC has underway. During this talk, panelists will provide a brief history of automation in their organizations, describe their goals for orchestration and automation, describe the technology and process used to meet those goals, and provide demonstrations of their orchestration tooling.

Chris Cummings, A Real-World Approach to Intent-Based Networking and Service Orchestration, NFD Service Provider 2, August 4, 2022,

Intent-based networking is something that has a lot of mystique and buzz-words surrounding it. This talk explores the approach that ESnet took to build our service orchestration software suite as well as giving a few demonstrations of the software in action. This presentation is not an exhaustive explanation of how to build your own intent-based networking environment, but rather an example and overview of a real-world stack that is being used in a production network today and the principles behind it.

Building software that controls network equipment has many similarities to traditional software engineering; however, testing this software introduces many complexities unique to the network orchestration world. Join this talk to learn how we approached these challenges by building a Realistic Orchestration Validation Environment for netwoRks (ROVER) at ESnet.

2020

Inder Monga, FABRIC: integration of bits, bytes, and xPUs, JET meeting, March 17, 2020,

Presenting NSF-funded FABRIC project to the JET community

2018

Nick Buraglio, Automation, Orchestration, prototyping, and strategy, Great Plains Network Webinar Series Presentation, March 9, 2018,

Presentation on network automation and orchestration with focus on getting started and options available.

2016

Brian Tierney, Nathan Hanford, Recent Linux TCP Updates, and how to tune your 100G host, Internet2 Technology Exchange, September 27, 2016,

M. O’Connor, Y. Hines, Amazon Web Services Pilot Report, ESnet Report, September 2016,

This report summarizes an effort called "The ESnet Amazon Web Services (AWS) Pilot", which was implemented to determine whether the AWS “Direct Connect” (“DX”) service provides advantages to ESnet customers above and beyond those of ESnet's standard Amazon connections at public Internet exchange points.

Nick Buraglio, SDN Best Practices, Great Plains Network Webinar Series Presentation, April 8, 2016,

Presentation of best practices for production SDN deployments, based on experience deploying SDN networks using a variety of technologies and techniques.

Nick Buraglio, SDN: Theory vs. Practice, Invited talk, CODASPY 2016 SDN/NFV workshop, March 11, 2016,

Discusses research-based software-defined networking and the differences between it and real-world, production SDN, for the CODASPY SDN/NFV conference workshop.

Inder Monga, Chin Guok, SDN for End-to-End Networking at Exascale, February 16, 2016,

Traditionally, WAN and campus networks and services have evolved independently from each other. For example, MPLS traffic engineering and VPN technologies have been targeted towards the WAN, while LAN (or last mile) implementations have not incorporated that functionality. These restrictions have resulted in dissonance in services offered in the WAN vs. the LAN. While OSCARS/NSI virtual circuits are widely deployed in the WAN, they typically only run from site boundary to site boundary, and require painful phone calls, manual configuration, and resource allocation decisions for last mile extension. Such inconsistencies in campus infrastructures, all the way from the campus edge to the data-transfer hosts, often lead to unpredictable application performance. New architectures such as the Science DMZ have been successful in simplifying the variance, but the Science DMZ is not designed or able to solve the end-to-end orchestration problem. With the advent of SDN, the R&E community has an opportunity to genuinely orchestrate end-to-end services - and not just from a network perspective, but also from an end-host perspective. In addition, with SDN, the opportunity exists to create a broader set of custom intelligent services that are targeted towards specific science application use-cases. This proposal describes an advanced deployment of SDN equipment and the creation of a comprehensive SDN software platform that will help bring together the missing end-to-end story.

Inder Monga, Plenary Keynote - "Design Patterns: Scaling up eResearch", Web Site, February 9, 2016,

2015

M Kiran, Invited Talk: Software Engineering Challenges in Smart Cities, Optics Group, Arizona University, Oct 2015, December 1, 2015,

Inder Monga, Network Operating Systems and Intent APIs for SDN Applications, Technology Exchange Conference, October 6, 2015,

Philosophy of Network Operating Systems and Intent APIs

Inder Monga, ICN roadmaps for the next 2 years, 2nd ACM Conference on Information-Centric Networking (ICN 2015), October 1, 2015,

Panelists: Paul Mankiewich (Cisco), Luca Muscariello (Orange), Inder Monga (ESnet), Ignacio Solis (PARC), GQ Wang (Huawei), Jeff Burke (UCLA)

M Kiran, Platform dependency and cloud use for ABM, CCS Conference, Oct 2015, October 1, 2015,

Inder Monga, Science Data and the NDN paradigm, NDN Community Meeting (NDNcomm 2015): Architecture, Applications, and Collaboration, September 28, 2015,

Michael Smitasen, Brian Tierney, Evaluating Network Buffer Size requirements for Very Large Data Transfers, NANOG 64, San Francisco, June 2, 2015,

Jason Zurawski, Bridging the Technical Gap: Science Engagement at ESnet, Great Plains Network Annual Meeting, May 28, 2015,

Nick Buraglio, Bro intrusion detection system (IDS): an overview, Enhancing CyberInfrastructure by Training and Education, May 22, 2015,

Eli Dart, The Science DMZ, BioTeam Science DMZ 101 Webinar, May 18, 2015,

Jason Zurawski, Network Monitoring with perfSONAR, BioTeam & ESnet Webinar, May 18, 2015,

Nick Buraglio, Anita Nikolich, Dale Carder, Secure Layer 3 SDX Concept (Interdomain SDN), May 14, 2015,

A concept framework for Secure Layer 3 Interdomain SDN and ISD/IXP. 

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, May 8, 2015,

Handling Data Challenges in the Capacity Crunch, Royal Society London, May 2015, May 1, 2015,

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, PNWGP Board Meeting, April 21, 2015,

Jason Zurawski, The perfSONAR Effect: Changing the Outcome of Networks by Measuring Them, 2015 KINBER Annual Conference, April 16, 2015,

Jason Zurawski, Improving Scientific Outcomes at the APS with a Science DMZ, Globus World 2015, April 15, 2015,

Eli Dart, The Science DMZ, CDC OID/ITSO Science DMZ Workshop, April 15, 2015,

Jason Zurawski, Cybersecurity: Protecting Against Things that go “bump” in the Net, Southern Partnership in Advanced Networking, April 9, 2015,

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, Southern Partnership in Advanced Networking, April 8, 2015,

Jason Zurawski, Science DMZ Architecture and Security, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, April 3, 2015,

Joe Metzger, Jason Zurawski, ESnet's LHCONE Service, 2015 OSG All Hands Meeting, March 23, 2015,

Nick Buraglio, IPv6 Status; Operating production IPv6 networks, March 22, 2015,

IPv6 Status update and primer on operating production IPv6 networks as of 3/2015

Jason Zurawski, perfSONAR and Network Monitoring, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, March 13, 2015,

Jason Zurawski, Understanding Big Data Trends and the Key Role of the Regionals in Bridging Needs and Solutions, 2015 Quilt Winter Meeting, February 11, 2015,

Jason Zurawski, Wagging the Dog: Determining Network Requirements to Drive Modern Network Design, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, February 6, 2015,

Jason Zurawski, perfSONAR at 10 Years: Cleaning Networks & Disrupting Operation, perfSONAR Focused Technical Workshop, January 22, 2015,

Science Engagement: A Non-Technical Approach to the Technical Divide, ENCITE (ENhancing CyberInfrastructure by Training and Education) Webinar, January 16, 2015,

The Science DMZ and the CIO: Data Intensive Science and the Enterprise, RMCMOA Workshop, January 13, 2015,

2014

Jason Zurawski, The Science DMZ: A Network Design Pattern for Data-Intensive Science, New Mexico Technology in Education (NMTIE) Cyber Infrastructure Day, November 19, 2014,

EPSRC Grand Engineering Challenges, EPSRC meeting on defining the engineering challenges for future UK research, November 2014, November 1, 2014,

EC consultation on research directions for future European research calls 2015-2016, November 1, 2014,

Mariam Kiran, Concerns for how software is distributed through the Cloud, Consultation on Cloud Computing, EU Digital Agenda for Europe, 2014, November 1, 2014,

Nick Buraglio, Anita Nikolich, Dale Carder, Securing the SDN WAN, October 30, 2014,

SDN has been successfully implemented by large companies and ISPs within their own data centers. However, the focus has remained on intradomain use cases with controllers under the purview of the same authority. Interdomain SDN promises more fine-grained control of data flows between SDN networks but also presents the greater challenges of trust, authentication, and policy control between them. We propose a secure method to peer SDN networks and a test implementation.

Michael Smitasin, Brian Tierney, Switch Buffers Experiments: How much buffer do you need to support 10G flows?, 2014 Technology Exchange, October 29, 2014,

Nick Buraglio, Vincent Stoffer, Adam Slagell, Jim Eyrich, Scott Campbell, Von Welch, Securing the Science DMZ: a discussion, October 28, 2014,

The Science DMZ model is a widely deployed and accepted architecture allowing for movement and sharing of large-scale data sets between facilities, resources, or institutions. In order to help assure integrity of the resources served by the science DMZ, a different approach should be taken regarding necessary resources, visibility as well as perimeter and host security. Experienced panelists discuss common techniques, best practices, typical caveats as well as what to expect (and not expect) from a network perimeter that is purpose built to move science data.

Jason Zurawski, Science Engagement: A Non-Technical Approach to the Technical Divide, Cyber Summit 2014: Crowdsourcing Innovation, September 25, 2014,

Jason Zurawski, Mary Hester, Of Mice and Elephants: Supporting Research with the Science DMZ and Software Defined Networking, Cyber Summit 2014: Crowdsourcing Innovation, September 24, 2014,

Jason Zurawski, FAIL-transfer: Removing the Mystery of Network Performance from Scientific Data Movement, XSEDE Campus Champions Webinar, August 20, 2014,

Securing the Science DMZ, BroCon 2014,

Best practices for securing an open perimeter network or Science DMZ, presented at BroCon 2014.

Nick Buraglio, Securing the Science DMZ, June 14, 2014,

The Science DMZ model is a widely deployed and accepted architecture allowing for movement and sharing of large-scale data sets between facilities, resources, or institutions. In order to help assure integrity of the resources served by the Science DMZ, a different approach should be taken regarding necessary resources, visibility, as well as perimeter and host security. Based on proven and existing production techniques and deployment strategies, we provide an operational map and high-level functional framework for securing a Science DMZ utilizing a “defense in depth” strategy including log aggregation, effective IDS filtering and management techniques, black hole routing, flow data, and traffic baselining.

Nick Buraglio, Real world IPv6 deployments, June 9, 2014,

Presentation for Westnet conference on Real world IPv6 deployments, lessons learned and expectations.

Jason Zurawski, A Brief Overview of the Science DMZ, Open Science Grid Campus Grids Webinar, May 23, 2014,

Jason Zurawski, Brian Tierney, Mary Hester, The Role of End-user Engagement for Scientific Networking, TERENA Networking Conference (TNC), May 20, 2014,

Jason Zurawski, Brian Tierney, “An Overview in Emerging (and not) Networking Technologies”, TERENA Networking Conference (TNC), May 19, 2014,

Jason Zurawski, Fundamentals of Data Movement Hardware, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, Essentials of perfSONAR, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, The perfSONAR Project at 10 Years: Status and Trajectory, GN3 (GÉANT) NA3, Task 2 - Campus Network Monitoring and Security Workshop, April 25, 2014,

Jason Zurawski, Network and Host Design to Facilitate High Performance Data Transfer, Globus World 2014, April 15, 2014,

Jason Zurawski, Brian Tierney, ESnet perfSONAR Update, 2014 Winter ESnet Site Coordinators Committee (ESCC) Meeting, February 25, 2014,

Jason Zurawski, Security and the perfSONAR Toolkit, Second NSF Workshop on perfSONAR based Multi-domain Network Performance Measurement and Monitoring (pSW 2014), February 21, 2014,

Overview of recent security breaches and practices for the perfSONAR Toolkit. 

2013

Jason Zurawski, The Science DMZ - Architecture, Monitoring Performance, and Constructing a DTN, Operating Innovative Networks (OIN), October 3, 2013,

Abhinava Sadasivarao, Sharfuddin Syed, Ping Pan, Chris Liou, Andy Lake, Chin Guok, Inder Monga, Open Transport Switch: A Software Defined Networking Architecture for Transport Networks, Workshop, August 16, 2013,

Presentation at HotSDN Workshop as part of SIGCOMM 2013

Jason Zurawski, Kathy Benninger, Network PerformanceTutorial featuring perfSONAR, XSEDE13: Gateway to Discovery, July 22, 2013,

Jason Zurawski, A Completely Serious Overview of Network Performance for Scientific Networking, Focused Technical Workshop: Network Issues for Life Sciences Research, July 18, 2013,

Jason Zurawski, Site Performance Measurement & Monitoring Best Practices, 2013 Summer ESnet Site Coordinators Committee (ESCC) Meeting, July 16, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, SSERCA / FLR Summit, June 14, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Great Plains Network Annual Meeting, May 29, 2013,

Jason Zurawski, Matt Lessins, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Merit Member Conference 2013, May 15, 2013,

Lauren Rotman, Jason Zurawski, Building User Outreach Strategies: Challenges & Best Practices, Internet2 Annual Meeting, April 22, 2013,

Michael Sinatra, IPv6 Deployment Panel Discussion, Department of Energy Information Managers’ Conference, April 2013,

Jason Zurawski, Network Tools Tutorial, Internet2 Annual Meeting, April 11, 2013,

Bill Johnston, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, The Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, March 2013,

Jason Zurawski, Networking Potpourri, OSG All Hands Meeting, March 11, 2013,

Jason Zurawski, Debugging Network Performance With perfSONAR, eduPERT Performance U! Winter school, March 6, 2013,

Joe Metzger, ESnet5 Network Engineering Group Update, Winter ESCC 2013, January 2013,

Inder Monga, Network Abstractions: The first step towards a programmable WAN, TIP 2013, January 14, 2013,

University campuses, supercomputer centers, and R&E networks are challenged to architect, build, and support IT infrastructure to deal effectively with the data deluge facing most science disciplines. Hybrid network architecture, multi-domain bandwidth reservations, performance monitoring, and GLIF Open Lightpath Exchanges (GOLE) are examples of network architectures that have been proposed, championed, and implemented successfully to meet the needs of science. This talk explores a new "one virtual switch" abstraction leveraging software-defined networking and OpenFlow concepts that provides science users a simple, adaptable network framework to meet their future application requirements. The talk will include the high-level design, including the use of OpenFlow and OSCARS, as well as implementation details from the demonstration planned for SuperComputing.
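As a rough, purely conceptual sketch of the "one virtual switch" idea, the snippet below maps the ports of a single logical switch onto underlying provisioned circuits so that an application deals only with logical port pairs. The class and field names are assumptions for illustration; they are not the OSCARS or OpenFlow interfaces.

    # Conceptual sketch: applications see one logical switch whose
    # cross-connects are backed by provisioned WAN circuits.
    # (Hypothetical structures; not the OSCARS or OpenFlow APIs.)
    from dataclasses import dataclass

    @dataclass
    class Circuit:
        provider: str        # domain that provisioned the path
        endpoints: tuple     # (ingress URN, egress URN)
        bandwidth_mbps: int

    class VirtualSwitch:
        def __init__(self):
            self.ports = {}          # logical port -> site name
            self.crossconnects = {}  # {port_a, port_b} -> Circuit

        def add_port(self, port, site):
            self.ports[port] = site

        def connect(self, port_a, port_b, circuit):
            """Install a logical cross-connect backed by a WAN circuit."""
            self.crossconnects[frozenset((port_a, port_b))] = circuit

    vs = VirtualSwitch()
    vs.add_port(1, "Campus A")
    vs.add_port(2, "HPC Center B")
    vs.connect(1, 2, Circuit("esnet", ("urn:ogf:A", "urn:ogf:B"), 10000))
    print(vs.crossconnects)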

Eli Dart, Brian Tierney, Raj Kettimuthu, Jason Zurawski, Achieving the Science DMZ, January 13, 2013,

Tutorial at TIP2013, Honolulu, HI

  • Part 1: Architecture and Security
  • Part 2: Data Transfer Nodes and Data Transfer Tools
  • Part 3: perfSONAR

 

 

Michael Sinatra, DNSSEC: Signing, Validating, and Troubleshooting, TIP 2013: Joint Techs, January 2013,

Joe Metzger, Lessons Learned Deploying a 100G Nationwide Network, TIP 2013, January 2013,

2012

M. Boddie, T. Entel, C. Guok, A. Lake, J. Plante, E. Pouyoul, B. H. Ramaprasad, B. Tierney, J. Triay, V. M. Vokkarane, On Extending ESnet's OSCARS with a Multi-Domain Anycast Service, IEEE ONDM 2012, December 2012,

Inder Monga, Introduction to Bandwidth on Demand to LHCONE, LHCONE Point-to-point Service Workshop, December 13, 2012,

Introducing Bandwidth on Demand concepts to the application community of CMS and ATLAS experiments.

Michael Sinatra, IPv6 Measurement Related Activities, CANS 2012, December 2012,

Michael Sinatra, Risks of Not Deploying IPv6 Now, CANS 2012, December 2012,

Michael Sinatra, Don’t Ignore the Substrate: What Networkers Need to Worry About in the Era of Big Clouds and Big Data, Merit Networkers Workshop, December 2012,

Inder Monga, Software Defined Networking for big-data science, Worldwide LHC Grid meeting, December 2012,

Inder Monga, Eric Pouyoul, Chin Guok, Software Defined Networking for big-data science, SuperComputing 2012, November 15, 2012,

 

The emerging era of “Big Science” demands the highest possible network performance. End-to-end circuit automation and workflow-driven customization are two essential capabilities needed for networks to scale to meet this challenge. This demonstration showcases how combining software-defined networking techniques with virtual circuits capabilities can transform the network into a dynamic, customer-configurable virtual switch. In doing so, users are able to rapidly customize network capabilities to meet their unique workflows with little to no configuration effort. The demo also highlights how the network can be automated to support multiple collaborations in parallel.

 

Greg Bell, Network as Instrument, CANARIE Users’ Forum, November 2012,

Greg Bell, Measuring Success in R&E Networking, The Quilt, November 2012,

Greg Bell, Lead Panel on DOE Computing Resources, National Laboratory Day in Mississippi, November 12, 2012,

Inder Monga, Programmable Information Highway, November 11, 2012,

 

Suggested Panel Questions:

- What do you envision will have a dramatic impact on the future of networking and data management? What research challenges do you expect in achieving your vision?

- Do we need to re-engineer existing tools and middleware software? Elaborate on network management middleware in terms of virtual circuits, performance monitoring, and diagnosis tools.

- How do current applications match increasing data sizes and enhancements in network infrastructure? Please list a few network-aware applications. What is the scope of networking in the application domain?

- Resource management and scheduling problems are gaining importance due to current developments in utility computing and high interest in Cloud infrastructure. Explain your vision. What sort of algorithms/mechanisms will practically be used in the future?

- What are the main issues in designing/modelling cutting edge dynamic networks for large-scale data processing? What sort of performance problems do you expect?

- What necessary steps do we need to take to benefit from next-generation high-bandwidth networks? Do you think there will be radical changes such as novel APIs or new network stacks?

 

I. Monga, E. Pouyoul, C. Guok, Software-Defined Networking for Big-Data Science – Architectural Models from Campus to the WAN, SC12: IEEE HPC, November 2012,

Inder Monga, Software-defined networking (SDN) and OpenFlow: Hot topics in networking, Masters Class at CMU, NASA Ames, October 2012,

Brian Tierney, ESnet’s Research Testbeds, GLIF Meeting, October 2012,

Paola Grosso, Inder Monga, Cees DeLaat, GreenSONAR, GLIF, October 12, 2012,

Michael Sinatra, DNS Security: Panel Discussion, NANOG 56, October 2012,

Eli Dart, Network expectations, or what to tell your system administrator, ALS user group meeting tomography workshop, October 2012,

Inder Monga, Bill St. Arnaud, Erik-Jan Bos, Defining GLIF Architecture Task Force, GLIF, October 11, 2012,

12th Annual LambdaGrid Workshop in Chicago

C. Guok, E. Chaniotakis, A. Lake, OSCARS Production Deployment Experiences, GLIF NSI Operationalization Meeting, October 2012,

Inder Monga, Network Service Interface: Concepts and Architecture, I2 Fall Member Meeting, September 2012,

Mike Bennett, Energy Efficiency in IEEE Ethernet Networks – Current Status and Prospects for the Future, Joint ITU/IEEE Workshop on Ethernet--Emerging Applications and Technologies, September 2012,

Mike Bennett, An Overview of Energy-Efficient Ethernet, NGBASE-T Study Group, IEEE 802.3 Interim meeting, September 2012,

Mike Bennett, EEE for P802.3bm, Objective Proposal, IEEE 40G and 100G Next Generation Optics Task Force, IEEE 802.3 Interim meeting, September 2012,

Greg Bell, Network as Instrument, NORDUnet 2012, September 2012,

Brian Tierney, High Performance Bulk Data Transfer with Globus Online, Webinar, September 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ECT2012: The Eighth International Conference on Engineering Computational Technology, September 2012,

Inder Monga, Architecting and Operating Energy-Efficient Networks, September 10, 2012,

The presentation outlines the network energy efficiency challenges, the growth of network traffic and the simulation use-case to build next-generation energy-efficient network designs.

Greg Bell, ESnet Dark Fiber Footprint and Testbed, CESNET Customer Empowered Fiber Networks Workshop, September 2012,

Brian Tierney, ESnet perfSONAR-PS Plans and Perspective, OSG Meeting, August 2012,

Eli Dart, Networks for Data Intensive Science Environments, BES Neutron and Photon Detector Workshop, August 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ARNES: The Academic and Research Network of Slovenia, August 2012,

Greg Bell, ESnet Manifesto, Joint Techs Conference, July 2012,

Inder Monga, Eric Pouyoul, Chin Guok, Eli Dart, SDN for Science Networks, Summer Joint Techs 2012, July 17, 2012,

Jon Dugan, The MyESnet Portal: Making the Network Visible, Summer 2012 ESCC/Internet2 Joint Techs, July 2012,

Inder Monga, Marching Towards …a Net-Zero Network, WIN2012 Conference, July 2012,

Inder Monga, A Data-Intensive Network Substrate for eResearch, eScience Workshop, July 2012,

Greg Bell, ESnet Update, ESnet Coordinating Committee Meeting, July 2012,

Joe Metzger, ANI & ESnet5, Summer ESCC 2012, July 2012,

Inder Monga, Energy Efficiency starts with measurement, Greentouch Meeting, June 2012,

Inder Monga, ESnet Update: Networks and Research, JGNx and NTT, June 2012,

Chris Tracy, 100G Deployment--Challenges & Lessons Learned from the ANI Prototype & SC11, NANOG 55, June 2012,

Brian Tierney, ESnet’s Research Testbed, LSN Meeting, May 2012,

Eli Dart, High Performance Networks to Enable and Enhance Scientific Productivity, WRNP 13, May 2012,

Jon Dugan, The MyESnet Portal: Making the Network Transparent, TERENA Networking Conference 2012, May 2012,

Greg Bell, ESnet Overview, LBNL Advisory Board, May 2012,

Bill Johnston, Evolution of R&E Networks to Enable LHC Science, Istituto Nazionale di Fisica Nucleare (INFN) and Italian Research & Education Network network (GARR) joint meeting, May 2012,

Bill Johnston and Eli Dart, The Square Kilometer Array: A next generation scientific instrument and its implications for networks (and possible lessons from the LHC experience), TERENA Networking Conference 2012, May 2012,

Bill Johnston, Some ESnet Observations on Using and Managing OSCARS Point-to-Point Circuit, LHCONE / LHCOPN meeting, May 2012,

Brian Tierney, ESnet, the Science DMZ, and the role of Globus Online, Globus World, April 2012,

Michael Sinatra, IPv6 Panel: Successes and Setbacks, ARIN XXIX, April 2012,

Eli Dart, Cyberinfrastructure for Data Intensive Science, Joint Techs: Internet2 Spring Member Meeting, April 2012,

Greg Bell, ESnet Update, National Laboratory CIO Meeting, March 2012,

Greg Bell, ESnet Update, CENIC Annual Conference, March 2012,

Andy Lake, Network Performance Monitoring and Measuring Using perfSONAR, CENIC 2012, March 2012,

C. Guok, I. Monga, IDCP and NSI: Lessons Learned, Deployments and Gap Analysis, OGF 34, March 2012,

T. Lehman, C. Guok, Advanced Resource Computation for Hybrid Service and TOpology Networks (ARCHSTONE), DOE ASCR PI Meeting, March 2012,

Eli Dart, Network Impacts of Data Intensive Science, Ethernet Technology Summit, February 2012,

Inder Monga, Enabling Science at 100G, ON*Vector Conference, February 2012,

Bill Johnston and Eli Dart, Design Patterns for Data-Intensive Science--LHC lessons and SKA, Pawsey Supercomputer Center User Meeting, February 2012,

Inder Monga, John MacAuley, GLIF NSI Implementation Task Force Presentation, Winter GLIF Tech Meeting at Baton Rouge, LA, January 26, 2012,

Michael Sinatra, Site IPv6 Deployment Status & Issues, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Michael Sinatra, ESnet as an ISP, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Greg Bell, Science at the Center: ESnet Update, Joint Techs, January 25, 2012,

In this talk, Acting Director Greg Bell will provide an update on ESnet's recent activities through the lens of its mission to accelerate discovery for researchers in the DOE Office of Science. Topics covered: what makes ESnet distinct? Why does its Science DMZ strategy matter? What are potential 'design patterns' for data-intensive science? Does 100G matter?

Eric Pouyoul, Brian Tierney, Achieving 98Gbps of Cross-country TCP traffic using 2.5 hosts, 10 10G NICs, and 10 TCP streams, Winter 2012 Joint Techs, January 25, 2012,

Patty Giuntoli, Sheila Cisko, ESnet Collaborative Services (ECS) / RCWG updates, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Joe Metzger, ESnet 5 Deployment Plans, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Eli Dart, Brent Draney, National Laboratory Success Stories, Joint Techs, January 24, 2012,

Reports from ESnet and National Laboratories that have successfully deployed methods to enhance their infrastructure support for data intensive science.

This presentation will discuss the challenges and lessons learned in the deployment of the 100GigE ANI Prototype network and support of 100G circuit services during SC11 in Seattle. Interoperability, testing, measurement, debugging, and operational issues at both the optical and layer-2/3 will be addressed. Specific topics will include: (1) 100G pluggable optics – options, support, and standardization issues, (2) Factors negatively affecting 100G line-side transmission, (3) Saturation testing and measurement with hosts connected at 10G, (4) Debugging and fault isolation with creative use of loops/circuit services, (5) Examples of interoperability problems in a multi-vendor environment, and (6) Case study: Transport of 2x100G waves to SC11.

Chin Guok, Evolution of OSCARS, Joint Techs, January 23, 2012,

On-demand Secure Circuits and Advance Reservation System (OSCARS) has evolved tremendously since its conception in 2004 as a DOE-funded project at ESnet. Since then, it has grown from a research project to a collaborative open-source software project with production deployments in several R&E networks, including ESnet and Internet2. In the latest release of OSCARS, version 0.6, the software was redesigned to flexibly accommodate both research and production needs. It is currently being used by several research projects to study path computation algorithms and to demonstrate multi-layer circuit management. Just recently, OSCARS 0.6 was leveraged to support production-level bandwidth management in the ESnet ANI 100G prototype network, SCinet at SC11 in Seattle, and the Internet2 DYNES project. This presentation will highlight the evolution of OSCARS, activities surrounding OSCARS v0.6 and lessons learned, and share with the community the roadmap for future development that will be discussed within the open-source collaboration.
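For readers unfamiliar with guaranteed-bandwidth circuit reservation, the sketch below shows the kind of parameters such a request carries: endpoints, bandwidth, and a schedule. The field names and the JSON shape are illustrative assumptions, not the actual OSCARS 0.6 interface.

    # Illustrative shape of a guaranteed-bandwidth circuit reservation
    # request (hypothetical field names; not the actual OSCARS 0.6 API).
    import json
    import time

    reservation = {
        "description": "LHC data transfer, Tier-1 to Tier-2",
        "src_endpoint": "urn:ogf:network:es.net:site-a",
        "dst_endpoint": "urn:ogf:network:es.net:site-b",
        "bandwidth_mbps": 5000,
        "start_time": int(time.time()) + 3600,         # one hour from now
        "end_time": int(time.time()) + 3600 + 7200,    # two-hour window
    }

    # In practice this would be submitted to the reservation system, which
    # computes a path and signals the circuit when the start time arrives.
    print(json.dumps(reservation, indent=2))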

Joe Breen, Eli Dart, Eric Pouyoul, Brian Tierney, Achieving a Science "DMZ", Winter 2012 Joint Techs, Full day tutorial, January 22, 2012,

There are several aspects to building successful infrastructure to support data intensive science. The Science DMZ Model incorporates three key components into a cohesive whole: a high-performance network architecture designed for ease of use; well-configured systems for data transfer; and measurement hosts to provide visibility and rapid fault isolation. This tutorial will cover aspects of network architecture and network device configuration, the design and configuration of a Data Transfer Node, and the deployment of perfSONAR in the Science DMZ. Aspects of current deployments will also be discussed.
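One routine task in such a deployment is verifying that a Data Transfer Node can actually fill the path. The sketch below wraps a parallel-stream iperf3 test from Python; the hostname is a placeholder, and the JSON field layout follows typical iperf3 output rather than anything prescribed by the tutorial.

    # Minimal sketch: measure achievable TCP throughput from a DTN to a
    # remote iperf3 endpoint (placeholder hostname; field names follow
    # typical iperf3 JSON output).
    import json
    import subprocess

    def dtn_throughput(host, streams=4, seconds=30):
        out = subprocess.run(
            ["iperf3", "-c", host, "-P", str(streams), "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        )
        result = json.loads(out.stdout)
        bps = result["end"]["sum_received"]["bits_per_second"]
        return bps / 1e9  # Gbps

    if __name__ == "__main__":
        print(f"{dtn_throughput('dtn-test.example.org'):.2f} Gbps")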

2011

Steve Cotter, ANI details leading to ESnet5, ESCC, Summer 2011, July 13, 2011,

C. Guok, OSCARS, GENI Project Office Meeting, May 2011,

William E. Johnston, Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service, TERENA Networking Conference, 16 - 19 May, 2011, Prague, Czech Republic, May 16, 2011,

Steve Cotter, Early Lessons Learned Deploying a 100Gbps Network, Enterprise Innovation Symposium in Atlanta, May 4, 2011,

W.E. Johnston, C. Guok, J. Metzger, B. Tierney, Network Services for High Performance Distributed Computing and Data Management, The Second International Conference on Parallel, Distributed, Grid, and Cloud Computing for Engineering, Ajaccio - Corsica - France, April 12, 2011,

Steve Cotter, ESnet Update, Winter 2011 Joint Techs Clemson, SC, February 2, 2011,

Brian Tierney, ANI Testbed Project Update, Winter 2011 Joint Techs, Clemson, SC, February 2, 2011,

Eli Dart, The Science DMZ, Winter 2011 Joint Techs, February 1, 2011,

Joe Metzger, DICE Diagnostic Service, Joint Techs - Clemson, South Carolina, January 27, 2011,

2010

Chaitanya S. K. Vadrevu, Massimo Tornatore, Chin P. Guok, Inder Monga, A Heuristic for Combined Protection of IP Services and Wavelength Services in Optical WDM Networks, IEEE ANTS 2010, December 2010,

Chris Tracy, Introduction to OpenFlow: Bringing Experimental Protocols to a Network Near You, NANOG50 Conference, Atlanta, Oct. 4, 2010, October 4, 2010,

Chris Tracy, Eli Dart, Science DMZs: Understanding their role in high-performance data transfers, CANS 2010, September 20, 2010,

Kevin Oberman, IPv6 Implementation at a Network Service Provider, 2010 Inter Agency IPv6 Information Exchange, August 4, 2010,

Joe Metzger, PerfSONAR Update, ESCC Meeting, July 15, 2010,



Evangelos Chaniotakis, Virtual Circuits Landscape, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Jon Dugan, Using Graphite to Visualize Network Data, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Kevin Oberman, Future DNSSEC Directions, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Eli Dart, High Performance Data Transfer, Joint Techs, Summer 2010, July 15, 2010,

C. Guok, OSCARS Roadmap, OGF 28; DICE Control Plane WG, May 2010,

Steve Cotter, ESnet Update, ESCC Meeting, Salt Lake City, Utah, February 3, 2010,

Kevin Oberman, IPv6 SNMP Network Management, Joint Techs, Salt Lake City, Utah, February 3, 2010, http://events.internet2.edu/2010/jt-slc/

Jon Dugan, Network Monitoring and Visualization at ESnet, Joint Techs, Salt Lake City, Utah, February 3, 2010,

Steve Cotter, ESnet Update, Joint Techs, Salt Lake City, Utah, February 2, 2010,

Kevin Oberman, DNSSEC Implementation at ESnet, Joint Techs, Salt Lake City, Utah, February 2, 2010,

C. Guok, I. Monga, Composable Network Service Framework, ESCC, February 2010,

2009

William E Johnston, Progress in Integrating Networks with Service Oriented Architectures / Grids. The Evolution of ESnet's Guaranteed Bandwidth Service, Cracow ’09 Grid Workshop, October 12, 2009,

William Johnston, Energy Sciences Network Enabling Virtual Science, TERENA Conference, Malaga, Spain, July 9, 2009,

William E Johnston, The Evolution of Research and Education Networks and their Essential Role in Modern Science, TERENA Conference, Malaga, Spain, June 9, 2009,

C. Guok, ESnet OSCARS, DOE Joint Engineering Taskforce, February 2009,

2008

Chin Guok, David Robertson, Evangelos Chaniotakis, Mary Thompson, William Johnston, Brian Tierney, A User Driven Dynamic Circuit Network Implementation, IEEE DANMS 2008, November 2008,

William Johnston, ESnet Planning, Status, and Future Issues, ASCAC Meeting, August 1, 2008,

William E Johnston, ESnet4: Networking for the Future of DOE Science, Office of Science, Science Programs Requirements Workshops: Nuclear Physics, May 1, 2008,

Kevin Oberman, The Gathering Storm: The Coming Internet Crisis, Joint Techs, Honolulu, Hawaii, January 21, 2008,

Joseph Burrescia, ESnet Update, Joint Techs, Honolulu, Hawaii, January 21, 2008,

2007

C. Guok, Impact of ESnet OSCARS and Collaborative Projects, SC07, November 2007,

Joe Metzger, ESnet4: Networking for the Future of DOE Science, ICFA International Workshop on Digital Divide, October 25, 2007,

William E Johnston, ESnet4 - Networking for the Future of DOE Science: High Energy Physics / LHC Networking, ON Vector (ON*Vector) Workshop, February 26, 2007,

2006

ESnet On-demand Secure Circuits and Advance Reservation System (OSCARS), Google invited talk; Advanced Networking for Distributed Petascale Science Workshop; IEEE GridNets; QUILT Fall Fiber Workshop, 2008, 2006,

Report

2024

Jason Zurawski, Ben Brown, Gulshan Rai, Eli Dart, Cian Dawson, Carol Hawk, Paul Mantica, Spyridon Margetis, Ken Miller, Nathan Miller, Andrew Wiedlea, “Nuclear Physics Network Requirements Review Final Report”, Report, July 26, 2024, LBNL LBNL-2001602

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

ESnet interconnects DOE national laboratories, user facilities, and major experiments so that scientists can use remote instruments and computing resources as well as share data with collaborators, transfer large datasets, and access distributed data repositories. ESnet is specifically built to provide a range of network services tailored to meet the unique requirements of the DOE’s data-intensive science.

Between July 2023 and October 2023, ESnet and the Nuclear Physics program (NP) of the DOE SC organized an ESnet requirements review of NP-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the NP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Dale Carder, Eric Colby, Eli Dart, Carol Hawk, Ken Miller, Abid Patwa, Kate Robinson, Andrew Wiedlea, “High Energy Physics Network Requirements Review: Two-Year Update”, Report, July 26, 2024, LBNL LBNL-2001605

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

ESnet interconnects DOE national laboratories, user facilities, and major experiments so that scientists can use remote instruments and computing resources as well as share data with collaborators, transfer large datasets, and access distributed data repositories. ESnet is specifically built to provide a range of network services tailored to meet the unique requirements of the DOE’s data-intensive science.

In July 2023, the Energy Sciences Network (ESnet) and the High Energy Physics program (HEP) of the DOE SC organized an interim ESnet requirements review of HEP-supported activities, to follow up on the work started during the 2020 HEP Network Requirements Review. Preparation for these events included checking back with the key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare updates to their previously submitted case study documents, so that ESnet could update the understanding of any changes to the current, near-term, and long-term status, expectations, and processes that will support the science activities of the program.

J. Zurawski, J. Schopf, “EPOC accelerates science collaborations for The Quilt”, Quilt Circle, May 1, 2024,

B. Meade, W. Huntoon, M. Meehl, K. Robinson, J. Zurawski, “Elevating Women in IT Networking: Indiana University assumes leadership of WINS program”, Quilt Circle, May 1, 2024,

2023

Inder Monga, Chin Guok, Arjun Shankar, “Federated IRI Science Testbed (FIRST): A Concept Note”, DOE Office of Science, December 7, 2023, LBNL LBNL-2001553

The Department of Energy’s (DOE’s) vision for an Integrated Research Infrastructure (IRI) is to empower researchers to smoothly and securely meld the DOE’s world-class user facilities and research infrastructure in novel ways in order to radically accelerate discovery and innovation. Performant IRI arises through the continuous interoperability of research workflows with compute, storage, and networking infrastructure, fulfilling researchers’ quests to gain insight from observational and experimental data. Decades of successful research, pilot projects, and demonstrations point to the extraordinary promise of IRI but also indicate the intertwined technological, policy, and sociological hurdles it presents. Creating, developing, and stewarding the conditions for seamless interoperability of DOE research infrastructure, with clear value propositions to stakeholders to opt into an IRI ecosystem, will be the next big step.

The IRI testbed will tie together experimental and observational instruments, ASCR compute facilities for large-scale analysis, and edge computing for data reduction and filtering using the Energy Sciences Network (ESnet), the high-performance network and DOE user facility. The testbed will provide pre-production capabilities that are beyond a demonstration of technology.

Governance, funding, and resource allocation are beyond the scope of this document: it seeks to provide a high-level view of potential benefits, focus areas, and the working groups whose formation would further define the testbed’s design, activities, and goals.

 

Eli Dart, Jason Zurawski, Carol Hawk, Benjamin Brown, Inder Monga, “ESnet Requirements Review Program Through the IRI Lens”, LBNL, October 16, 2023, LBNL 2001552

The Department of Energy (DOE) ensures America’s security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions. The DOE’s Office of Science (SC) delivers groundbreaking scientific discoveries and major scientific tools that transform our understanding of nature and advance the energy, economic, and national security of the United States. The SC’s programs advance DOE mission science across a wide range of disciplines and have developed the research infrastructure needed to remain at the forefront of scientific discovery.

The DOE SC’s world-class research infrastructure — exemplified by the 28 SC scientific user facilities — provides the research community with premier observational, experimental, computational, and network capabilities. Each user facility is designed to provide unique capabilities to advance core DOE mission science for its sponsor SC program and to stimulate a rich discovery and innovation ecosystem.

Research communities gather and flourish around each user facility, bringing together diverse perspectives. A hallmark of many facilities is the large population of students, postdoctoral researchers, and early-career scientists who contribute as full-fledged users. These facility staff and users collaborate over years to devise new approaches to utilizing the user facility’s core capabilities. The history of the SC user facilities has many examples of wildly inventive researchers challenging operational orthodoxy to pioneer new vistas of discovery; for example, the use of the synchrotron X-ray light sources for study of proteins and other large biological molecules. This continual reinvention of the practice of science — as users and staff forge novel approaches expressed in research workflows — unlocks new discoveries and propels scientific progress.

Within this research ecosystem, the high performance computing (HPC) and networking user facilities stewarded by SC’s Advanced Scientific Computing Research (ASCR) program play a dynamic cross-cutting role, enabling complex workflows demanding high performance data, networking, and computing solutions. The DOE SC’s three HPC user facilities and the Energy Sciences Network (ESnet) high-performance research network serve all of the SC’s programs as well as the global research community. Argonne Leadership Computing Facility (ALCF), the National Energy Research Scientific Computing Center (NERSC), and Oak Ridge Leadership Computing Facility (OLCF) conceive, build, and provide access to a range of supercomputing, advanced computing, and large-scale data-infrastructure platforms, while ESnet interconnects DOE SC research infrastructure and enables seamless exchange of scientific data. All four facilities operate testbeds to expand the frontiers of computing and networking research. Together, the ASCR facilities enterprise seeks to understand and meet the needs and requirements across SC and DOE domain science programs and priority efforts, highlighted by the formal requirements reviews (RRs) methodology.

In recent years, the research communities around the SC user facilities have begun experimenting with and demanding solutions integrated with HPC and data infrastructure. This rise of integrated-science approaches is documented in many community and high-level government reports. At the dawn of the era of exascale science and the acceleration of artificial intelligence (AI) innovation, there is a broad need for integrated computational, data, and networking solutions.

In response to these drivers, DOE has developed a vision for an Integrated Research Infrastructure (IRI): To empower researchers to meld DOE’s world-class research tools, infrastructure, and user facilities seamlessly and securely in novel ways to radically accelerate discovery and innovation.

The IRI vision is fundamentally about establishing new data-management and computational paradigms within which DOE SC user facilities and their research communities work together to improve existing capabilities and create new possibilities by building bridges across traditional silos. Implementation of IRI solutions will give researchers simple and powerful tools with which to implement multi-facility research data workflows.

In 2022, SC leadership directed the Advanced Scientific Computing Research (ASCR) program to conduct the Integrated Research Infrastructure Architecture Blueprint Activity (IRI ABA) to produce a reference framework to inform a coordinated, SC-wide strategy for IRI. This activity convened the SC science programs and more than 150 DOE national laboratory experts from all 28 SC user facilities across 13 national laboratories to consider the technological, policy, and sociological challenges to implementing IRI.

Through a series of cross-cutting sprint exercises facilitated by the IRI ABA leadership group and peer facilitators, participants produced an IRI Framework based on the IRI Vision and comprising:

  • IRI Science Patterns spanning DOE science domains;
  • IRI Practice Areas needed for implementation;
  • IRI blueprints that connect Patterns and Practice Areas;
  • Overarching principles for realizing the DOE-wide IRI ecosystem.

The resulting IRI framework and blueprints provide the conceptual foundations to move forward with organized, coordinated DOE implementation efforts. The next step is to identify urgencies and ripe areas for focused efforts that uplift multiple communities.

Upon completion of the IRI ABA framework, ESnet applied the IRI Science Patterns lens and undertook a meta-analysis of ESnet’s Requirements Reviews (RRs), the core strategic planning documents that animate the multiyear partnerships between ESnet and five of the DOE SC programs. Between 2019 and 2023, ESnet completed a new round of RRs with the following SC programs: Nuclear Physics (2019-20), High Energy Physics (2020-21), Fusion Energy Sciences (2021-22), Basic Energy Sciences (2021-22), and Biological and Environmental Research (2022-23). Together these ESnet RRs provide a rich trove of insights into opportunities for immediate IRI progress and investment.

Our meta-analysis of 74 high-priority case studies reveals that:

  • There are a significant number of research workflows spanning materials science, fusion energy, nuclear physics, and biological science that have a similar structure. Creation of common software components to improve these workflows’ performance and scalability will benefit researchers in all of these areas.
  • There is broad opportunity to accelerate scientific productivity and scientific output across DOE facilities by integrating them with each other and with high performance computing and networking.
  • The ESnet RRs’ blending of retrospective and prospective insight affirms that the IRI patterns are persistent across time and likely to persist into the future, offering value as a basis for analysis and strategic planning going forward.

 

Jason Zurawski, Eli Dart, Zach Harlan, Carol Hawk, John Hess, Justin Hnilo, John Macauley, Ramana Madupu, Ken Miller, Christopher Tracy, Andrew Wiedlea, “Biological and Environmental Research Network Requirements Review Final Report”, Report, September 11, 2023, LBNL LBNL-2001542

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between August 2022 and April 2023, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized an ESnet requirements review of BER-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BER ESS program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

J. Zurawski, J. Schopf, “EPOC provides deep insights into Research CI use for institutions”, Quilt Circle, May 1, 2023,

J. Zurawski, D. Carder, E. Dart, K. Robinson, “Evaluating and improving network performance to support high energy physics with ESnet”, Quilt Circle, May 1, 2023,

Jason Zurawski, Jennifer Schopf, “National Institute of Standards and Technology Requirements Analysis Report”, Lawrence Berkeley National Laboratory, April 21, 2023, LBNL LBNL-2001525

Jason Zurawski, Jennifer Schopf, Doug Southworth, Austin Gamble, Byron Hicks, Amy Schultz, “St. Mary’s University Requirements Analysis Report”, Lawrence Berkeley National Laboratory, January 16, 2023, LBNL LBNL-2001503

2022

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Andrew Wiedlea, “High Energy Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, December 22, 2022, LBNL LBNL-2001492

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

In April 2022, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for the review included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about the group’s relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

Jason Zurawski, Dale Carder, Matthias Graf, Carol Hawk, Aaron Holder, Dylan Jacob, Eliane Lessner, Ken Miller, Cody Rotermund, Thomas Russell, Athena Sefat, Andrew Wiedlea, “2022 Basic Energy Sciences Network Requirements Review Final Report”, Report, December 2, 2022, LBNL LBNL-2001490

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Between March and September 2022, ESnet and the Office of Basic Energy Sciences (BES) of the DOE SC organized an ESnet requirements review of BES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about its relationship to the BES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Eli Dart, Ken Miller, Lauren Rotman, Andrew Wiedlea, “ARIES Network Requirements Review”, Report, November 28, 2022, LBNL LBNL-2001476

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

On May 1, 2021, ESnet and the DOE Office of Energy Efficiency and Renewable Energy (EERE) organized an ESnet requirements review of the ARIES (Advanced Research on Integrated Energy Systems) platform. Preparation for this event included identification of key stakeholders in the process: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents in order to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

Jason Zurawski, Ben Brown, Eli Dart, Carol Hawk, Saswata Hier-Majumder, Josh King, John Mandrekas, Ken Miller, William Miller, Lauren Rotman, Andrew Wiedlea, “2021 Fusion Energy Sciences Network Requirements Review”, May 23, 2022, LBNL 2001462

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the US and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL).

 

ESnet is widely regarded as a global leader in the research and education networking community. Throughout 2021, ESnet and the Office of Fusion Energy Sciences (FES) of the DOE SC organized an ESnet requirements review of FES-supported activities. Preparation for these events included identification of key stakeholders: program and facility management, research groups, and technology providers. Each stakeholder group was asked to prepare formal case study documents about their relationship to the FES program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward.

2021

Jason Zurawski, Hans Addleman, Ken Miller, “University of Central Florida Campus-Wide Deep Dive”, LBNL Report, August 20, 2021, LBNL LBNL-2001419

Jason Zurawski, Hans Addleman, Ken Miller, Doug Southworth, “NOAA National Centers for Environmental Information Fisheries Acoustics Archive Network Deep Dive”, LBNL Report, August 12, 2021, LBNL LBNL-2001417

Jason Zurawski, Ben Brown, Dale Carder, Eric Colby, Eli Dart, Ken Miller, Abid Patwa, Kate Robinson, Lauren Rotman, Andrew Wiedlea, “2020 High Energy Physics Network Requirements Review Final Report”, ESnet Network Requirements Review, June 29, 2021, LBNL LBNL-2001398

The Energy Sciences Network (ESnet) is the high-performance network user facility for the US Department of Energy (DOE) Office of Science (SC) and delivers highly reliable data transport capabilities optimized for the requirements of data-intensive science. In essence, ESnet is the circulatory system that enables the DOE science mission by connecting all of its laboratories and facilities in the United States and abroad. ESnet is funded and stewarded by the Advanced Scientific Computing Research (ASCR) program and managed and operated by the Scientific Networking Division at Lawrence Berkeley National Laboratory (LBNL). ESnet is widely regarded as a global leader in the research and education networking community.

Throughout 2020, ESnet and the Office of High Energy Physics (HEP) of the DOE SC organized an ESnet requirements review of HEP-supported activities. Preparation for this event included identification of key stakeholders: program and facility management, research groups, technology providers, and a number of external observers. These individuals were asked to prepare formal case study documents about their relationship to the HEP program to build a complete understanding of the current, near-term, and long-term status, expectations, and processes that will support the science going forward. A series of pre-planning meetings better prepared case study authors for this task, along with guidance on how the review would proceed in a virtual fashion.

ESnet and ASCR use requirements reviews to discuss and analyze current and planned science use cases and anticipated data output of a particular program, user facility, or project to inform ESnet’s strategic planning, including network operations, capacity upgrades, and other service investments. A requirements review comprehensively surveys major science stakeholders’ plans and processes in order to investigate data management requirements over the next 5–10 years.

2020

Ezra Kissel, Chin Fang, “Zettar zx Evaluation for ESnet DTNs”, ESnet 100G Testbed, November 2020,

Jason Zurawski, Benjamin Brown, Eli Dart, Ken Miller, Gulshan Rai, Lauren Rotman, Paul Wefel, Andrew Wiedlea, Editors, “Nuclear Physics Network Requirements Review: One-Year Update”, ESnet Network Requirements Review, September 2, 2020, LBNL LBNL-2001381

Jason Zurawski, Jennifer Schopf, Hans Addleman, “University of Wisconsin-Madison Campus-Wide Deep Dive”, LBNL Report, May 26, 2020, LBNL LBNL-2001325

2019

Jason Zurawski, Jennifer Schopf, Hans Addleman, Scott Chevalier, George Robb, “Great Plains Network - Kansas State University Agronomy Application Deep Dive”, LBNL Report, November 11, 2019, LBNL LBNL-2001321

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “University of Cincinnati Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001320

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Trinity University Campus-Wide Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001319

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, Scott Chevalier, “Purdue University Application Deep Dive”, LBNL Report, November 1, 2019, LBNL LBNL-2001318

Jason Zurawski, Jennifer Schopf, Hans Addleman, Doug Southworth, “Arcadia University Bioinformatics Application Deep Dive”, LBNL Report, July 8, 2019, LBNL LBNL-2001317

Jason Zurawski, Eli Dart, Lauren Rotman, Paul Wefel, Editors, “Nuclear Physics Network Requirements Review 2019 - Final Report”, ESnet Network Requirements Review, May 8, 2019, LBNL LBNL-2001281

Nicholas A Peters, Warren P Grice, Prem Kumar, Thomas Chapuran, Saikat Guha, Scott Hamilton, Inder Monga, Raymond Newell, Andrei Nomerotski, Don Towsley, Ben Yoo, “Quantum Networks for Open Science (QNOS) Workshop”, DOE Technical Report, April 1, 2019,

2016

Chevalier, S., Schopf, J. M., Miller, K., Zurawski, J., “Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure”, July 1, 2016, LBNL 1005797

Today's science collaborations depend on reliable, high-performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets.

The perfSONAR framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends.

This work explores low cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. We present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.
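A simple way to frame such an evaluation is to compare each small platform's measured throughput against a trusted reference host on the same path. The sketch below computes the relative error of each candidate; the numbers and node names are made up for illustration and are not results from the paper.

    # Illustrative comparison of low-cost measurement nodes against a
    # reference server on the same path (made-up numbers and names).
    reference_gbps = 9.4   # throughput measured by a full-size perfSONAR host

    candidates = {
        "small-node-1": 9.1,
        "small-node-2": 8.7,
        "small-node-3": 6.2,
    }

    for name, measured in sorted(candidates.items()):
        error = abs(measured - reference_gbps) / reference_gbps
        verdict = "acceptable" if error <= 0.10 else "investigate"
        print(f"{name}: {measured:.1f} Gbps ({error:.0%} off reference) -> {verdict}")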

2015

M Kiran, M Stannett, “Bitcoin risk analysis”, NEMODE Policy Paper, December 1, 2015,

Vincenzo Capone, Mary Hester, Florence Hudson, Lauren Rotman, “Connecting HPC and High Performance Networks for Scientists and Researchers”, November 2015,

Julian Borrill, Eli Dart, Brooklin Gore, Salman Habib, Steven T. Myers, Peter Nugent, Don Petravick, Rollin Thomas, “Improving Data Mobility & Management for International Cosmology”, CrossConnects 2015 Workshop, October 2, 2015, LBNL 1001456

Eli Dart, Mary Hester, and Jason Zurawski, Editors, “Biological and Environmental Research Network Requirements Review 2015 - Final Report”, ESnet Network Requirements Review, September 18, 2015, LBNL 1004370

Eli Dart, Mary Hester, Jason Zurawski, “Advanced Scientific Computing Research Network Requirements Review - Final Report 2015”, ESnet Network Requirements Review, April 22, 2015, LBNL 1005790

2014

Eli Dart, Mary Hester, Jason Zurawski, “Basic Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, September 2014, LBNL 6998E

Eli Dart, Mary Hester, Jason Zurawski, “Fusion Energy Sciences Network Requirements Review - Final Report 2014”, ESnet Network Requirements Review, August 2014, LBNL 6975E

B Mohammed, M Kiran, “Experimental Report on Setting up a Cloud Computing Environment at the University”, arXiv preprint arXiv:1412.4582, June 1, 2014,

2013

Eli Dart, Mary Hester, Jason Zurawski, Editors, “High Energy Physics and Nuclear Physics Network Requirements - Final Report”, ESnet Network Requirements Workshop, August 2013, LBNL 6642E

J. van der Ham, F. Dijkstra, R. Łapacz, J. Zurawski, “Network Markup Language Base Schema version 1”, Open Grid Forum, GFD-R-P.206, 2013,

2012

Eli Dart, Brian Tierney, Editors, “Biological and Environmental Research Network Requirements Workshop, November 2012 - Final Report”, November 29, 2012, LBNL LBNL-6395E

David Asner, Eli Dart, and Takanori Hara, “Belle-II Experiment Network Requirements”, October 2012, LBNL LBNL-6268E

The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab⁻¹ of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015.

In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration’s work.

The requirements identified at the Belle II Experiment Requirements workshop are summarized in the Findings section, and are described in more detail in this report. KEK invited Belle II organizations to attend a follow-up meeting hosted by PNNL during SC12 in Salt Lake City on November 13, 2012. The notes from this meeting are in Appendix C.

Eli Dart, Brian Tierney, editors, “Advanced Scientific Computing Research Network Requirements Review, October 2012 - Final Report”, ESnet Network Requirements Review, October 4, 2012, LBNL LBNL-6109E

Von Welch, Doug Pearson, Brian Tierney, and James Williams (eds)., “Security at the Cyberborder Workshop Report”, NSF Workshop, March 28, 2012,

2011

Eli Dart, “ESnet Requirements Workshops Summary for Sites”, ESCC Meeting, Clemson, SC, February 2, 2011,

Eli Dart, Lauren Rotman, Brian Tierney, editors, “Nuclear Physics Network Requirements Workshop, August 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5518E

Eli Dart, Brian Tierney, editors, “Fusion Energy Network Requirements Workshop, December 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5905E

2010

Joe Metzger, editor, “General Service Description for DICE Network Diagnostic Services”, December 1, 2010,

Eli Dart, Brian Tierney, editors, “Biological and Environmental Research Network Requirements Workshop, April 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4089E

Office of Biological and Environmental Research, DOE Office of Science; Energy Sciences Network; Rockville, MD — April 29 and 30, 2010. This is LBNL report LBNL-4089E.

Participants and Contributors: Kiran Alapaty, DOE/SC/BER (Atmospheric System Research) Ben Allen, LANL (Bioinformatics) Greg Bell, ESnet (Networking) David Benton, GLBRC/University of Wisconsin (Informatics) Tom Brettin, ORNL (Bioinformatics) Shane Canon, NERSC (Data Systems) Rich Carlson, DOE/SC/ASCR (Network Research) Steve Cotter, ESnet (Networking) Silvia Crivelli, LBNL (JBEI) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager) Narayan Desai, ANL (Networking) Richard Egan, ANL (ARM) Jeff Flick, NOAA (Networking) Ken Goodwin, PSC/NLR (Networking) Susan Gregurick, DOE/SC/BER (Computational Biology) Susan Hicks, ORNL (Networking) Bill Johnston, ESnet (Networking) Bert de Jong, PNNL (EMSL/HPC) Kerstin Kleese van Dam, PNNL (Data Management) Miron Livny, University of Wisconsin (Open Science Grid) Victor Markowitz, LBNL/JGI (Genomics) Jim McGraw, LLNL (HPC/Climate) Raymond McCord, ORNL (ARM) Chris Oehmen, PNNL (Bioinformatics/ScalaBLAST) Kevin Regimbal, PNNL (Networking/HPC) Galen Shipman, ORNL (ESG/Climate) Gary Strand, NCAR (Climate) Brian Tierney, ESnet (Networking) Susan Turnbull, DOE/SC/ASCR (Collaboratories, Middleware) Dean Williams, LLNL (ESG/Climate) Jason Zurawski, Internet2 (Networking)  

Editors: Eli Dart, ESnet; Brian Tierney, ESnet

Eli Dart, Brian Tierney, editors, “Basic Energy Sciences Network Requirements Workshop, September 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4363E

Office of Basic Energy Sciences, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD — September 22 and 23, 2010

Participants and Contributors; Alan Biocca, LBNL (Advanced Light Source); Rich Carlson, DOE/SC/ASCR (Program Manager); Jackie Chen, SNL/CA (Chemistry/Combustion); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Jim Davenport, DOE/SC/BES (BES Program); Alexander Gaenko, Ames Lab (Chemistry); Paul Kent, ORNL (Materials Science, Simulations); Monica Lamm, Ames Lab (Computational Chemistry); Stephen Miller, ORNL (Spallation Neutron Source); Chris Mundy, PNNL (Chemical Physics); Thomas Ndousse, DOE/SC/ASCR (ASCR Program); Mark Pederson, DOE/SC/BES (BES Program); Amedeo Perazzo, SLAC (Linac Coherent Light Source); Razvan Popescu, BNL (National Synchrotron Light Source); Damian Rouson, SNL/CA (Chemistry/Combustion); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Bobby Sumpter, ORNL (Computer Science and Mathematics and Center for Nanophase; Materials Sciences); Brian Tierney, ESnet (Networking); Cai-Zhuang Wang, Ames Lab (Computer Science/Simulations); Steve Whitelam, LBNL (Molecular Foundry); Jason Zurawski, Internet2 (Networking)

2009

“HEP (High Energy Physics) Network Requirements Workshop, August 2009 - Final Report”, ESnet Network Requirements Workshop, August 27, 2009, LBNL LBNL-3397E

Office of High Energy Physics, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-3397E.

Participants and Contributors: Jon Bakken, FNAL (LHC/CMS) Artur Barczyk, Caltech (LHC/Networking) Alan Blatecky, NSF (NSF Cyberinfrastructure) Amber Boehnlein, DOE/SC/HEP (HEP Program Office) Rich Carlson, Internet2 (Networking) Sergei Chekanov, ANL (LHC/ATLAS) Steve Cotter, ESnet (Networking) Les Cottrell, SLAC (Networking) Glen Crawford, DOE/SC/HEP (HEP Program Office) Matt Crawford, FNAL (Networking/Storage) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Michael Ernst, BNL (HEP/LHC/ATLAS) Ian Fisk, FNAL (LHC/CMS) Rob Gardner, University of Chicago (HEP/LHC/ATLAS) Bill Johnston, ESnet (Networking) Steve Kent, FNAL (Astroparticle) Stephan Lammel, FNAL (FNAL Experiments and Facilities) Stewart Loken, LBNL (HEP) Joe Metzger, ESnet (Networking) Richard Mount, SLAC (HEP) Thomas Ndousse-Fetter, DOE/SC/ASCR (Network Research) Harvey Newman, Caltech (HEP/LHC/Networking) Jennifer Schopf, NSF (NSF Cyberinfrastructure) Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager) Alan Stone, DOE/SC/HEP (HEP Program Office) Brian Tierney, ESnet (Networking) Craig Tull, LBNL (Daya Bay) Jason Zurawski, Internet2 (Networking)

 

“ASCR (Advanced Scientific Computing Research) Network Requirements Workshop, April 2009 - Final Report”, ESnet Networking Requirements Workshop, April 15, 2009, LBNL-2495E

Office of Advanced Scientific Computing Research, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-2495E.

Participants and Contributors: Bill Allcock, ANL (ALCF, GridFTP); Rich Carlson, Internet2 (Networking); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Brent Draney, NERSC (Networking and Security); Richard Gerber, NERSC (User Services); Mike Helm, ESnet (DOEGrids/PKI); Jason Hick, NERSC (Storage); Susan Hicks, ORNL (Networking); Scott Klasky, ORNL (OLCF Applications); Miron Livny, University of Wisconsin Madison (OSG); Barney Maccabe, ORNL (Computer Science); Colin Morgan, NOAA (Networking); Sue Morss, DOE/SC/ASCR (ASCR Program Office); Lucy Nowell, DOE/SC/ASCR (SciDAC); Don Petravick, FNAL (HEP Program Office); Jim Rogers, ORNL (OLCF); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Alex Sim, LBNL (Storage Middleware); Brian Tierney, ESnet (Networking); Susan Turnbull, DOE/SC/ASCR (Collaboratories/Middleware); Dean Williams, LLNL (ESG/Climate); Linda Winkler, ANL (Networking); Frank Wuerthwein, UC San Diego (OSG)

2008

“NP (Nuclear Physics) Network Requirements Workshop, May 2008 - Final Report”, ESnet Network Requirements Workshop, May 6, 2008, LBNL-1289E

Nuclear Physics Program Office, DOE Office of Science; Energy Sciences Network; Bethesda, MD. LBNL-1289E.

Participants and Contributors: Rich Carlson, Internet2 (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Michael Ernst, BNL (RHIC); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Andy Kowalski, JLAB (Networking); Jerome Lauret, BNL (STAR at RHIC); Charles Maguire, Vanderbilt (LHC CMS Heavy Ion); Douglas Olson, LBNL (STAR at RHIC and ALICE at LHC); Martin Purschke, BNL (PHENIX at RHIC); Gulshan Rai, DOE/SC (NP Program Office); Brian Tierney, ESnet (Networking); Chip Watson, JLAB (CEBAF); Carla Vale, BNL (PHENIX at RHIC)

“FES (Fusion Energy Sciences) Network Requirements Workshop, March 2008 - Final Report”, ESnet Network Requirements Workshop, March 13, 2008, LBNL-644E

Fusion Energy Sciences Program Office, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD. LBNL-644E.

Participants and Contributors: Rich Carlson, Internet2 (Networking); Tom Casper, LLNL (Fusion – LLNL); Dan Ciarlette, ORNL (ITER); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ASCR Program Office); Bill Dorland, University of Maryland (Fusion – Computation); Martin Greenwald, MIT (Fusion – Alcator C-Mod); Paul Henderson, PPPL (Fusion – PPPL Networking, PPPL); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); Ihor Holod, UC Irvine (Fusion – Computation, SciDAC); William Johnston, ESnet (Networking); Scott Klasky, ORNL (Fusion – Computation, SciDAC); John Mandrekas, DOE/SC (FES Program Office); Doug McCune, PPPL (Fusion – TRANSP user community, PPPL); Thomas Ndousse, DOE/SC/ASCR (ASCR Program Office); Ravi Samtaney, PPPL (Fusion – Computation, SciDAC); David Schissel, General Atomics (Fusion – DIII-D, Collaboratories); Yukiko Sekine, DOE/SC/ASCR (ASCR Program Office); Sveta Shasharina, Tech-X Corporation (Fusion – Computation); Brian Tierney, LBNL (Networking)

2007

“BER (Biological and Environmental Research) Network Requirements Workshop, July 2007 - Final Report”, ESnet Network Requirements Workshop, July 26, 2007,

Biological and Environmental Research Program Office, DOE Office of Science; Energy Sciences Network; Bethesda, MD – July 26 and 27, 2007. LBNL/PUB-988.

Participants and Contributors: Dave Bader, LLNL (Climate); Raymond Bair, ANL (Comp Bio); Anjuli Bamzai, DOE/SC BER; Paul Bayer, DOE/SC BER; David Bernholdt, ORNL (Earth System Grid); Lawrence Buja, NCAR (Climate); Alice Cialella, BNL (ARM Data); Eli Dart, ESnet (Networking); Eric Davis, LLNL (Climate); Bert DeJong, PNNL (EMSL); Dick Eagan, ANL (ARM); Yakov Golder, JGI (Comp Bio); Dave Goodwin, DOE/SC ASCR; Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Phil Jones, LANL (Climate); Raymond McCord, ORNL (ARM); Steve Meacham, NSF; George Michaels, PNNL (Comp Bio); Kevin Regimbal, PNNL (EMSL); Mike Sayre, NIH; Harris Shapiro, LBNL (JGI); Ellen Stechel, ASCAC; Brian Tierney, LBNL (Networking); Lee Tsengdar, NASA (Geosciences); Mike Wehner, LBNL (Climate); Trey White, ORNL (Climate)

“BES (Basic Energy Sciences) Network Requirements Workshop, June 2007 - Final Report”, ESnet Network Requirements Workshop, June 4, 2007, LBNL/PUB-981

Basic Energy Sciences Program Office, DOE Office of Science; Energy Sciences Network; Washington, DC – June 4 and 5, 2007. LBNL/PUB-981.

Participants and Contributors: Dohn Arms, ANL (Advanced Photon Source); Anjuli Bamzai, DOE/SC/BER (BER Program Office); Alan Biocca, LBNL (Advanced Light Source); Jackie Chen, SNL (Combustion Research Facility); Eli Dart, ESnet (Networking); Bert DeJong, PNNL (Chemistry); Paul Domagala, ANL (Computing and Information Systems); Yiping Feng, SLAC (LCLS/LUSI); David Goodwin, DOE/SC/ASCR (ASCR Program Office); Bruce Harmon, Ames Lab (Materials Science); Robert Harrison, UT/ORNL (Chemistry); Richard Hilderbrandt, DOE/SC/BES (BES Program Office); Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office); William Johnston, ESnet (Networking); Roger Klaffky, DOE/SC/BES (BES Program Office); Michael McGuigan, BNL (Center for Functional Nanomaterials); Stephen Miller, ORNL (Spallation Neutron Source); Richard Mount, SLAC (Linac Coherent Light Source); Jeff Neaton, LBNL (Molecular Foundry); Larry Rahn, SNL/BES (Combustion); Thomas Schulthess, ORNL (CNMS); Ken Sidorowicz, ANL (Advanced Photon Source); Ellen Stechel, SNL (ASCAC); Brian Tierney, LBNL (Networking); Linda Winkler, ANL (Networking); Zhijian Yin, BNL (National Synchrotron Light Source)

2006

Eli Dart, editor, “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006”, ESnet Networking Requirements Workshop, February 21, 2006,

Update to the 2002 Office of Science Networking Requirements Workshop Report, February 21, 2006. LBNL report LBNL-61832.

Contributors: Paul Adams, LBNL (Advanced Light Source); Shane Canon, ORNL (NLCF); Steven Carter, ORNL (NLCF); Brent Draney, LBNL (NERSC); Martin Greenwald, MIT (Magnetic Fusion Energy); Jason Hodges, ORNL (Spallation Neutron Source); Jerome Lauret, BNL (Nuclear Physics); George Michaels, PNNL (Bioinformatics); Larry Rahn, SNL (Chemistry); David Schissel, GA (Magnetic Fusion Energy); Gary Strand, NCAR (Climate Science); Howard Walter, LBNL (NERSC); Michael Wehner, LBNL (Climate Science); Dean Williams, LLNL (Climate Science).

2004

Gunter D., Leupolt M., Tierney B., Swany M. and Zurawski J., “A Framework for the Representation of Network Measurements”, LBNL Technical Report, 2004,

2003

“DOE Science Networking Challenge: Roadmap to 2008 - Report of the June 3-5, 2003, DOE Science Networking Workshop”, DOE Science Networking Workshop, June 3, 2003,

Report of the June 3-5, 2003, DOE Science Networking Workshop Conducted by the Energy Sciences Network Steering Committee at the request of the Office of Advanced Scientific Computing Research of the U.S. Department of Energy Office of Science.

Workshop Chair: Roy Whitney; Working Group Chairs: Wu-chun Feng, William Johnston, Nagi Rao, David Schissel, Vicky White, Dean Williams; Workshop Support: Sandra Klepec, Edward May; Report Editors: Roy Whitney, Larry Price; Energy Sciences Network Steering Committee: Larry Price (Chair), Charles Catlett, Greg Chartrand, Al Geist, Martin Greenwald, James Leighton, Raymond McCord, Richard Mount, Jeff Nichols, T.P. Straatsma, Alan Turnbull, Chip Watson, William Wing, Nestor Zaluzec.

2002

“High-Performance Networks for High-Impact Science”, High-Performance Network Planning Workshop, August 13, 2002,

Report of the High-Performance Network Planning Workshop. Conducted by the Office of Science, U.S. Department of Energy. August 13-15, 2002.

Participants and Contributors: Deb Agarwal, LBNL; Guy Almes, Internet2; Bill St. Arnaud, CANARIE Inc.; Ray Bair, PNNL; Arthur Bland, ORNL; Javad Boroumand, Cisco; William Bradley, BNL; James Bury, AT&T; Charlie Catlett, ANL; Daniel Ciarlette, ORNL; Tim Clifford, Level 3; Carl Cork, LBL; Les Cottrell, SLAC; David Dixon, PNNL; Tom Dunnigan, Oak Ridge; Aaron Falk, USC/Information Sciences Inst.; Ian Foster, ANL; Dennis Gannon, Indiana Univ.; Jason Hodges, ORNL; Ron Johnson, Univ. of Washington; Bill Johnston, LBNL; Gerald Johnston, PNNL; Wesley Kaplow, Qwest; Dale Koelling, US Department of Energy; Bill Kramer, LBNL/NERSC; Maxim Kowalski, JLab; Jim Leighton, LBNL/ESnet; Phil LoCascio, ORNL; Mari Maeda, NSF; Matt Mathis, Pittsburgh Supercomputing Center; William (Buff) Miner, US Department of Energy; Sandy Merola, LBNL; Thomas Ndousse-Fetter, US Department of Energy; Harvey Newman, Caltech; Peter O'Neil, NCAR; James Peppin, USC/Information Sciences Institute; Arnold Peskin, BNL; Walter Polansky, US Department of Energy; Larry Rahn, SNL; Anne Richeson, Qwest; Corby Schmitz, ANL; Thomas Schulthess, ORNL; George Seweryniak, US Department of Energy; David Schissel, General Atomics; Mary Anne Scott, US Department of Energy; Karen Sollins, MIT; Warren Strand, UCAR; Brian Tierney, LBL; Steven Wallace, Indiana University; James White, ORNL; Vicky White, US Department of Energy; Michael Wilde, ANL; Bill Wing, ORNL; Linda Winkler, ANL; Wu-chun Feng, LANL; Charles C. Young, SLAC.


Poster

2013

Se-Young Yu, Nevil Brownlee, Aniket Mahanti, “Comparative Analysis of Transfer Protocols For Big Data”, IFIP Performance 2013, 2013,

Other

2023

Nick Buraglio, Geoff Huston, Expanding the IPv6 Documentation Space, Internet Engineering Task Force Document, November 20, 2023,

The document describes the reservation of an additional IPv6 address prefix for use in documentation. The reservation of a /20 prefix allows documented examples to reflect a broader range of realistic current deployment scenarios.
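
The practical effect of a larger documentation block is easy to quantify. The short sketch below, using Python's standard ipaddress module, compares how many example /48 site prefixes fit in the original 2001:db8::/32 documentation block versus a /20 block; the specific /20 value shown (3fff::/20) is an assumption used only for illustration, since the entry above does not state which prefix was reserved.

    import ipaddress

    # RFC 3849 documentation prefix and an assumed additional /20 documentation prefix.
    # The /20 value (3fff::/20) is illustrative only; it is not taken from the entry above.
    original = ipaddress.ip_network("2001:db8::/32")
    assumed_new = ipaddress.ip_network("3fff::/20")

    # Number of /48 "example site" prefixes each block can represent.
    sites_in_original = 2 ** (48 - original.prefixlen)   # 65,536
    sites_in_new = 2 ** (48 - assumed_new.prefixlen)     # 268,435,456

    print(f"{original}: {sites_in_original:,} example /48 sites")
    print(f"{assumed_new}: {sites_in_new:,} example /48 sites")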

Nick Buraglio, Chris Cummings, Russ White, Unintended Operational Issues With ULA, Internet Engineering Task Force Document, October 20, 2023,

Under [RFC6724], ULA addressing is preferred below legacy IPv4 addressing, rendering ULA IPv6 deployment functionally unusable in IPv4/IPv6 dual-stacked environments. The lack of a consistent, supportable way to manipulate this behavior across all platforms and at scale is counter to the operational behavior of GUA IPv6 addressing on nearly all modern operating systems, which leverage a preference model based on [RFC6724].

Nick Buraglio, Tim Chown, Jeremy Duncan, Preference for IPv6 ULAs over IPv4 addresses in RFC6724, Internet Engineering Task Force Document, October 9, 2023,

This document updates [RFC6724] based on operational experience gained since its publication over ten years ago. In particular, it updates the precedence of Unique Local Addresses (ULAs) in the default address selection policy table, which, as originally defined by [RFC6724], is lower than that of legacy IPv4 addressing. The update places both IPv6 Global Unicast Addresses (GUAs) and ULAs ahead of all IPv4 addresses in the policy table to better suit operational deployment and management of ULAs in production. In updating the [RFC6724] default policy table, this document also demotes the preference for 6to4 addresses. These changes to default behavior improve supportability of common use cases such as, but not limited to, automatic/unmanaged scenarios. It is recognized that some less common deployment scenarios may require explicit configuration or custom changes to achieve desired operational parameters.
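
To make the precedence change concrete, here is a minimal sketch of RFC 6724-style longest-prefix policy lookup in Python. The "original" table uses the default precedence values from [RFC6724]; the "updated" table is illustrative only, showing the reordering described above (GUAs and ULAs ahead of all IPv4, 6to4 demoted) rather than the exact values chosen by the update.

    import ipaddress

    RFC6724_ORIGINAL = [          # (prefix, precedence) -- RFC 6724 defaults
        ("::1/128", 50),          # loopback
        ("::/0", 40),             # default (IPv6 GUA)
        ("::ffff:0:0/96", 35),    # IPv4-mapped, i.e. legacy IPv4
        ("2002::/16", 30),        # 6to4
        ("fc00::/7", 3),          # ULA -- below IPv4-mapped
    ]

    ILLUSTRATIVE_UPDATED = [      # placeholder values; higher = more preferred
        ("::1/128", 50),
        ("::/0", 40),
        ("fc00::/7", 37),         # ULA promoted above IPv4-mapped
        ("::ffff:0:0/96", 35),
        ("2002::/16", 5),         # 6to4 demoted
    ]

    def precedence(addr: str, table) -> int:
        """Return the precedence of the longest matching prefix for addr."""
        ip = ipaddress.ip_address(addr)
        matches = [(ipaddress.ip_network(p), prec) for p, prec in table
                   if ip in ipaddress.ip_network(p)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    ula = "fd00::1"                  # a ULA destination
    v4 = "::ffff:192.0.2.10"         # an IPv4 destination, as an IPv4-mapped address

    # Original table: IPv4 (35) beats ULA (3); updated table: ULA (37) beats IPv4 (35).
    print("original:", precedence(ula, RFC6724_ORIGINAL), "vs", precedence(v4, RFC6724_ORIGINAL))
    print("updated: ", precedence(ula, ILLUSTRATIVE_UPDATED), "vs", precedence(v4, ILLUSTRATIVE_UPDATED))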

Inder Monga, Liang Zhang, Yufeng Xin, Designing Quantum Routers for Quantum Internet, ASCR Basic Research Needs in Quantum Computing and Networking Workshop, July 11, 2023,

Yufeng Xin, Inder Monga, Liang Zhang, Hybrid Quantum Networks: Modeling and Optimization, ASCR Basic Research Needs in Quantum Computing and Networking Workshop, July 11, 2023,

Dale W. Carder, Tim Chown, Shawn McKee, Marian Babik, Use of the IPv6 Flow Label for WLCG Packet Marking, IETF Internet-Draft, 2023,

This document describes an experimentally deployed approach currently used within the Worldwide Large Hadron Collider Computing Grid (WLCG) to mark packets with their project (experiment) and application. The marking uses the 20-bit IPv6 Flow Label in each packet, with 15 bits used for semantics (community and activity) and 5 bits for entropy. Alternatives, in particular use of IPv6 Extension Headers (EH), were considered but found to not be practical. The WLCG is one of the largest worldwide research communities and has adopted IPv6 heavily for movement of many hundreds of PB of data annually, with the ultimate goal of running IPv6 only.
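
As a rough illustration of the bit packing involved, the sketch below splits a 20-bit flow label into 15 semantic bits plus 5 entropy bits. The 9/6 division of the semantic bits between community and activity, and the field ordering, are assumptions made here for illustration; the Scitags specification defines the actual layout.

    import random

    # Assumed layout for illustration only: 9 community bits, 6 activity bits,
    # 5 entropy bits (only the 15/5 semantics/entropy split is stated above).
    COMMUNITY_BITS = 9
    ACTIVITY_BITS = 6
    ENTROPY_BITS = 5

    def make_flow_label(community: int, activity: int) -> int:
        """Pack community, activity, and random entropy into a 20-bit flow label."""
        assert 0 <= community < 2 ** COMMUNITY_BITS
        assert 0 <= activity < 2 ** ACTIVITY_BITS
        entropy = random.getrandbits(ENTROPY_BITS)
        return (community << (ACTIVITY_BITS + ENTROPY_BITS)) | (activity << ENTROPY_BITS) | entropy

    def parse_flow_label(label: int) -> tuple[int, int, int]:
        """Recover (community, activity, entropy) from a 20-bit flow label."""
        entropy = label & (2 ** ENTROPY_BITS - 1)
        activity = (label >> ENTROPY_BITS) & (2 ** ACTIVITY_BITS - 1)
        community = label >> (ACTIVITY_BITS + ENTROPY_BITS)
        return community, activity, entropy

    label = make_flow_label(community=42, activity=7)
    assert label < 2 ** 20
    print(f"flow label = {label:#07x}", parse_flow_label(label))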

Nick Buraglio, X. Xiao, E. Vasilenko, E. Metz, G. Mishra, Selectively Isolating Hosts to Prevent Potential Neighbor Discovery Issues and Simplify IPv6 First-hops, Internet Engineering Task Force Document, July 9, 2023,

Neighbor Discovery (ND) is a key protocol of the IPv6 first hop. ND uses multicast extensively and trusts all hosts. In some scenarios, such as wireless networks, multicast can be inefficient. In other scenarios, such as public access networks, hosts may not be trustworthy. Consequently, ND has potential issues in various scenarios. The issues, and the solutions for them, are documented in more than 30 RFCs. It is difficult to keep track of all these issues and solutions, so an overview is useful. This document first summarizes the known ND issues and optimization solutions into a one-stop reference. Analyzing these solutions reveals an insight: isolating hosts is effective in preventing ND issues. Five isolation methods are proposed and their applicability is discussed. Guidelines are described for selecting a suitable isolation method based on the deployment scenario. When ND issues are prevented with a proper isolation method, the solutions for those issues are not needed, which simplifies IPv6 first hops.

2022

Autonomous Traffic (Self-Driving) Network with Traffic Classes and Passive Active Learning

U.S. Patent Application Ser. No. 18/052,614

2015

Wenji Wu, Phil DeMar, Liang Zhang, Packet capture engine for commodity network interface cards in high-speed networks, Patent US20160127276A1, United States, November 4, 2015,

ANL – Linda Winkler, Kate Keahey; Caltech – Harvey Newman, Ramiro Voicu; FNAL – Phil DeMar; LBNL/ESnet – Chin Guok, John MacAuley; LBNL/NERSC – Jason Hick; UMD/MAX – Tom Lehman, Xi Yang, Alberto Jimenez; “SENSE: SDN for End-to-end Networked Science at the Exascale”, August 1, 2015,

DOE-funded project.