ESnet

Publications and Presentations

2014

Karel van der Veldt, Inder Monga, Jon Dugan, Cees de Laat, Paola Grosso, “Carbon-aware path provisioning for NRENs”, International Green Computing Conference, November 3, 2014,

 

National Research and Education Networks (NRENs) are becoming increasingly interested in providing information on the energy consumption of their equipment. However, only a few NRENs are trying to use the available information to reduce power consumption and/or carbon footprint. We set out to study the impact that deploying energy-aware networking devices may have in terms of CO2 emissions, taking the ESnet network as a use case. We defined a model that can be used to select paths that lead to a lower impact on the CO2 footprint of the network. We implemented a simulation of the ESnet network using our model to investigate the CO2 footprint under different traffic conditions. Our results suggest that NRENs such as ESnet could reduce their network’s environmental impact if they deployed energy-aware hardware combined with path setup tailored to reducing the carbon footprint. This could be achieved by modifying the current path provisioning systems used in the NREN community.
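
As a rough illustration of the kind of carbon-aware path selection the paper studies, the sketch below (not the authors' code) runs a shortest-path computation over a topology whose link weights are estimated CO2 costs per unit of data moved; the node names and cost figures are hypothetical, not ESnet data.

    import heapq

    # Illustrative sketch: each edge weight is an estimated gCO2 cost per unit
    # of data moved across that hop (e.g. derived from device power draw and
    # the carbon intensity of the local electricity supply).
    def lowest_carbon_path(graph, src, dst):
        """Dijkstra's algorithm with CO2-per-unit edge weights."""
        queue = [(0.0, src, [src])]
        best = {src: 0.0}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            for nbr, co2 in graph.get(node, {}).items():
                new_cost = cost + co2
                if new_cost < best.get(nbr, float("inf")):
                    best[nbr] = new_cost
                    heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
        return float("inf"), []

    # Hypothetical topology: values are gCO2 per GB across each hop.
    topology = {
        "SITE-A": {"HUB-1": 4.0, "HUB-2": 2.5},
        "HUB-1": {"SITE-B": 3.0},
        "HUB-2": {"SITE-B": 3.5},
    }
    print(lowest_carbon_path(topology, "SITE-A", "SITE-B"))
    # -> (6.0, ['SITE-A', 'HUB-2', 'SITE-B'])

In a path provisioning system such as OSCARS, a carbon weight like this could stand in for hop count or latency as the path-selection metric.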

 

Jason Zurawski, Science Engagement: A Non-Technical Approach to the Technical Divide, Cyber Summit 2014: Crowdsourcing Innovation, September 25, 2014,

Jason Zurawski, Mary Hester, Of Mice and Elephants: Supporting Research with the Science DMZ and Software Defined Networking, Cyber Summit 2014: Crowdsourcing Innovation, September 24, 2014,

Henrique Rodrigues, Inder Monga, Abhinava Sadasivarao, Sharfuddin Syed, Chin Guok, Eric Pouyoul, Chris Liou, Tajana Rosing, “Traffic Optimization in Multi-Layered WANs using SDN”, IEEE Hot Interconnects (www.hoti.org), Best Student Paper Award, August 27, 2014,

Wide area networks (WANs) forward traffic through a mix of packet and optical data planes, composed of a variety of devices from different vendors. Multiple forwarding technologies and encapsulation methods are used for each data plane (e.g. IP, MPLS, ATM, SONET, Wavelength Switching). Despite defined standards, the control planes of these devices are usually not interoperable, and different technologies are used to manage each forwarding segment independently (e.g. OpenFlow, TL-1, GMPLS). The result is a lack of coordination between layers and inefficient resource usage. In this paper we discuss the design and implementation of a system that uses unmodified OpenFlow to optimize network utilization across layers, enabling practical bandwidth virtualization. We discuss strategies for scalable traffic monitoring and for minimizing losses on route updates across layers. We explore two use cases that benefit from multi-layer bandwidth on demand provisioning. A prototype of the system was built using a traditional circuit reservation application and an unmodified SDN controller, and its evaluation was performed on a multi-vendor testbed.
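
A minimal sketch of the kind of multi-layer decision described above, assuming utilization samples come from the packet layer and a circuit-reservation callable stands in for the optical provisioning step; the threshold, sample count, and names are illustrative, not taken from the paper.

    # Illustrative sketch (not the authors' implementation): when sampled
    # packet-layer utilization stays above a threshold, hand the heavy traffic
    # to an optical bypass circuit via a circuit-reservation callable.
    BYPASS_THRESHOLD = 0.7   # fraction of link capacity; illustrative value
    SAMPLES_REQUIRED = 3     # consecutive samples required before acting

    def should_provision_bypass(samples):
        recent = samples[-SAMPLES_REQUIRED:]
        return len(recent) == SAMPLES_REQUIRED and all(u > BYPASS_THRESHOLD for u in recent)

    def rebalance(link, samples, provision_circuit):
        # provision_circuit stands in for the circuit-reservation application
        # (an OSCARS-style request); its interface here is assumed.
        if should_provision_bypass(samples):
            provision_circuit(src=link[0], dst=link[1])

    rebalance(("rtr-a", "rtr-b"), [0.55, 0.72, 0.78, 0.81],
              lambda src, dst: print(f"requesting optical bypass {src} -> {dst}"))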

http://blog.infinera.com/2014/09/05/henrique-rodrigues-wins-best-student-paper-at-ieee-hot-interconnects-for-infinerabrocadeesnet-multi-layer-sdn-demo/

http://esnetupdates.wordpress.com/2014/09/05/esnet-student-assistant-henrique-rodrigues-wins-best-student-paper-award-at-hot-interconnects/

 

 

Jason Zurawski, FAIL-transfer: Removing the Mystery of Network Performance from Scientific Data Movement, XSEDE Campus Champions Webinar, August 20, 2014,

Jason Zurawski, A Brief Overview of the Science DMZ, Open Science Grid Campus Grids Webinar, May 23, 2014,

Malathi Veeraraghavan, Inder Monga, “Broadening the scope of optical circuit networks”, May 22, 2014,

 

Advances in optical communications and switching technologies are enabling energy-efficient, flexible, higher-utilization network operations. To take full advantage of these capabilities, the scope of optical circuit networks can be increased in both the vertical and horizontal directions. In the vertical direction, some of the existing Internet applications, transport-layer protocols, and application-programming interfaces need to be redesigned and new ones invented to leverage the high-bandwidth, low-latency capabilities of optical circuit networks. In the horizontal direction, inter-domain control and management protocols are required to create a global-scale interconnection of optical circuit-switched networks.

 

Jason Zurawski, Brian Tierney, Mary Hester, The Role of End-user Engagement for Scientific Networking, TERENA Networking Conference (TNC), May 20, 2014,

Jason Zurawski, Brian Tierney, An Overview in Emerging (and not) Networking Technologies, TERENA Networking Conference (TNC), May 19, 2014,

Jason Zurawski, Fundamentals of Data Movement Hardware, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, Essentials of perfSONAR, NSF CC-NIE PI Workshop, April 30, 2014,

Jason Zurawski, The perfSONAR Project at 10 Years: Status and Trajectory, GN3 (GÉANT) NA3, Task 2 - Campus Network Monitoring and Security Workshop, April 25, 2014,

Jason Zurawski, Network and Host Design to Facilitate High Performance Data Transfer, Globus World 2014, April 15, 2014,

Jason Zurawski, Brian Tierney, ESnet perfSONAR Update, 2014 Winter ESnet Site Coordinators Committee (ESCC) Meeting, February 25, 2014,

Jason Zurawski, Security and the perfSONAR Toolkit, Second NSF Workshop on perfSONAR based Multi-domain Network Performance Measurement and Monitoring (pSW 2014), February 21, 2014,

Overview of recent security breaches and practices for the perfSONAR Toolkit. 

2013

Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, and Jason Zurawski, “The Science DMZ: A Network Design Pattern for Data-Intensive Science”, SC13: The International Conference for High Performance Computing, Networking, Storage and Analysis, Best Paper Nominee. Denver CO, USA, ACM. DOI:10.1145/2503210.2503245, November 19, 2013, LBNL 6366E.

The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including network architecture, system configuration, cybersecurity, and performance tools, that creates an optimized network environment for science. We describe use cases from universities, supercomputing centers and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow, and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration, accelerating scientific discovery.

 

Nathan Hanford, Vishal Ahuja, Mehmet Balman, Matthew Farrens, Dipak Ghosal, Eric Pouyoul and Brian Tierney, “Characterizing the Impact of End-System Affinities On the End-to-End Performance of High-Speed Flows”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13, November 17, 2013,

Ezra Kissel, Martin Swany, Brian Tierney and Eric Pouyoul, “Efficient Wide Area Data Transfer Protocols for 100 Gbps Networks and Beyond”, The 3rd International Workshop on Network-aware Data Management, in conjunction with SC'13:, November 17, 2013,

Z. Yan, M. Veeraraghavan, C. Tracy, C. Guok, “On How to Provision Virtual Circuits for Network-Redirected Large-Sized, High-Rate Flows”, International Journal on Advances in Internet Technology, vol. 6, no. 3 & 4, 2013, November 1, 2013,

Campana S., Bonacorsi D., Brown A., Capone E., De Girolamo D., Fernandez Casani A., Flix Molina J., Forti A., Gable I., Gutsche O., Hesnaux A., Liu L., Lopez Munoz L., Magini N., McKee S., Mohammad K., Rand D., Reale M., Roiser S., Zielinski M., and Zurawski J., “Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology”, 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), October 2013,

Jason Zurawski, Sowmya Balasubramanian, Aaron Brown, Ezra Kissel, Andrew Lake, Martin Swany, Brian Tierney, Matt Zekauskas, “perfSONAR: On-board Diagnostics for Big Data”, 1st Workshop on Big Data and Science: Infrastructure and Services Co-located with IEEE International Conference on Big Data 2013 (IEEE BigData 2013), October 6, 2013,

Jason Zurawski, The Science DMZ - Architecture, Monitoring Performance, and Constructing a DTN, Operating Innovative Networks (OIN), October 3, 2013,

Eli Dart, Mary Hester, Jason Zurawski, Editors, “High Energy Physics and Nuclear Physics Network Requirements - Final Report”, ESnet Network Requirements Workshop, August 2013, LBNL 6642E

Abhinava Sadasivarao, Sharfuddin Syed, Chris Liou, Ping Pan, Andrew Lake, Chin Guok, Inder Monga, “Open Transport Switch - A Software Defined Networking Architecture for Transport Networks”, August 17, 2013,

 

There have been many proposals to unify the control and management of packet and circuit networks, but none have been deployed widely. In this paper, we propose a simple programmable architecture that abstracts a core transport node into a programmable virtual switch, that meshes well with the software-defined network paradigm while leveraging the OpenFlow protocol for control. A demonstration use-case of an OpenFlow-enabled optical virtual switch implementation managing a small optical transport network for big-data applications is described. With appropriate extensions to OpenFlow, we discuss how the programmability and flexibility SDN brings to packet-optical backbone networks will be substantial in solving some of the complex multi-vendor, multi-layer, multi-domain issues service providers face today.
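
The following sketch illustrates the general idea of exposing a transport node as a virtual switch whose flow entries translate into optical cross-connects; the class and field names are hypothetical and are not the Open Transport Switch API.

    from dataclasses import dataclass, field

    # Hypothetical sketch (not the OTS implementation): an OpenFlow-style
    # match/action entry is translated into an optical cross-connect on the
    # underlying transport element.
    @dataclass
    class FlowEntry:
        in_port: int
        out_port: int
        wavelength: str    # e.g. an ITU grid channel identifier

    @dataclass
    class OpticalVirtualSwitch:
        node_id: str
        cross_connects: list = field(default_factory=list)

        def add_flow(self, entry: FlowEntry):
            # Installing a "flow" programs the equivalent cross-connect.
            self.cross_connects.append((entry.in_port, entry.out_port, entry.wavelength))
            print(f"{self.node_id}: xconnect {entry.in_port} -> {entry.out_port} on {entry.wavelength}")

    ovs = OpticalVirtualSwitch("transport-node-1")
    ovs.add_flow(FlowEntry(in_port=1, out_port=7, wavelength="ch-32"))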

 

Abhinava Sadasivarao, Sharfuddin Syed, Ping Pan, Chris Liou, Andy Lake, Chin Guok, Inder Monga, Open Transport Switch: A Software Defined Networking Architecture for Transport Networks, Workshop, August 16, 2013,

Presentation at HotSDN Workshop as part of SIGCOMM 2013

Jason Zurawski, Kathy Benninger, Network PerformanceTutorial featuring perfSONAR, XSEDE13: Gateway to Discovery, July 22, 2013,

Jason Zurawski, A Completely Serious Overview of Network Performance for Scientific Networking, Focused Technical Workshop: Network Issues for Life Sciences Research, July 18, 2013,

Jason Zurawski, Site Performance Measurement & Monitoring Best Practices, 2013 Summer ESnet Site Coordinators Committee (ESCC) Meeting, July 16, 2013,

Baris Aksanli, Jagannathan Venkatesh, Tajana Rosing, Inder Monga, “A Comprehensive Approach to Reduce the Energy Cost of Network of Datacenters”, International Symposium on Computers and Communications, Best Student Paper award, July 7, 2013,

Best Student Paper

Several studies have proposed job migration over the wide area network (WAN) to reduce the energy of networks of datacenters by taking advantage of different electricity prices and load demands. Each study focuses on only a small subset of network parameters and thus their results may have large errors. For example, datacenters usually have long-term power contracts instead of paying market prices. However, previous work neglects these contracts, thus overestimating the energy savings by 2.3x. We present a comprehensive approach to minimize the energy cost of networks of datacenters by modeling performance of the workloads, power contracts, local renewable energy sources, different routing options for WAN and future router technologies. Our method can reduce the energy cost of datacenters by up to 28%, while reducing the error in the energy cost estimation by 2.6x.
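
A toy calculation (with invented prices, not the paper's data) of why ignoring a long-term power contract inflates the estimated savings from migrating jobs to a site with a lower market price:

    # Toy arithmetic with invented prices: how assuming market prices at both
    # sites inflates the estimated savings of migrating jobs away from a site
    # that actually pays a flat contract price.
    energy_kwh = 1000.0                        # energy consumed by the migrated jobs
    contract_price = 0.06                      # $/kWh under the long-term contract
    market_home, market_remote = 0.12, 0.05    # spot prices at the two sites

    savings_market_model = energy_kwh * (market_home - market_remote)            # $70
    savings_with_contract = energy_kwh * max(contract_price - market_remote, 0)  # $10

    print(savings_market_model / savings_with_contract)   # 7x overestimate in this toy case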

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, SSERCA / FLR Summit, June 14, 2013,

William E. Johnston, Eli Dart, Michael Ernst, Brian Tierney, “Enabling high throughput in widely distributed data management and analysis systems: Lessons from the LHC”, TERENA Networking Conference, June 3, 2013,

Jason Zurawski, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Great Plains Network Annual Meeting, May 29, 2013,

Jason Zurawski, Matt Lessins, Things That Go Bump in the Net: Implementing and Securing a Scientific Network, Merit Member Conference 2013, May 15, 2013,

Lauren Rotman, Jason Zurawski, Building User Outreach Strategies: Challenges & Best Practices, Internet2 Annual Meeting, April 22, 2013,

Z. Yan, M. Veeraraghavan, C. Tracy, and C. Guok, “On how to Provision Quality of Service (QoS) for Large Dataset Transfers”, Proceedings of the Sixth International Conference on Communication Theory, Reliability, and Quality of Service, April 21, 2013,

Michael Sinatra, IPv6 Deployment Panel Discussion, Department of Energy Information Managers’ Conference, April 2013,

Jason Zurawski, Network Tools Tutorial, Internet2 Annual Meeting, April 11, 2013,

Bill Johnston, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, The Third International Conference on Parallel, Distributed, Grid and Cloud Computing for Engineering, March 2013,

Jason Zurawski, Networking Potpourri, OSG All Hands Meeting, March 11, 2013,

Jason Zurawski, Debugging Network Performance With perfSONAR, eduPERT Performance U! Winter school, March 6, 2013,

Joe Metzger, ESnet5 Network Engineering Group Update, Winter ESCC 2013, January 2013,

Inder Monga, Network Abstractions: The first step towards a programmable WAN, TIP 2013, January 14, 2013,

University campuses, Supercomputer centers and R&E networks are challenged to architect, build and support IT infrastructure to deal effectively with the data deluge facing most science disciplines. Hybrid network architecture, multi-domain bandwidth reservations, performance monitoring and GLIF Open Lightpath Exchanges (GOLE) are examples of network architectures that have been proposed, championed and implemented successfully to meet the needs of science. This talk explores a new "one virtual switch" abstraction leveraging software-defined networking and OpenFlow concepts, that provides the science users a simple, adaptable network framework to meet their future application requirements. The talk will include the high-level design that includes use of OpenFlow and OSCARS as well as implementation details from the demonstration planned for Supercomputing.

Michael Sinatra, DNSSEC: Signing, Validating, and Troubleshooting, TIP 2013: Joint Techs, January 2013,

Eli Dart, Brian Tierney, Raj Kettimuthu, Jason Zurawski, Achieving the Science DMZ, January 13, 2013,

Tutorial at TIP2013, Honolulu, HI

  • Part 1: Architecture and Security
  • Part 2: Data Transfer Nodes and Data Transfer Tools
  • Part 3: perfSONAR

 

 

Joe Metzger, Lessons Learned Deploying a 100G Nationwide Network, TIP 2013, January 2013,

J. van der Ham, F. Dijkstra, R. Łapacz, J. Zurawski, “Network Markup Language Base Schema version 1”, Open Grid Forum, GFD-R-P.206, 2013,

2012

Brian Tierney, Efficient Data Transfer Protocols for Big Data, CineGrid Workshop, December 2012,

M. Boddie, T. Entel, C. Guok, A. Lake, J. Plante, E. Pouyoul, B. H. Ramaprasad, B. Tierney, J. Triay, V. M. Vokkarane, On Extending ESnet's OSCARS with a Multi-Domain Anycast Service, IEEE ONDM 2012, December 2012,

Inder Monga, Introduction to Bandwidth on Demand to LHCONE, LCHONE Point-to-point Service Workshop, December 13, 2012,

Introducing Bandwidth on Demand concepts to the application community of CMS and ATLAS experiments.

Michael Sinatra, Risks of Not Deploying IPv6 Now, CANS 2012, December 2012,

Michael Sinatra, IPv6 Measurement Related Activities, CANS 2012, December 2012,

Michael Sinatra, Don’t Ignore the Substrate: What Networkers Need to Worry About in the Era of Big Clouds and Big Data, Merit Networkers Workshop, December 2012,

Inder Monga, Software Defined Networking for big-data science, Worldwide LHC Grid meeting, December 2012,

Eli Dart, Brian Tierney, Editors, “Biological and Environmental Research Network Requirements Workshop, November 2012 - Final Report”, November 29, 2012, LBNL LBNL-6395E

Inder Monga, Eric Pouyoul, Chin Guok, Software Defined Networking for big-data science, SuperComputing 2012, November 15, 2012,

 

The emerging era of “Big Science” demands the highest possible network performance. End-to-end circuit automation and workflow-driven customization are two essential capabilities needed for networks to scale to meet this challenge. This demonstration showcases how combining software-defined networking techniques with virtual circuits capabilities can transform the network into a dynamic, customer-configurable virtual switch. In doing so, users are able to rapidly customize network capabilities to meet their unique workflows with little to no configuration effort. The demo also highlights how the network can be automated to support multiple collaborations in parallel.

 

Greg Bell, Network as Instrument, CANARIE Users’ Forum, November 2012,

Greg Bell, Measuring Success in R&E Networking, The Quilt, November 2012,

Greg Bell, Lead Panel on DOE Computing Resources, National Laboratory Day in Mississippi, November 12, 2012,

Yufei Ren, Tan Li, Dantong Yu, Shudong Jin, Thomas Robertazzi, Brian L. Tierney, Eric Pouyoul, “Protocols for Wide-Area Data-intensive Applications: Design and Performance Issues”, Proceedings of IEEE Supercomputing 2012, November 12, 2012,

Providing high-speed data transfer is vital to various data-intensive applications. While there have been remarkable technology advances to provide ultra-high-speed network bandwidth, existing protocols and applications may not be able to fully utilize the bare-metal bandwidth due to their inefficient design. We identify that the same problem remains in the field of Remote Direct Memory Access (RDMA) networks. RDMA offloads TCP/IP protocols to hardware devices. However, its benefits have not been fully exploited due to the lack of efficient software and application protocols, in particular in wide-area networks. In this paper, we address the design choices to develop such protocols. We describe a protocol implemented as part of a communication middleware. The protocol has its flow control, connection management, and task synchronization. It maximizes the parallelism of RDMA operations. We demonstrate its performance benefit on various local and wide-area testbeds, including the DOE ANI testbed with RoCE links and InfiniBand links.

 

Wu, Q., Yun, D., Zhu, M., Brown, P., and Zurawski, J., “A Workflow-based Network Advisor for Data Movement with End-to-end Performance Optimization”, The Seventh Workshop on Workflows in Support of Large-Scale Science (WORKS12), Salt Lake City Utah, USA, November 2012,

Gunter D., Kettimuthu R., Kissel E., Swany M., Yi J., Zurawski J., “Exploiting Network Parallelism for Improving Data Transfer Performance”, IEEE/ACM Annual SuperComputing Conference (SC12) Companion Volume, Salt Lake City Utah, USA, November 2012,

Inder Monga, Programmable Information Highway, November 11, 2012,

 

Suggested Panel Questions:

- What do you envision will have a dramatic impact on future networking and data management? What research challenges do you expect in achieving your vision?

- Do we need to re-engineer existing tools and middleware software? Elaborate on network management middleware in terms of virtual circuits, performance monitoring, and diagnosis tools.

- How do current applications match increasing data sizes and enhancements in network infrastructure? Please list a few network-aware applications. What is the scope of networking in the application domain?

- Resource management and scheduling problems are gaining importance due to current developments in utility computing and high interest in Cloud infrastructure. Explain your vision. What sort of algorithms/mechanisms will practically be used in the future?

- What are the main issues in designing/modelling cutting edge dynamic networks for large-scale data processing? What sort of performance problems do you expect?

- What necessary steps do we need to take to benefit from next-generation high-bandwidth networks? Do you think there will be radical changes such as novel APIs or new network stacks?

 

Inder Monga, Eric Pouyoul, Chin Guok, “Software Defined Networking for big-data science (paper)”, SuperComputing 2012, November 11, 2012,

 

University campuses, Supercomputer centers and R&E networks are challenged to architect, build and support IT infrastructure to deal effectively with the data deluge facing most science disciplines. Hybrid network architecture, multi-domain bandwidth reservations, performance monitoring and GLIF Open Lightpath Exchanges (GOLE) are examples of network architectures that have been proposed, championed and implemented successfully to meet the needs of science. Most recently, Science DMZ, a campus design pattern that bypasses traditional performance hotspots in typical campus network implementation, has been gaining momentum. In this paper and corresponding demonstration, we build upon the SC11 SCinet Research Sandbox demonstrator with Software-Defined networking to explore new architectural approaches. A virtual switch network abstraction is explored, that when combined with software-defined networking concepts provides the science users a simple, adaptable network framework to meet their upcoming application requirements. 

 

I. Monga, E. Pouyoul, C. Guok, Software-Defined Networking for Big-Data Science – Architectural Models from Campus to the WAN, SC12: IEEE HPC, November 2012,

Inder Monga, Software-defined networking (SDN) and OpenFlow: Hot topics in networking, Masters Class at CMU, NASA Ames, October 2012,

David Asner, Eli Dart, and Takanori Hara, “Belle-II Experiment Network Requirements”, October 2012, LBNL LBNL-6268E

The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab-1 of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015.

In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration’s work.

The requirements identified at the Belle II Experiment Requirements workshop are summarized in the Findings section, and are described in more detail in this report. KEK invited Belle II organizations to attend a follow-up meeting hosted by PNNL during SC12 in Salt Lake City on November 13, 2012. The notes from this meeting are in Appendix C.

Brian Tierney, ESnet’s Research Testbeds, GLIF Meeting, October 2012,

Paola Grosso, Inder Monga, Cees DeLaat, GreenSONAR, GLIF, October 12, 2012,

Michael Sinatra, DNS Security: Panel Discussion, NANOG 56, October 2012,

Eli Dart, Network expectations, or what to tell your system administrator, ALS user group meeting tomography workshop, October 2012,

C.Guok, E, Chaniotakis, A. Lake, OSCARS Production Deployment Experiences, GLIF NSI Operationalization Meeting, October 2012,

Inder Monga, Bill St. Arnaud, Erik-Jan Bos, Defining GLIF Architecture Task Force, GLIF, October 11, 2012,

12th Annual LambdaGrid Workshop in Chicago

Brian Tierney, Ezra Kissel, Martin Swany, Eric Pouyoul, “Efficient Data Transfer Protocols for Big Data”, Proceedings of the 8th International Conference on eScience, IEEE, October 9, 2012,

Abstract—Data set sizes are growing exponentially, so it is important to use data movement protocols that are the most efficient available. Most data movement tools today rely on TCP over sockets, which limits flows to around 20Gbps on today’s hardware. RDMA over Converged Ethernet (RoCE) is a promising new technology for high-performance network data movement with minimal CPU impact over circuit-based infrastructures. We compare the performance of TCP, UDP, UDT, and RoCE over high latency 10Gbps and 40Gbps network paths, and show that RoCE-based data transfers can fill a 40Gbps path using much less CPU than other protocols. We also show that the Linux zero-copy system calls can improve TCP performance considerably, especially on current Intel “Sandy Bridge”-based PCI Express 3.0 (Gen3) hosts.
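
As an illustration of the zero-copy path mentioned in the abstract, the hedged sketch below uses the Linux sendfile(2) system call (via Python's os.sendfile) to stream a file into a TCP socket without copying data through user space; the host, port, and file path are placeholders, and this is not the benchmark code used in the paper.

    import os
    import socket

    # Minimal sketch of a zero-copy TCP send using the Linux sendfile(2)
    # system call via os.sendfile; host, port, and path below are placeholders.
    def zero_copy_send(path, host, port, chunk=1 << 20):
        with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            offset = 0
            while offset < size:
                # The kernel moves file pages straight to the socket buffer,
                # avoiding a read()/write() round trip through user space.
                sent = os.sendfile(sock.fileno(), f.fileno(), offset, chunk)
                if sent == 0:
                    break
                offset += sent

    # zero_copy_send("/data/bigfile", "receiver.example.org", 5001)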

 

Eli Dart, Brian Tierney, editors, “Advanced Scientific Computing Research Network Requirements Review, October 2012 - Final Report”, ESnet Network Requirements Review, October 4, 2012, LBNL LBNL-6109E

Inder Monga, Network Service Interface: Concepts and Architecture, I2 Fall Member Meeting, September 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ECT2012: The Eighth International Conference on Engineering Computational Technology, September 2012,

Brian Tierney, High Performance Bulk Data Transfer with Globus Online, Webinar, September 2012,

Mike Bennett, Energy Efficiency in IEEE Ethernet Networks – Current Status and Prospects for the Future, Joint ITU/IEEE Workshop on Ethernet--Emerging Applications and Technologies, September 2012,

Mike Bennett, EEE for P802.3bm, Objective Proposal, IEEE 40G and 100G Next Generation Optics Task Force, IEEE 802.3 Interim meeting, September 2012,

Mike Bennett, An Overview of Energy-Efficient Ethernet, NGBASE-T Study Group, IEEE 802.3 Interim meeting, September 2012,

Greg Bell, Network as Instrument, NORDUnet 2012, September 2012,

Inder Monga, Architecting and Operating Energy-Efficient Networks, September 10, 2012,

The presentation outlines the network energy efficiency challenges, the growth of network traffic and the simulation use-case to build next-generation energy-efficient network designs.

Greg Bell, ESnet Dark Fiber Footprint and Testbed, CESNET Customer Empowered Fiber Networks Workshop, September 2012,

Brian Tierney, ESnet perfSONAR-PS Plans and Perspective, OSG Meeting, August 2012,

Eli Dart, Networks for Data Intensive Science Environments, BES Neutron and Photon Detector Workshop, August 2012,

Bill Johnston, Eli Dart, and Brian Tierney, Addressing the Problem of Data Mobility for Data-Intensive Science: Lessons Learned from the data analysis and data management systems of the LHC, ARNES: The Academic and Research Network of Slovenia, August 2012,

Greg Bell, ESnet Manifesto, Joint Techs Conference, July 2012,

Inder Monga, Eric Pouyoul, Chin Guok, Eli Dart, SDN for Science Networks, Summer Joint Techs 2012, July 17, 2012,

Inder Monga, A Data-Intensive Network Substrate for eResearch, eScience Workshop, July 2012,

Jon Dugan, The MyESnet Portal: Making the Network Visible, Summer 2012 ESCC/Internet2 Joint Techs, July 2012,

Inder Monga, Marching Towards …a Net-Zero Network, WIN2012 Conference, July 2012,

Greg Bell, ESnet Update, ESnet Coordinating Committee Meeting, July 2012,

Joe Metzger, ANI & ESnet5, Summer ESCC 2012, July 2012,

Mehmet Balman, Eric Pouyoul, Yushu Yao, E. Wes Bethel, Burlen Loring, Prabhat, John Shalf, Alex Sim, Brian L. Tierney, “Experiences with 100Gbps Network Applications”, The Fifth International Workshop on Data Intensive Distributed Computing (DIDC 2012), June 20, 2012,

100Gbps networking has finally arrived, and many research and educational institutions have begun to deploy 100Gbps routers and services. ESnet and Internet2 worked together to make 100Gbps networks available to researchers at the Supercomputing 2011 conference in Seattle, Washington. In this paper, we describe two of the first applications to take advantage of this network. We demonstrate a visualization application that enables remotely located scientists to gain insights from large datasets. We also demonstrate climate data movement and analysis over the 100Gbps network. We describe a number of application design issues and host tuning strategies necessary for enabling applications to scale to 100Gbps rates.
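
One standard piece of the host tuning mentioned above is sizing TCP buffers to the bandwidth-delay product of the path; the arithmetic below uses illustrative link speeds and round-trip times, not measurements from the SC11 demonstration.

    # Bandwidth-delay product arithmetic commonly used when sizing TCP buffers
    # for long-distance, high-speed transfers; speeds and RTT are illustrative.
    def bdp_bytes(bandwidth_gbps, rtt_ms):
        """Bytes in flight needed to keep a path of this bandwidth and RTT full."""
        return int(bandwidth_gbps * 1e9 / 8 * rtt_ms / 1e3)

    for gbps, rtt in [(10, 50), (40, 50), (100, 50)]:
        print(f"{gbps:3d} Gbps, {rtt} ms RTT -> {bdp_bytes(gbps, rtt) / 2**20:.0f} MiB of buffer")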

Inder Monga, Energy Efficiency starts with measurement, Greentouch Meeting, June 2012,

Chris Tracy, 100G Deployment--Challenges & Lessons Learned from the ANI Prototype & SC11, NANOG 55, June 2012,

Inder Monga, ESnet Update: Networks and Research, JGNx and NTT, June 2012,

Jon Dugan, Gopal Vaswani, Gregory Bell, Inder Monga, “The MyESnet Portal: Making the Network Visible”, TERENA 2012 Conference, May 22, 2012,

 

ESnet provides a platform for moving large data sets and accelerating worldwide scientific collaboration. It provides high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions, enabling them to collaborate on some of the world's most important scientific challenges including renewable energy sources, climate science, and the origins of the universe.

ESnet has embarked on a major project to provide substantial visibility into the inner-workings of the network by aggregating diverse data sources, exposing them via web services, and visualizing them with user-centered interfaces. The portal’s strategy is driven by understanding the needs and requirements of ESnet’s user community and carefully providing interfaces to the data to meet those needs. The 'MyESnet Portal' allows users to monitor, troubleshoot, and understand the real time operations of the network and its associated services.

This paper will describe the MyESnet portal and the process of developing it. The data for the portal comes from a wide variety of sources: homegrown systems, commercial products, and even peer networks. Some visualizations from the portal are presented, highlighting interesting and unusual cases such as power consumption and flow data. Developing effective user interfaces is an iterative process. When a new feature is released, users are both interviewed and observed using the site. This process yielded valuable insights into what is important to users and what other features and services they may want. Open source tools were used to build the portal, and the pros and cons of these tools are discussed.

 

Brian Tierney, ESnet’s Research Testbed, LSN Meeting, May 2012,

Eli Dart, High Performance Networks to Enable and Enhance Scientific Productivity, WRNP 13, May 2012,

Greg Bell, ESnet Overview, LBNL Advisory Board, May 2012,

Jon Dugan, The MyESnet Portal: Making the Network Transparent, TERENA Networking Conference 2012, May 2012,

Bill Johnston, Evolution of R&E Networks to Enable LHC Science, Istituto Nazionale di Fisica Nucleare (INFN) and Italian Research & Education Network network (GARR) joint meeting, May 2012,

Bill Johnston, Some ESnet Observations on Using and Managing OSCARS Point-to-Point Circuit, LHCONE / LHCOPN meeting, May 2012,

Bill Johnston and Eli Dart, The Square Kilometer Array: A next generation scientific instrument and its implications for networks (and possible lessons from the LHC experience), TERENA Networking Conference 2012, May 2012,

Brian Tierney, ESnet, the Science DMZ, and the role of Globus Online, Globus World, April 2012,

Michael Sinatra, IPv6 Panel: Successes and Setbacks, ARIN XXIX, April 2012,

Eli Dart, Cyberinfrastructure for Data Intensive Science, Joint Techs: Internet2 Spring Member Meeting, April 2012,

Von Welch, Doug Pearson, Brian Tierney, and James Williams (eds)., “Security at the Cyberborder Workshop Report”, NSF Workshop, March 28, 2012,

Greg Bell, ESnet Update, National Laboratory CIO Meeting, March 2012,

Greg Bell, ESnet Update, CENIC Annual Conference, March 2012,

McKee S., Lake A., Laurens P., Severini H., Wlodek T., Wolff S., and Zurawski J., “Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

Zurawski J., Ball R., Barczyk A., Binkley M., Boote J., Boyd E., Brown A., Brown R., Lehman T., McKee S., Meekhof B., Mughal A., Newman H., Rozsa S., Sheldon P., Tackett A., Voicu R., Wolff S., and Yang X., “The DYNES Instrument: A Description and Overview”, 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2012), New York, NY, USA, March 2012,

C. Guok, I. Monga, IDCP and NSI: Lessons Learned, Deployments and Gap Analysis, OGF 34, March 2012,

Andy Lake, Network Performance Monitoring and Measuring Using perfSONAR, CENIC 2012, March 2012,

Baris Aksanli, Tajana Rosing, Inder Monga, “Benefits of Green Energy and Proportionality in High Speed Wide Area Networks Connecting Data Centers”, Design, Automation and Test in Europe (DATE), March 5, 2012,

Abstract: Many companies deploy multiple data centers across the globe to satisfy the dramatically increased computational demand. Wide area connectivity between such geographically distributed data centers has an important role to ensure both the quality of service, and, as bandwidths increase to 100Gbps and beyond, as an efficient way to dynamically distribute the computation. The energy cost of data transmission is dominated by the router power consumption, which is unfortunately not energy proportional. In this paper we not only quantify the performance benefits of leveraging the network to run more jobs, but also analyze its energy impact. We compare the benefits of redesigning routers to be more energy efficient to those obtained by leveraging locally available green energy as a complement to the brown energy supply. Furthermore, we design novel green energy aware routing policies for wide area traffic and compare them to a state-of-the-art shortest-path routing algorithm. Our results indicate that using energy proportional routers powered in part by green energy along with our new routing algorithm results in a 10x improvement in per router energy efficiency with a 36% average increase in the number of jobs completed.
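
A toy comparison (all wattages invented, not the paper's measurements) of the energy-proportionality argument in the abstract: a conventional router draws nearly full power even at low utilization, while an energy-proportional design scales its draw with load.

    # Toy comparison with invented wattages illustrating energy proportionality.
    def conventional_power(max_w, utilization):
        return 0.95 * max_w + 0.05 * max_w * utilization   # nearly flat draw

    def proportional_power(idle_w, max_w, utilization):
        return idle_w + (max_w - idle_w) * utilization      # scales with load

    util = 0.3   # illustrative average utilization
    print(conventional_power(3000, util))        # ~2895 W
    print(proportional_power(150, 3000, util))   # ~1005 W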

 

T. Lehman, C. Guok, Advanced Resource Computation for Hybrid Service and TOpology Networks (ARCHSTONE), DOE ASCR PI Meeting, March 2012,

Eli Dart, Network Impacts of Data Intensive Science, Ethernet Technology Summit, February 2012,

Inder Monga, Enabling Science at 100G, ON*Vector Conference, February 2012,

Bill Johnston and Eli Dart, Design Patterns for Data-Intensive Science--LHC lessons and SKA, Pawsey Supercomputer Center User Meeting, February 2012,

Michael Sinatra, ESnet as an ISP, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Michael Sinatra, Site IPv6 Deployment Status & Issues, Winter ESnet Site Coordinators Committee Meeting, January 26, 2012,

Inder Monga, John MacAuley, GLIF NSI Implementation Task Force Presentation, Winter GLIF Tech Meeting at Baton Rouge, LA, January 26, 2012,

Eric Pouyoul, Brian Tierney, Achieving 98Gbps of Cross-country TCP traffic using 2.5 hosts, 10 10G NICs, and 10 TCP streams, Winter 2012 Joint Techs, January 25, 2012,

Joe Metzger, ESnet 5 Deployment Plans, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Greg Bell, Science at the Center: ESnet Update, Joint Techs, January 25, 2012,

In this talk, Acting Director Greg Bell will provide an update on ESnet's recent activities through the lens of its mission to accelerate discovery for researchers in the DOE Office of Science. Topics covered: what makes ESnet distinct? Why does its Science DMZ strategy matter? What are potential 'design patterns' for data-intensive science? Does 100G matter?

Patty Giuntoli, Sheila Cisko, ESnet Collaborative Services (ECS) / RCWG updates, Winter ESnet Site Coordinators Committee Meeting, January 25, 2012,

Eli Dart, Brent Draney, National Laboratory Success Stories, Joint Techs, January 24, 2012,

Reports from ESnet and National Laboratories that have successfully deployed methods to enhance their infrastructure support for data intensive science.

This presentation will discuss the challenges and lessons learned in the deployment of the 100GigE ANI Prototype network and support of 100G circuit services during SC11 in Seattle. Interoperability, testing, measurement, debugging, and operational issues at both the optical and layer-2/3 will be addressed. Specific topics will include: (1) 100G pluggable optics – options, support, and standardization issues, (2) Factors negatively affecting 100G line-side transmission, (3) Saturation testing and measurement with hosts connected at 10G, (4) Debugging and fault isolation with creative use of loops/circuit services, (5) Examples of interoperability problems in a multi-vendor environment, and (6) Case study: Transport of 2x100G waves to SC11.

Chin Guok, Evolution of OSCARS, Joint Techs, January 23, 2012,

On-demand Secure Circuits and Advance Reservation System (OSCARS) has evolved tremendously since its conception as a DOE funded project to ESnet back in 2004. Since then, it has grown from a research project to a collaborative open-source software project with production deployments in several R&E networks including ESnet and Internet2. In the latest release of OSCARS as version 0.6, the software was redesigned to flexibly accommodate both research and production needs. It is being used currently by several research projects to study path computation algorithms, and demonstrate multi-layer circuit management. Just recently, OSCARS 0.6 was leveraged to support production level bandwidth management in the ESnet ANI 100G prototype network, SCinet at SC11 in Seattle, and the Internet2 DYNES project. This presentation will highlight the evolution of OSCARS, activities surrounding OSCARS v0.6 and lessons learned, and share with the community the roadmap for future development that will be discussed within the open-source collaboration.

Joe Breen, Eli Dart, Eric Pouyoul, Brian Tierney, Achieving a Science "DMZ", Winter 2012 Joint Techs, Full day tutorial, January 22, 2012,

There are several aspects to building successful infrastructure to support data intensive science. The Science DMZ Model incorporates three key components into a cohesive whole: a high-performance network architecture designed for ease of use; well-configured systems for data transfer; and measurement hosts to provide visibility and rapid fault isolation. This tutorial will cover aspects of network architecture and network device configuration, the design and configuration of a Data Transfer Node, and the deployment of perfSONAR in the Science DMZ. Aspects of current deployments will also be discussed.

2011

Zurawski J., Boyd E., Lehman T., McKee S., Mughal A., Newman H., Sheldon P, Wolff S., and Yang X., “Scientific data movement enabled by the DYNES instrument”, Proceedings of the first international workshop on Network-aware data management (NDM ’11), Seattle WA, USA, November 2011,

Steve Cotter, ANI details leading to ESnet5, ESCC, Summer 2011, July 13, 2011,

C. Guok, OSCARS, GENI Project Office Meeting, May 2011,

William E. Johnston, Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service, TERENA Networking Conference, 16 - 19 May, 2011, Prague, Czech Republic, May 16, 2011,

Tom Lehman, Xi Yang, Nasir Ghani, Feng Gu, Chin Guok, Inder Monga, and Brian Tierney, “Multilayer Networks: An Architecture Framework”, IEEE Communications Magazine, May 9, 2011,

Neal Charbonneau, Vinod M. Vokkarane, Chin Guok, Inder Monga, “Advance Reservation Frameworks in Hybrid IP-WDM Networks”, IEEE Communications Magazine, May 9, 2011, vol. 59, pp. 132-139,

Steve Cotter, Early Lessons Learned Deploying a 100Gbps Network, Enterprise Innovation Symposium in Atlanta, May 4, 2011,

Inder Monga, Chin Guok, William E. Johnston, and Brian Tierney, “Hybrid Networks: Lessons Learned and Future Challenges Based on ESnet4 Experience”, IEEE Communications Magazine, May 1, 2011,

W.E. Johnston, C. Guok, J. Metzger, B. Tierney, Network Services for High Performance Distributed Computing and Data Management, The Second International Conference on Parallel, Distributed, Grid, and Cloud Computing for Engineering, Ajaccio - Corsica - France, April 12, 2011,

Eli Dart, “ESnet Requirements Workshops Summary for Sites”, ESCC Meeting, Clemson, SC, February 2, 2011,

Brian Tierney, ANI Testbed Project Update, Winter 2011 Joint Techs, Clemson, SC, February 2, 2011,

Steve Cotter, ESnet Update, Winter 2011 Joint Techs Clemson, SC, February 2, 2011,

Eli Dart, The Science DMZ, Winter 2011 Joint Techs, February 1, 2011,

Joe Metzger, DICE Diagnostic Service, Joint Techs - Clemson, South Carolina, January 27, 2011,

Eli Dart, Lauren Rotman, Brian Tierney, editors, “Nuclear Physics Network Requirements Workshop, August 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5518E

Eli Dart, Brian Tierney, editors, “Fusion Energy Network Requirements Workshop, December 2011 - Final Report”, ESnet Network Requirements Workshop, January 1, 2011, LBNL LBNL-5905E

2010

Chaitanya S. K. Vadrevu, Massimo Tornatore, Chin P. Guok, Inder Monga, A Heuristic for Combined Protection of IP Services and Wavelength Services in Optical WDM Networks, IEEE ANTS 2010, December 2010,

Joe Metzger, editor, “General Service Description for DICE Network Diagnostic Services”, December 1, 2010,

Chris Tracy, Introduction to OpenFlow: Bringing Experimental Protocols to a Network Near You, NANOG50 Conference, Atlanta, Oct. 4, 2010, October 4, 2010,

Chris Tracy, Eli Dart, Science DMZs: Understanding their role in high-performance data transfers, CANS 2010, September 20, 2010,

Kevin Oberman, IPv6 Implementation at a Network Service Provider, 2010 Inter Agency IPv6 Information Exchange, August 4, 2010,

Joe Metzger, PerfSONAR Update, ESCC Meeting, July 15, 2010,



Evangelos Chaniotakis, Virtual Circuits Landscape, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Jon Dugan, Using Graphite to Visualize Network Data, ESCC Meeting, Columbus, Ohio, July 15, 2010,

Eli Dart, High Performance Data Transfer, Joint Techs, Summer 2010, July 15, 2010,

Kevin Oberman, Future DNSSEC Directions, ESCC Meeting, Columbus, Ohio, July 15, 2010,

C. Guok, OSCARS Roadmap, OGF 28; DICE Control Plane WG, May 2010,

Steve Cotter, ESnet Update, ESCC Meeting, Salt Lake City, Utah, February 3, 2010,

Jon Dugan, Network Monitoring and Visualization at ESnet, Joint Techs, Salt Lake City, Utah, February 3, 2010,

Kevin Oberman, IPv6 SNMP Network Management, Joint Techs, Salt Lake City, Utah, February 3, 2010,

http://events.internet2.edu/2010/jt-slc/

Steve Cotter, ESnet Update, Joint Techs, Salt Lake City, Utah, February 2, 2010,

Kevin Oberman, DNSSEC Implementation at ESnet, Joint Techs, Salt Lake City, Utah, February 2, 2010,

C. Guok, I. Monga, Composable Network Service Framework, ESCC, February 2010,

Swany M., Portnoi M., Zurawski J., “Information services algorithm to heuristically summarize IP addresses for distributed, hierarchical directory”, 11th IEEE/ACM International Conference on Grid Computing (Grid2010), 2010,

Eli Dart, Brian Tierney, editors, “Basic Energy Sciences Network Requirements Workshop, September 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4363E

Office of Basic Energy Sciences, DOE Office of Science; Energy Sciences Network; Gaithersburg, MD — September 22 and 23, 2010

Participants and Contributors; Alan Biocca, LBNL (Advanced Light Source); Rich Carlson, DOE/SC/ASCR (Program Manager); Jackie Chen, SNL/CA (Chemistry/Combustion); Steve Cotter, ESnet (Networking); Eli Dart, ESnet (Networking); Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager); Jim Davenport, DOE/SC/BES (BES Program); Alexander Gaenko, Ames Lab (Chemistry); Paul Kent, ORNL (Materials Science, Simulations); Monica Lamm, Ames Lab (Computational Chemistry); Stephen Miller, ORNL (Spallation Neutron Source); Chris Mundy, PNNL (Chemical Physics); Thomas Ndousse, DOE/SC/ASCR (ASCR Program); Mark Pederson, DOE/SC/BES (BES Program); Amedeo Perazzo, SLAC (Linac Coherent Light Source); Razvan Popescu, BNL (National Synchrotron Light Source); Damian Rouson, SNL/CA (Chemistry/Combustion); Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager); Bobby Sumpter, ORNL (Computer Science and Mathematics and Center for Nanophase; Materials Sciences); Brian Tierney, ESnet (Networking); Cai-Zhuang Wang, Ames Lab (Computer Science/Simulations); Steve Whitelam, LBNL (Molecular Foundry); Jason Zurawski, Internet2 (Networking)

Eli Dart, Brian Tierney, editors, “Biological and Environmental Research Network Requirements Workshop, April 2010 - Final Report”, ESnet Network Requirements Workshop, January 1, 2010, LBNL LBNL-4089E

Office of Biological and Environmental Research, DOE Office of Science Energy Sciences Network Rockville, MD — April 29 and 30, 2010. This is LBNL report LBNL-4089E.

Participants and Contributors: Kiran Alapaty, DOE/SC/BER (Atmospheric System Research) Ben Allen, LANL (Bioinformatics) Greg Bell, ESnet (Networking) David Benton, GLBRC/University of Wisconsin (Informatics) Tom Brettin, ORNL (Bioinformatics) Shane Canon, NERSC (Data Systems) Rich Carlson, DOE/SC/ASCR (Network Research) Steve Cotter, ESnet (Networking) Silvia Crivelli, LBNL (JBEI) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ESnet Program Manager) Narayan Desai, ANL (Networking) Richard Egan, ANL (ARM) Jeff Flick, NOAA (Networking) Ken Goodwin, PSC/NLR (Networking) Susan Gregurick, DOE/SC/BER (Computational Biology) Susan Hicks, ORNL (Networking) Bill Johnston, ESnet (Networking) Bert de Jong, PNNL (EMSL/HPC) Kerstin Kleese van Dam, PNNL (Data Management) Miron Livny, University of Wisconsin (Open Science Grid) Victor Markowitz, LBNL/JGI (Genomics) Jim McGraw, LLNL (HPC/Climate) Raymond McCord, ORNL (ARM) Chris Oehmen, PNNL (Bioinformatics/ScalaBLAST) Kevin Regimbal, PNNL (Networking/HPC) Galen Shipman, ORNL (ESG/Climate) Gary Strand, NCAR (Climate) Brian Tierney, ESnet (Networking) Susan Turnbull, DOE/SC/ASCR (Collaboratories, Middleware) Dean Williams, LLNL (ESG/Climate) Jason Zurawski, Internet2 (Networking)  

Editors: Eli Dart, ESnet; Brian Tierney, ESnet

2009

William E Johnston, Progress in Integrating Networks with Service Oriented Architectures / Grids. The Evolution of ESnet's Guaranteed Bandwidth Service, Cracow ’09 Grid Workshop, October 12, 2009,

“HEP (High Energy Physics) Network Requirements Workshop, August 2009 - Final Report”, ESnet Network Requirements Workshop, August 27, 2009, LBNL LBNL-3397E

Office of High Energy Physics, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-3397E.

Participants and Contributors: Jon Bakken, FNAL (LHC/CMS) Artur Barczyk, Caltech (LHC/Networking) Alan Blatecky, NSF (NSF Cyberinfrastructure) Amber Boehnlein, DOE/SC/HEP (HEP Program Office) Rich Carlson, Internet2 (Networking) Sergei Chekanov, ANL (LHC/ATLAS) Steve Cotter, ESnet (Networking) Les Cottrell, SLAC (Networking) Glen Crawford, DOE/SC/HEP (HEP Program Office) Matt Crawford, FNAL (Networking/Storage) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Michael Ernst, BNL (HEP/LHC/ATLAS) Ian Fisk, FNAL (LHC/CMS) Rob Gardner, University of Chicago (HEP/LHC/ATLAS) Bill Johnston, ESnet (Networking) Steve Kent, FNAL (Astroparticle) Stephan Lammel, FNAL (FNAL Experiments and Facilities) Stewart Loken, LBNL (HEP) Joe Metzger, ESnet (Networking) Richard Mount, SLAC (HEP) Thomas Ndousse-Fetter, DOE/SC/ASCR (Network Research) Harvey Newman, Caltech (HEP/LHC/Networking) Jennifer Schopf, NSF (NSF Cyberinfrastructure) Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager) Alan Stone, DOE/SC/HEP (HEP Program Office) Brian Tierney, ESnet (Networking) Craig Tull, LBNL (Daya Bay) Jason Zurawski, Internet2 (Networking)

 

William Johnston, Energy Sciences Network Enabling Virtual Science, TERENA Conference, Malaga, Spain, July 9, 2009,

William E Johnston, The Evolution of Research and Education Networks and their Essential Role in Modern Science, TERENA Conference, Malaga, Spain, June 9, 2009,

“ASCR (Advanced Scientific Computing Research) Network Requirements Workshop, April 2009 - Final Report”, ESnet Networking Requirements Workshop, April 15, 2009, LBNL LBNL-2495E

Office of Advanced Scientific Computing Research, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-2495E.

Participants and Contributors: Bill Allcock, ANL (ALCF, GridFTP) Rich Carlson, Internet2 (Networking) Steve Cotter, ESnet (Networking) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Brent Draney, NERSC (Networking and Security) Richard Gerber, NERSC (User Services) Mike Helm, ESnet (DOEGrids/PKI) Jason Hick, NERSC (Storage) Susan Hicks, ORNL (Networking) Scott Klasky, ORNL (OLCF Applications) Miron Livny, University of Wisconsin Madison (OSG) Barney Maccabe, ORNL (Computer Science) Colin Morgan, NOAA (Networking) Sue Morss, DOE/SC/ASCR (ASCR Program Office) Lucy Nowell, DOE/SC/ASCR (SciDAC) Don Petravick, FNAL (HEP Program Office) Jim Rogers, ORNL (OLCF) Yukiko Sekine, DOE/SC/ASCR (NERSC Program Manager) Alex Sim, LBNL (Storage Middleware) Brian Tierney, ESnet (Networking) Susan Turnbull, DOE/SC/ASCR (Collaboratories/Middleware) Dean Williams, LLNL (ESG/Climate) Linda Winkler, ANL (Networking) Frank Wuerthwein, UC San Diego (OSG)

C. Guok, ESnet OSCARS, DOE Joint Engineering Taskforce, February 2009,

Tierney B., Metzger J., Boote J., Brown A., Zekauskas M., Zurawski J., Swany M., Grigoriev M., “perfSONAR: Instantiating a Global Network Measurement Framework”, 4th Workshop on Real Overlays and Distributed Systems (ROADS’09) Co-located with the 22nd ACM Symposium on Operating Systems Principles (SOSP), January 1, 2009, LBNL LBNL-1452

Swany M., Brown A., Zurawski J., “A General Encoding Framework for Representing Network Measurement and Topology Data”, Concurrency and Computation: Practice and Experience, 2009, 21:1069-1086,

Grigoriev M., Boote J., Boyd E., Brown A., Metzger J., DeMar P., Swany M., Tierney B., Zekauskas M., Zurawski J., “Deploying distributed network monitoring mesh for LHC Tier-1 and Tier-2 sites”, 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2009), January 1, 2009,

2008

William Johnston, Evangelos Chaniotakis, Eli Dart, Chin Guok, Joe Metzger, Brian Tierney, “The Evolution of Research and Education Networks and their Essential Role in Modern Science”, Trends in High Performance & Large Scale Computing, (November 1, 2008)

Published in: "Trends in High Performance & Large Scale Computing" Lucio Grandinetti and Gerhard Joubert, Editors

Chin Guok, David Robertson, Evangelos Chaniotakis, Mary Thompson, William Johnston, Brian Tierney, A User Driven Dynamic Circuit Network Implementation, IEEE DANMS 2008, November 2008,

William Johnston, ESnet Planning, Status, and Future Issues, ASCAC Meeting, August 1, 2008,

A. Baranovski, K. Beattie, S. Bharathi, J. Boverhof, J. Bresnahan, A. Chervenak, I. Foster, T. Freeman, D. Gunter, K. Keahey, C. Kesselman, R. Kettimuthu, N. Leroy, M. Link, M. Livny, R. Madduri, G. Oleynik, L. Pearlman, R. Schuler, and B. Tierney, “Enabling Petascale Science: Data Management, Troubleshooting and Scalable Science Services”, Proceedings of SciDAC 2008,, July 1, 2008,

“NP (Nuclear Physics) Network Requirements Workshop, May 2008 - Final Report”, ESnet Network Requirements Workshop, May 6, 2008, LBNL LBNL-1289E

Nuclear Physics Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD. LBNL-1289E.

Participants and Contributors: Rich Carlson, Internet2 (Networking) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Michael Ernst, BNL (RHIC) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Andy Kowalski, JLAB (Networking) Jerome Lauret, BNL (STAR at RHIC) Charles Maguire, Vanderbilt (LHC CMS Heavy Ion) Douglas Olson, LBNL (STAR at RHIC and ALICE at LHC) Martin Purschke, BNL (PHENIX at RHIC) Gulshan Rai, DOE/SC (NP Program Office) Brian Tierney, ESnet (Networking) Chip Watson, JLAB (CEBAF) Carla Vale, BNL (PHENIX at RHIC)

William E Johnston, ESnet4: Networking for the Future of DOE Science, Office of Science, Science Programs Requirements Workshops: Nuclear Physics, May 1, 2008,

“FES (Fusion Energy Sciences ) Network Requirements Workshop, March 2008 - Final Report”, ESnet Network Requirements Workshop, March 13, 2008, LBNL LBNL-644E.

Fusion Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Gaithersburg, MD. LBNL-644E.

Participants and Contributors: Rich Carlson, Internet2 (Networking) Tom Casper, LLNL (Fusion – LLNL) Dan Ciarlette, ORNL (ITER) Eli Dart, ESnet (Networking) Vince Dattoria, DOE/SC/ASCR (ASCR Program Office) Bill Dorland, University of Maryland (Fusion – Computation) Martin Greenwald, MIT (Fusion – Alcator C-Mod) Paul Henderson, PPPL (Fusion – PPPL Networking, PPPL) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) Ihor Holod, UC Irvine (Fusion – Computation, SciDAC) William Johnston, ESnet (Networking) Scott Klasky, ORNL (Fusion – Computation, SciDAC) John Mandrekas, DOE/SC (FES Program Office) Doug McCune, PPPL (Fusion – TRANSP user community, PPPL) Thomas NDousse, DOE/SC/ASCR (ASCR Program Office) Ravi Samtaney, PPPL (Fusion – Computation, SciDAC) David Schissel, General Atomics (Fusion – DIII-D, Collaboratories) Yukiko Sekine, DOE/SC/ASCR (ASCR Program Office), Sveta Shasharina, Tech-X Corporation (Fusion – Computation) Brian Tierney, LBNL (Networking)

William Johnston, Joe Metzger, Mike O'Connor, Michael Collins, Joseph Burrescia, Eli Dart, Jim Gagliardi, Chin Guok, Kevin Oberman, “Network Communication as a Service-Oriented Capability”, High Performance Computing and Grids in Action, Volume 16, Advances in Parallel Computing, (March 1, 2008)

ABSTRACT    
In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as “services” – they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service – it is provided, in general, as a “best effort” capability with no guarantees and only statistical predictability.

In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science – e.g., the multi-disciplinary simulation of next generation climate modeling and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) – networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures.

In this paper we describe ESnet's approach to each of these – an approach that is part of an international community effort to have intra-distributed system communication be based on a service-oriented capability.

Kevin Oberman, The Gathering Storm: The Coming Internet Crisis, Joint Techs, Honolulu, Hawaii, January 21, 2008,

Joseph Burrescia, ESnet Update, Joint Techs, Honolulu, Hawaii, January 21, 2008,

J. Zurawski, D. Wang, “Fault-tolerance schemes for clusterheads in clustered mesh networks”, International Journal of Parallel, Emergent and Distributed Systems, 2008, 23:271-287,

Chin P. Guok, Jason R. Lee, Karlo Berket, “Improving The Bulk Data Transfer Experience”, International Journal of Internet Protocol Technology, 2008, Vol. 3, No. 1, pp. 46-53, January 1, 2008,

2007

C. Guok, Impact of ESnet OSCARS and Collaborative Projects, SC07, November 2007,

Joe Metzger, ESnet4: Networking for the Future of DOE Science, ICFA International Workshop on Digital Divide, October 25, 2007,

Dan Gunter, Brian L. Tierney, Aaron Brown, Martin Swany, John Bresnahan, Jennifer M. Schopf, “Log Summarization and Anomaly Detection for Troubleshooting Distributed Systems”, Proceedings of the 8th IEEE/ACM International Conference on Grid Computing, September 19, 2007,

“BER (Biological and Environmental Research) Network Requirements Workshop, July 2007 - Final Report”, ESnet Network Requirements Workshop, July 26, 2007,

Biological and Environmental Research Program Office, DOE Office of Science Energy Sciences Network Bethesda, MD – July 26 and 27, 2007. LBNL/PUB-988.

Participants and Contributors: Dave Bader, LLNL (Climate) Raymond Bair, ANL (Comp Bio) Anjuli Bamzai, DOE/SC BER Paul Bayer, DOE/SC BER David Bernholdt, ORNL (Earth System Grid) Lawrence Buja, NCAR (Climate) Alice Cialella, BNL (ARM Data) Eli Dart, ESnet (Networking) Eric Davis, LLNL (Climate) Bert DeJong, PNNL (EMSL) Dick Eagan, ANL (ARM) Yakov Golder, JGI (Comp Bio) Dave Goodwin, DOE/SC ASCR Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Phil Jones, LANL (Climate) Raymond McCord, ORNL (ARM) Steve Meacham, NSF George Michaels, PNNL (Comp Bio) Kevin Regimbal, PNNL (EMSL) Mike Sayre, NIH Harris Shapiro, LBNL (JGI) Ellen Stechel, ASCAC Brian Tierney, LBNL (Networking) Lee Tsengdar, NASA (Geosciences) Mike Wehner, LBNL (Climate) Trey White, ORNL (Climate)

William E Johnston, “ESnet: Advanced Networking for Science”, SciDAC Review, July 1, 2007,

“BES (Basic Energy Sciences) Network Requirements Workshop, June 2007 - Final Report”, ESnet Network Requirements Workshop, June 4, 2007,

Basic Energy Sciences Program Office, DOE Office of Science Energy Sciences Network Washington, DC – June 4 and 5, 2007. LBNL/PUB-981.

Participants and Contributors: Dohn Arms, ANL (Advanced Photon Source) Anjuli Bamzai, DOE/SC/BER (BER Program Office) Alan Biocca, LBNL (Advanced Light Source) Jackie Chen, SNL (Combustion Research Facility) Eli Dart, ESnet (Networking) Bert DeJong, PNNL (Chemistry) Paul Domagala, ANL (Computing and Information Systems) Yiping Feng, SLAC (LCLS/LUSI) David Goodwin, DOE/SC/ASCR (ASCR Program Office) Bruce Harmon, Ames Lab (Materials Science) Robert Harrison, UT/ORNL (Chemistry) Richard Hilderbrandt, DOE/SC/BES (BES Program Office) Daniel Hitchcock, DOE/SC/ASCR (ASCR Program Office) William Johnston, ESnet (Networking) Roger Klaffky, DOE/SC/BES (BES Program Office) Michael McGuigan, BNL (Center for Functional Nanomaterials) Stephen Miller, ORNL (Spallation Neutron Source) Richard Mount, SLAC (Linac Coherent Light Source) Jeff Neaton, LBNL (Molecular Foundry) Larry Rahn, SNL/BES (Combustion) Thomas Schulthess, ORNL (CNMS) Ken Sidorowicz, ANL (Advanced Photon Source) Ellen Stechel, SNL (ASCAC) Brian Tierney, LBNL (Networking) Linda Winkler, ANL (Networking) Zhijian Yin, BNL (National Synchrotron Light Source)

“Measurements On Hybrid Dedicated Bandwidth Connections”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

William Johnston, “The Advanced Networks and Services Underpinning Modern, Large-Scale Science”, SciDAC Review Paper, May 1, 2007,

Tom Lehman, Xi Yang, Chin P. Guok, Nageswara S. V. Rao, Andy Lake, John Vollbrecht, Nasir Ghani, “Control Plane Architecture and Design Considerations for Multi-Service, Multi-Layer, Multi-Domain Hybrid Networks”, INFOCOM 2007, IEEE (TCHSN/ONTC), May 1, 2007,

William E Johnston, ESnet4 - Networking for the Future of DOE Science: High Energy Physics / LHC Networking, ON*Vector Workshop, February 26, 2007,

2006

Chin Guok, David Robertson, Mary Thompson, Jason Lee, Brian Tierney and William Johnston, “Intra and Interdomain Circuit Provisioning Using the OSCARS Reservation System”, Third International Conference on Broadband Communications, Networks, and Systems, IEEE/ICST, October 1, 2006,

Eli Dart, editor, “Science-Driven Network Requirements for ESnet: Update to the 2002 Office of Science Networking Requirements Workshop Report - February 21, 2006”, ESnet Networking Requirements Workshop, February 21, 2006,

Update to the 2002 Office of Science Networking Requirements Workshop Report, February 21, 2006. LBNL report LBNL-61832.

Contributors: Paul Adams, LBNL (Advanced Light Source); Shane Canon, ORNL (NLCF); Steven Carter, ORNL (NLCF); Brent Draney, LBNL (NERSC); Martin Greenwald, MIT (Magnetic Fusion Energy); Jason Hodges, ORNL (Spallation Neutron Source); Jerome Lauret, BNL (Nuclear Physics); George Michaels, PNNL (Bioinformatics); Larry Rahn, SNL (Chemistry); David Schissel, GA (Magnetic Fusion Energy); Gary Strand, NCAR (Climate Science); Howard Walter, LBNL (NERSC); Michael Wehner, LBNL (Climate Science); Dean Williams, LLNL (Climate Science).

ESnet On-demand Secure Circuits and Advance Reservation System (OSCARS), Google invited talk; Advanced Networking for Distributed Petascale Science Workshop; IEEE GridNets; QUILT Fall Fiber Workshop, 2006 and 2008,

Zurawski, J., Swany M., and Gunter D., “A Scalable Framework for Representation and Exchange of Network Measurements”, 2nd International IEEE/Create-Net Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TridentCom 2006), Barcelona, Spain, 2006,

2005

Hanemann A., Boote J., Boyd E., Durand J., Kudarimoti L., Lapacz R., Swany M., Trocha S., and Zurawski J., “PerfSONAR: A Service-Oriented Architecture for Multi-Domain Network Monitoring”, International Conference on Service Oriented Computing (ICSOC 2005), Amsterdam, The Netherlands, 2005,

Zurawski J., Swany M., Beck M. and Ding Y., “Logistical Multicast for Data Distribution”, Proceedings of CCGrid, Workshop on Grids and Advanced Networks 2005 (GAN05), Cardiff, Wales, 2005,

Wang D., Zurawski J., “Fault-Tolerance Schemes for Hierarchical Mesh Networks”, The 6th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT 2005), Dalian, China, 2005,

2004

Gunter D., Leupolt M., Tierney B., Swany M. and Zurawski J., “A Framework for the Representation of Network Measurements”, LBNL Technical Report, 2004,

2003

“DOE Science Networking Challenge: Roadmap to 2008 - Report of the June 3-5, 2003, DOE Science Networking Workshop”, DOE Science Networking Workshop, June 3, 2003,

Report of the June 3-5, 2003, DOE Science Networking Workshop Conducted by the Energy Sciences Network Steering Committee at the request of the Office of Advanced Scientific Computing Research of the U.S. Department of Energy Office of Science.

Workshop Chair: Roy Whitney; Working Group Chairs: Wu-chun Feng, William Johnston, Nagi Rao, David Schissel, Vicky White, Dean Williams; Workshop Support: Sandra Klepec, Edward May; Report Editors: Roy Whitney, Larry Price; Energy Sciences Network Steering Committee: Larry Price (Chair), Charles Catlett, Greg Chartrand, Al Geist, Martin Greenwald, James Leighton, Raymond McCord, Richard Mount, Jeff Nichols, T.P. Straatsma, Alan Turnbull, Chip Watson, William Wing, Nestor Zaluzec.

2002

“High-Performance Networks for High-Impact Science”, High-Performance Network Planning Workshop, August 13, 2002,

Report of the High-Performance Network Planning Workshop. Conducted by the Office of Science, U.S. Department of Energy, August 13-15, 2002.

Participants and Contributors: Deb Agarwal, LBNL; Guy Almes, Internet2; Bill St. Arnaud, CANARIE, Inc.; Ray Bair, PNNL; Arthur Bland, ORNL; Javad Boroumand, Cisco; William Bradley, BNL; James Bury, AT&T; Charlie Catlett, ANL; Daniel Ciarlette, ORNL; Tim Clifford, Level 3; Carl Cork, LBL; Les Cottrell, SLAC; David Dixon, PNNL; Tom Dunnigan, Oak Ridge; Aaron Falk, USC/Information Sciences Inst.; Ian Foster, ANL; Dennis Gannon, Indiana Univ.; Jason Hodges, ORNL; Ron Johnson, Univ. of Washington; Bill Johnston, LBNL; Gerald Johnston, PNNL; Wesley Kaplow, Qwest; Dale Koelling, US Department of Energy; Bill Kramer, LBNL/NERSC; Maxim Kowalski, JLab; Jim Leighton, LBNL/ESnet; Phil LoCascio, ORNL; Mari Maeda, NSF; Matthew Mathis, Pittsburgh Supercomputing Center; William (Buff) Miner, US Department of Energy; Sandy Merola, LBNL; Thomas Ndousse-Fetter, US Department of Energy; Harvey Newman, Caltech; Peter O'Neil, NCAR; James Peppin, USC/Information Sciences Institute; Arnold Peskin, BNL; Walter Polansky, US Department of Energy; Larry Rahn, SNL; Anne Richeson, Qwest; Corby Schmitz, ANL; Thomas Schulthess, ORNL; George Seweryniak, US Department of Energy; David Schissel, General Atomics; Mary Anne Scott, US Department of Energy; Karen Sollins, MIT; Warren Strand, UCAR; Brian Tierney, LBL; Steven Wallace, Indiana University; James White, ORNL; Vicky White, US Department of Energy; Michael Wilde, ANL; Bill Wing, ORNL; Linda Winkler, ANL; Wu-chun Feng, LANL; Charles C. Young, SLAC.