Presentations and Demos
Booth #558

Distributed Volume Visualization for Surgical Treatment Planning and Education
This presentation will illustrate how our medical imaging system has been, and continues to be, successfully used in the distributed teaching of college-level virtual anatomy. We will discuss the benefits of dynamic stereo visualization of Computed Tomography (CT) data of complex three-dimensional anatomical structures in this learning context. Also to be covered: our research into surgical preoperative planning techniques that allow liver transplant surgeons to interactively manipulate stereo CT data of donor patients to help determine a donor liver resection strategy.
Presenter: Fred Dech
Date & Time: Tuesday, 10:00 am - 10:30 am, 12:00 pm - 1:00 pm, 4:00 pm - 5:00 pm; Wednesday, 12:00 pm - 1:00 pm, 3:00 pm - 3:30 pm; Thursday, 12:00 pm - 1:00 pm, 2:00 pm - 2:30 pm

Falkon, a Fast and Light-weight tasK executiON framework for Clusters, Grids, and Supercomputers
To enable the rapid execution of many tasks on compute clusters, Grids, and supercomputers, we have developed Falkon, a Fast and Light-weight tasK executiON framework. Falkon integrates (1) dynamic resource provisioning – multi-level scheduling techniques that enable the separate treatment of resource provisioning and the dispatch of user tasks to those resources; (2) streamlined task dispatching – which achieves orders-of-magnitude higher task dispatch rates than conventional schedulers; and (3) data diffusion – which performs data caching and uses a data-aware scheduler to leverage co-located computational and storage resources and minimize the use of shared storage. Falkon’s integration of multi-level scheduling and streamlined dispatchers delivers performance not provided by any other system. Falkon has been a Globus Incubator project since 2007.
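As a rough illustration of the multi-level scheduling idea only (a minimal Python sketch, not Falkon's actual API; the class and worker names are invented for this example), resource provisioning is handled separately from the streaming of tasks onto the already-acquired workers:

    # Minimal sketch of multi-level scheduling: provisioning worker slots is
    # decoupled from dispatching tasks onto those slots. Not Falkon code; all
    # names here are illustrative.
    import queue


    class Provisioner:
        """Acquires worker slots independently of task dispatch."""

        def __init__(self):
            self.idle_workers = queue.Queue()

        def acquire(self, n):
            # Falkon would request nodes from a batch scheduler here; this
            # sketch simply registers n worker identifiers.
            for i in range(n):
                self.idle_workers.put(f"worker-{i}")


    class Dispatcher:
        """Streams tasks to already-provisioned workers, avoiding per-task
        batch-scheduler overhead."""

        def __init__(self, provisioner):
            self.provisioner = provisioner

        def run(self, tasks):
            results = []
            for task in tasks:
                worker = self.provisioner.idle_workers.get()  # wait for a free slot
                results.append(task())                        # "execute" on that slot
                self.provisioner.idle_workers.put(worker)     # slot becomes idle again
            return results


    if __name__ == "__main__":
        prov = Provisioner()
        prov.acquire(4)                                          # provision once
        disp = Dispatcher(prov)
        print(disp.run([lambda i=i: i * i for i in range(10)]))  # dispatch many tasks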
Presenter: Ioan Raicu
Date & Time:

GLOSS: Collaborative Tagging for Scientific Data
GLOSS (Generalized Labels Over Scientific data Source) is a collaborative tagging platform for online resources. There are three main differences between general collaborative tagging systems (such as Connotea, Del.icio.us, and StumbleUpon) and GLOSS. First, our system can operate on multiple levels of abstraction (variables, sets of variables, surveys, sections), whereas each of the general systems focuses on only one type of object (web page, article, workflow). Second, our system engages both data producers and data users, while current systems are predominantly user driven and do not allow for direct data-producer participation. Third, GLOSS is integrated into and operates within the web pages of the online data sources, and thus is open to everyone who visits these sources; typical online tagging systems instead require users to visit the tagging site itself in order to discover and share tags. In my talk, I will give an overview of the distinguishing features of GLOSS, discuss some web programming tricks, show GLOSS in action, and do my best to convince you to glossify your online data.
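To make the multiple-levels-of-abstraction point concrete, here is a minimal, hypothetical Python sketch (not GLOSS's actual data model or API; the survey and variable identifiers are invented) of a tag store that accepts labels from both producers and users at the survey, section, and variable levels:

    # Hypothetical sketch only: a tag store keyed by (abstraction level,
    # object id), recording which contributor attached which label. Not GLOSS code.
    from collections import defaultdict

    LEVELS = ("survey", "section", "variable", "variable-set")


    class TagStore:
        def __init__(self):
            # (level, object_id) -> {tag: set of contributors}
            self.tags = defaultdict(lambda: defaultdict(set))

        def add(self, level, object_id, tag, contributor):
            if level not in LEVELS:
                raise ValueError(f"unknown abstraction level: {level}")
            self.tags[(level, object_id)][tag].add(contributor)

        def lookup(self, level, object_id):
            return {t: sorted(c) for t, c in self.tags[(level, object_id)].items()}


    store = TagStore()
    store.add("survey", "survey-2006", "social attitudes", "producer:archive")
    store.add("variable", "survey-2006/income", "household income", "user:alice")
    print(store.lookup("variable", "survey-2006/income"))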
Presenter: Svetlozar Nestorov
Date & Time: Tuesday, 12:30 pm - 1:00 pm; Wednesday, 10:00 am - 10:30 am

Grid Enabling the Great Plains Network
The Great Plains Network (GPN) is a consortium of universities in the Midwestern states dedicated to supporting research and education through the use of advanced networking technology. In this talk we demonstrate the Entitlement Service, a capability for dynamically establishing trust relationships between collaborators at different universities. We also describe plans to incorporate the Grid Security Infrastructure (GSI) into the Entitlement Service, thereby enabling researchers with credentials from other grids to access capabilities on the GPN.
Presenter: Dan Fraser
Date & Time: Wednesday, 5:00 pm - 5:30 pm

I/O with ZeptoOS Linux on IBM Blue Gene/P
We will outline the design and implementation of ZOID - the ZeptoOS I/O Daemon - an alternative I/O forwarding infrastructure for IBM Blue Gene developed at Argonne. ZOID provides a high-performance, extensible, open-source infrastructure that is significantly more flexible than the standard infrastructure.
Presenter: Kamil Iskra
Date & Time: Thursday, 2:30 pm - 3:00 pm

Methods and Challenges Scaling FLASH to Petascale Computation
Using Intrepid, the Argonne Leadership Computing Facility's (ALCF) 163,840-core Blue Gene/P, the FLASH group has obtained preliminary results for a fundamental physical process in the modeling of thermonuclear supernovae: the degree to which buoyancy-driven turbulence enhances the burning rate of the nuclear flame during the initial deflagration phase of the explosion. We describe the challenges and successes in optimizing this large-scale adaptive mesh simulation, focusing on the memory needs and load balancing of the application and the steps we took to improve single-CPU performance. We present scaling results up to 131,072 cores, analyze the scaling properties, and discuss how these results are guiding future efforts to enhance performance.
Presenter: Katherine Riley
Date & Time: Tuesday, 1:30 pm - 2:00 pm

An Overview of the Special PRiority and Urgent Computing Environment (SPRUCE)
This presentation describes the SPRUCE architecture, its current state, and areas of current and future research.
Demo: This session consists of demonstrations of several SPRUCE capabilities, including interactions with the SPRUCE portals, how users have integrated SPRUCE into their own workflows, urgent computing resource selection, and Condor integration.
Presenters: Nick Trebon and Jason Cope
Date & Time: Tuesday, 3:00 pm - 3:30 pm; Wednesday, 1:00 pm - 2:00 pm

Quantum Chromodynamics on a Lattice: Approaching the Physical Limit
I will give a brief overview of some of the objectives of Lattice QCD research and the challenges of simulating this fundamental theory as it is found in nature.
Presenter: James Osborn
Date & Time: Wednesday, 10:30 am - 11:00 am

The Quest for Scalable Support of Data Intensive Applications in Distributed Systems
Data intensive applications involving the analysis of large datasets often require large amounts of compute and storage resources; if these are distributed resources, data locality can be crucial to high throughput and performance. We propose a “data diffusion” approach that acquires compute and storage resources dynamically, replicates data in response to demand, and schedules computations close to data. As demand increases, more resources are acquired, thus allowing faster response to subsequent requests that refer to the same data; when demand drops, resources are released. This approach can provide the benefits of dedicated hardware without the associated high costs, depending on workload and resource characteristics.
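As a purely illustrative sketch of the idea (minimal Python, not the actual data diffusion implementation; node and dataset names are invented), a data-aware scheduler prefers a node that already caches a task's input and acquires a new node only on a cache miss:

    # Illustrative sketch only: prefer nodes that already cache the requested
    # data; otherwise acquire a node and stage the data from shared storage.
    class Node:
        def __init__(self, name):
            self.name = name
            self.cache = set()          # data objects held in local storage


    class DataAwareScheduler:
        def __init__(self, max_nodes=8):
            self.nodes = []
            self.max_nodes = max_nodes

        def acquire_node(self):
            node = Node(f"node-{len(self.nodes)}")
            self.nodes.append(node)
            return node

        def schedule(self, data_id):
            # Cache hit: run the computation where the data already lives.
            for node in self.nodes:
                if data_id in node.cache:
                    return node
            # Cache miss: acquire another node if allowed, else reuse one,
            # and pull the data from shared storage into its local cache.
            node = self.acquire_node() if len(self.nodes) < self.max_nodes else self.nodes[0]
            node.cache.add(data_id)
            return node


    sched = DataAwareScheduler()
    print(sched.schedule("dataset-A").name)   # miss: data staged onto node-0
    print(sched.schedule("dataset-A").name)   # hit: node-0 reused, shared storage avoided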
Presenter: Ioan Raicu
Date & Time: Thursday, 11:30 am - 12:00 pm

Scalable Tools Communication Infrastructure
The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process-control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This presentation will describe STCI's design philosophy, the components that will be used to provide STCI implementations for a range of ultrascale platforms, and the tool types it is intended to support.
Presenter: Darius Buntinas
Date & Time: Tuesday, 2:00 pm - 2:30 pm; Wednesday, 2:00 pm - 2:30 pm; Thursday, 2:00 pm - 2:30 pm

Swift Parallel Scripting System
The Swift parallel scripting system (www.ci.uchicago.edu/swift) enables users to execute ordinary application programs in highly parallel workflows on clusters, grids, and petascale supercomputers. This demonstration will introduce the tool and illustrate its programming model.
Presenter: Mike Wilde

TotalView on IBM Blue Gene
History: BG/L support, scale, customers, etc. Recent: BG/P support, threads, shared libraries, and collaboration with IBM to produce an operational debug interface for multi-threaded applications. Now: the LLNL subset attach project and TotalView feature improvements for jobs at scale. Future: BG/Q support and investigations into many-core, transactional memory, and speculative execution. Also covered: some strategies for debugging large jobs.
Presenter: John DelSignore
Date & Time: Wednesday, 11:00 am - 11:30 am

Parallel scripting on the ALCF BG/P: Enabling diverse science on petascale systems
Conventional wisdom holds that the only reasonable way to program a petascale system is in a tightly coupled manner, using message-passing models or a hybrid multithreading/message-passing approach. However, many applications can readily – and rapidly – benefit from petascale systems such as the ALCF BG/P by using a more flexible, easier-to-develop programming model: loosely coupling existing application programs through simple, compact, easy-to-write scripts. Using an innovative software stack composed of the ZeptoOS operating system, the Falkon lightweight task dispatcher, and the Swift parallel scripting language, we show how to efficiently harness the BG/P for applications in computational biology, neuroscience, and biochemistry with simple, flexible, powerful scripts.
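To give a rough feel for the loosely coupled style (a minimal sketch in ordinary Python rather than Swift syntax; the application name "my_app" and the "inputs/" directory are placeholders, not part of the actual software stack), a short script fans an unmodified program out over many input files:

    # Minimal sketch of loose coupling: many independent invocations of an
    # existing, unmodified application program, one per input file. The binary
    # name and paths below are placeholders.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path


    def run_app(input_file: Path) -> int:
        """Run one instance of the application on one input file."""
        out_file = input_file.with_suffix(".out")
        return subprocess.call(["my_app", str(input_file), str(out_file)])


    if __name__ == "__main__":
        inputs = sorted(Path("inputs").glob("*.dat"))
        # On the BG/P, a dispatcher such as Falkon would fan these tasks out
        # across compute nodes; here a local thread pool stands in for it.
        with ThreadPoolExecutor(max_workers=8) as pool:
            exit_codes = list(pool.map(run_app, inputs))
        sys.exit(max(exit_codes, default=0))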
Presenter: Mike Wilde

Toward an OpenSocial Life Science Gateway
This presentation introduces the service-oriented framework of the Open Life Science Gateway and describes our efforts to develop OpenSocial gadgets for running bioinformatics analysis tools on TeraGrid resources.
Presenter: Wenjun Wu
Date & Time: Tuesday, 12:00 pm - 12:30 pm

ZeptoOS Linux on IBM Blue Gene/P
We present the flat memory architecture we have recently implemented, which makes user programs' memory operations up to 5x faster and allows MPI to run under Linux on the Blue Gene/P compute nodes.
Presenter: Kazutomo Yoshii
Date & Time: Tuesday, 2:30 pm - 3:00 pm; Wednesday, 2:30 pm - 3:00 pm