SEE WHAT ACCELERATED COMPUTING
CAN DO FOR YOU
At ISC'14, NVIDIA® will be featuring advances in applications and scientific discovery made possible by accelerated computing. We invite you to visit us to see how others in your field are advancing science, and to check out the latest NVIDIA GPU technologies in accelerated computing, including the Tesla Kepler-based GPU Accelerators.
Vendor Showdown Session
Monday, 23 June | 02:05 pm - 02:17 pm | Hall 1
In just one afternoon, 18 leading vendor representatives have a maximum of three slides each to present their company's mission and HPC portfolio, and to make the case for why their products help to meet that mission. They then have to answer four tricky questions from the two moderators.
Monday, 23 June | 04:00 pm - 04:20 pm | Hall 2
Duncan Poole
Manager of Strategic Alliances for GPU Computing, NVIDIA
This talk will focus on the emergence of OpenACC as a viable programming model for accelerated computing.
It will discuss what motivated the creators of the standard and how OpenACC benefits developers in relation to other computing standards. The current OpenACC 2.0 standard will be explored, both in terms of available commercial implementations and open source projects. Lastly, we will give examples showing where OpenACC is demonstrating success, and what challenges must be addressed to gain broader adoption.
Tuesday, 24 June | 10:00 am - 11:00 am | Hall 5
Guido Juckeland, TU Dresden
Duncan Poole, NVIDIA
Will Sawyer, CSCS
Thomas Schulthess, CSCS
Nathan Sidwell, Mentor Graphics
This BoF brings together the European OpenACC user community to discuss recent and future changes to the OpenACC specification. The focus of this BoF is gathering feedback from users on OpenACC 2.0 and discussing upcoming features (such as deep copy and a performance tools API) in the next release.
We also plan to discuss the future road map and, in more detail, the needs of the community to be considered for future releases. OpenACC has gained momentum, with support on competing architectures, highlighted by its inclusion in the highly competitive SPEC benchmark suite. Also represented will be developers working on OpenACC support in GCC and on other open source compiler research projects and tools for the OpenACC standard, where it is seen as a complementary alternative to coding in CUDA or OpenCL.
Advancements in the NVIDIA GPU Ecosystem
Tuesday, 24 June | 02:20 pm - 02:40 pm | Exhibition Hall #660
Computational researchers, scientists, and engineers are rapidly shifting to computing solutions running on GPUs, as these offer significant advantages in performance and energy efficiency. This presentation will give an update on the latest GPU developments from NVIDIA. Furthermore, it will cover the different approaches to programming and using GPUs, with a special focus on new CUDA 6 features.
Wednesday, 25 June | 09:20 am - 09:40 am | Hall 2
Ilari Hänninen
Senior Application Engineer, CST – Computer Simulation Technology
CST offers accurate, efficient computational solutions for electromagnetic design and analysis. Our user-friendly 3D EM simulation software, CST STUDIO SUITE, enables engineers and scientists to choose the most appropriate method for the design and optimization of devices operating across a wide range of frequencies.
In a modern research and design environment, it is no longer sufficient to rely on the CPU processing power of the workstation on the engineer's or scientist's desk. The pressure to finish designs in ever faster cycles puts increasing demands on hardware resources.
CST invests a great deal of work in improving the performance of time-consuming simulations, and has worked with NVIDIA to offer the most powerful GPU acceleration options for high-frequency 3D electromagnetic simulations; the results of this joint effort will be shown in this talk. Using multiple GPUs, it is possible to run simulations of challenging models in a fraction of the time required by CPU-only solutions. The latest NVIDIA Tesla Kepler-series GPUs offer exceptional performance on complicated simulation tasks, which has made the lives of many CST users easier.
If the model under study is extremely large, the resources of a single system, even one with multiple GPUs, might not be sufficient. MPI computing is a way to handle such challenging models. Combining MPI computing with GPU computing can also overcome the memory limit of the GPU hardware for extremely large simulations.
However, dedicated high-performance hardware is a major investment for any user. For smaller companies and research groups, it often does not make sense to buy a cluster and pay for its upkeep when very large simulation projects come along only occasionally. For users who need access to high-performance computing resources on a moderate budget, cloud computing is the way forward. CST STUDIO SUITE is available on the HPC systems of several cloud providers, and the setup will be shown and discussed during this talk as well.
Wednesday, 25 June | 11:30 am - 11:50 am | Hall 2
Dr. Nicola Bienati
Senior Geophysicist, Eni E&P
The oil and gas exploration industry is probably one of the toughest customers for HPC systems and HPC software. To maximize the probability of successfully identifying and accurately localizing hydrocarbon reservoirs, it is necessary to process and analyze very large seismic datasets (tens of terabytes in size) under strict time constraints, using extremely computationally demanding algorithms. The heaviest jobs typically run for tens of days on hundreds of nodes. At the same time, HPC resources have to be accessible in a way that hides the inherent complexity of HPC systems from users, since their skills, know-how, and effort have to remain focused on data processing and data analysis tasks.
On one side, there is therefore the need to maximize throughput to deliver results in a timely manner. This requirement is addressed by working both on hardware (through the adoption of state-of-the-art components) and on software optimization, and also by optimizing the interaction among all components of the HPC infrastructure (computing nodes, network, storage, software stack, and applications). At the same time, complex workflows characterized by close interaction with users must be implemented. In this context, to maximize productivity, aspects such as user-friendliness and fault tolerance have to be addressed on the application side.
In this presentation, we will show how Eni has tackled these issues while moving to a new Petaflop-class computing facility and switching from a standard to an accelerated computing architecture.