Topic 13: Accelerator Computing
1 Description
Hardware accelerators of various kinds offer the potential for massive performance gains in applications that can exploit their high degree of parallelism and customization. Examples include graphics processing units (GPUs) and "manycore" devices, as well as more specialized hardware such as customizable FPGA-based systems and streaming dataflow architectures.
The research challenge for this topic is to explore new avenues for realizing this potential in practice. We encourage submissions in all areas related to accelerators: architectures, algorithms, languages, compilers, libraries, runtime systems, coordination of accelerators and CPUs, and debugging and profiling tools. Application-oriented submissions that contribute new insights into fundamental problems or solution approaches in this domain are equally welcome.
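To make the coordination of accelerators and CPUs concrete, the following minimal CUDA sketch shows the typical host-device workflow: the CPU allocates device memory, copies the inputs over, launches a kernel, and copies the result back. The kernel, variable names, and problem size are purely illustrative and not tied to any particular submission.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // The CPU orchestrates: allocate device memory and copy inputs over ...
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // ... launch the kernel on the accelerator ...
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // ... and copy the result back (this copy synchronizes with the kernel).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}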
2 Focus
- New accelerator architectures
- Languages, compilers, and runtime environments for accelerator programming
- Programming techniques for clusters of accelerators
- Tools for debugging, profiling, and optimizing programs on accelerators
- Hybrid and heterogeneous computing with multiple, possibly different, types of accelerators
- Parallel algorithms for accelerators
- Applications benefiting from acceleration
- Models and benchmarks for accelerators
- Manual optimization and auto-tuning
- Library support for accelerators
3 Topic Committee
3.1 Global chair
- Jörg Keller, University of Hagen, Germany
3.2 Local chair
- Andreas Steininger, TU Wien, Austria
3.3 Additional members
- Lee Howes, Qualcomm, USA
- Michael Klemm, Intel, Germany
- Naoya Maruyama, RIKEN, Japan
- Norbert Eicker, Jülich Supercomputing Centre, Germany
- Erik Saule, UNC Charlotte, USA
- Benedict Gaster, University of the West of England, UK