High Performance Computing (HPC) Technical Committee
A warm welcome to the High-Performance Computing (HPC) Technical Committee's wiki page!
Commitment
As part of the OpenFOAM Governance structure, the HPC Technical Committee's commitment is to work together with the community to overcome the current HPC bottlenecks of OpenFOAM, and to demonstrate the improvements in performance and scalability needed to move from today's near-petascale towards pre- and exascale class performance. An important part of the work is formulating code-improvement proposals and recommendations to the Steering Committee, which makes the final decision. Here you can find information about us, our current activities and how to get in touch.
Members of the Committee
This is the list of the current active committee members, led by Ivan Spisso, who acts as chairman. The members represent a well-balanced and geographically distributed mix of the Release and Maintenance Authority (ESI-OpenCFD, Wikki), HPC experts (CINECA, KAUST, ORNL, Shimizu Corp.), hardware OEMs (NVIDIA, Intel, ARM), HPC system integrators (E4, AWS) and domain-specific experts (FM Global, GM).
A brief description follows:
Chair: Ivan Spisso, HPC specialist for academic and industrial CFD applications [1], SuperComputing Applications and Innovation (SCAI) Department, CINECA (Italy)
Mark Olesen: Principal Engineer, ESI-OpenCFD (Germany)
Simone Bnà: HPC developer, SuperComputing Applications and Innovation (SCAI) Department, CINECA (Italy)
Henrik Rusche, Wikki Ltd. (Germany)
Fabrizio Magugliani Strategic Planning and Business, E4 (Italy)
Michael Klemm Senior Field Application Engineer at AMD & Chief Executive Officer at OpenMP ARB (Germany)
Giacomo Rossi Application Engineer, Intel (Italy)
Oliver Perks, Staff Field Application Engineer, ARM (UK)
Stan Posey, CFD Domain Worldwide HPC Program Manager, and Filippo Spiga, EMEA HPC Developer Relations, NVIDIA (US/UK)
Neil Ashton, Principal CFD Specialist, Amazon Web Services, (UK)
Solal Amouyal, HPC Application Engineer, Huawei Tel-Aviv Research Center (Israel)
William F. Godoy, Scientific Data Group, Oak Ridge National Lab (US)
Pham Van Phuc, Senior Researcher, Institute of Technology, Shimizu Corporation (Japan)
Stefano Zampini, Research Scientist, Extreme Computing Research Center, KAUST (Saudi Arabia); member of the PETSc development team
Oluwayemisi Oluwole, Lead Research Scientist, Fire Dynamics Group, FM Global (USA)
Moududur Rahman and Raman Bansal, HPC SW Innovation Group, General Motors (USA)
How to contact us
To contact the committee, a dedicated email address [2] has been set up. This alias automatically forwards incoming requests to all current members of the committee. The chairman is responsible for processing incoming emails and providing an appropriate and timely answer.
Our Workflow
The Committee meets twice a year:
- One physical meeting, at the annual ESI OpenFOAM Conference (typically in October)
- One virtual meeting, held six months after the physical one (around April)
In between the meetings, we carry out the planned activities and ongoing common projects, and keep in touch via online collaboration tools.
Remits of the Committee
The Committee makes recommendations to the Steering Committee in the HPC technical area. Its remit is to:
- Work together with the community to overcome the current HPC bottlenecks of OpenFOAM, to name a few:
- Scalability of linear solvers
- Adapt/modify data structures for SpMV (Sparse Matrix-Vector Multiply) to enable vectorization/hybridization (see the sketch after this list)
- Improve memory access on new architectures
- Improve memory bandwidth
- Porting to new and emerging technologies
- Parallel pre- and post-processing, parallel I/O
- Load balancing
- In-situ Visualization
- Strong co-design approach
- Identify algorithm improvements to enhance HPC scalability
- Interaction with other Technical Committees (Numerics, Documentation, etc.)
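To make the SpMV item above concrete: OpenFOAM's lduMatrix stores off-diagonal coefficients via owner/neighbour addressing, whereas many vectorization-friendly kernels assume a row-compressed (CSR) layout with contiguous per-row access. The following is a minimal sketch of a CSR sparse matrix-vector multiply; the CsrMatrix type and its field names are purely illustrative and are not OpenFOAM classes.

<syntaxhighlight lang="cpp">
// Minimal CSR sparse matrix-vector multiply: y = A*x.
// Purely illustrative; CsrMatrix is NOT an OpenFOAM class.
#include <cstddef>
#include <vector>

struct CsrMatrix
{
    std::size_t nRows;
    std::vector<std::size_t> rowStart;  // size nRows+1: start of each row in col/val
    std::vector<std::size_t> col;       // column index of each non-zero
    std::vector<double>      val;       // value of each non-zero
};

void spmv(const CsrMatrix& A, const std::vector<double>& x, std::vector<double>& y)
{
    for (std::size_t i = 0; i < A.nRows; ++i)
    {
        double sum = 0.0;

        // Contiguous access over the non-zeros of row i: amenable to compiler
        // vectorization, unlike scattered owner/neighbour updates.
        for (std::size_t k = A.rowStart[i]; k < A.rowStart[i + 1]; ++k)
        {
            sum += A.val[k] * x[A.col[k]];
        }
        y[i] = sum;
    }
}
</syntaxhighlight>

Whether a CSR-style (or hybrid) layout actually pays off for OpenFOAM depends on the matrix structure and the target architecture, which is precisely what the co-design and benchmarking remits above are meant to evaluate.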
Priorities
The current priorities with respect to the aforementioned remits are:
- Improve scalability of linear algebra solvers
- HPC Benchmarks
- GPU enabling of OpenFOAM
- Parallel I/O
Tasks
- OpenFOAM HPC Benchmark Project (Recent Presentation at the 8th OpenFOAM Virtual Conference)
- HPC Performance Improvements for OpenFOAM linear solvers: PRACE Preparatory Access Project. News: the white paper is publicly available
- GPU enabling of OpenFOAM: i) acceleration using PETSc4FOAM, ii) AmgX GPU solver development
Activity Log
(Reverse chronological order)
- 2020-10-13/15 Several members present relevant work at the HPC session of the 8th OpenFOAM Virtual Conference.
- 2020-01-14 Submitted a proposal to the EuroHPC-03-2019 call: Industrial software codes for extreme-scale computing environments and applications
- 2019-10-17 Committee Meeting held at the 7th OpenFOAM Conference in Berlin.
- 2019-10-15/16 Several members present relevant work at the 7th OpenFOAM Conference in Berlin, in the HPC session chaired by Ivan Spisso.
- 2019-04-17 Committee officially ratified by the Steering Committee.
Repository
The code repository for the HPC Technical Committee is an open and shared repository with HPC-relevant data sets and terms of reference. Work in progress!
Code contributions
The code contributions of the HPC TC are available as regular modules:
- Parallel I/O with OpenFOAM and ADIOS2: since the release of OpenFOAM v1912, the adiosWrite function object has been rewritten to use the parallel I/O capabilities of ADIOS2
- A collection of visualization interfaces for OpenFOAM, primarily VTK/ParaView based.
- PETSc4FOAM library: a library for OpenFOAM that provides a solver interface for embedding PETSc and its external dependencies (e.g. Hypre) into arbitrary OpenFOAM simulations (see the sketch below)
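For orientation, here is a bare-bones sketch of the kind of PETSc solve (conjugate gradient preconditioned by Hypre) that PETSc4FOAM delegates to. It uses the plain PETSc C API on a toy 1-D Laplacian and is not PETSc4FOAM code; within an OpenFOAM case the solver and preconditioner are instead selected through the usual fvSolution dictionaries, for which the module's own documentation gives the exact keywords.

<syntaxhighlight lang="cpp">
// Toy standalone PETSc solve: CG + Hypre preconditioner on a 1-D Laplacian.
// Plain PETSc API for illustration only; PETSc4FOAM hides this behind
// OpenFOAM's solver interface. Error checking omitted for brevity.
// Requires a PETSc build configured with Hypre.
#include <petscksp.h>

int main(int argc, char** argv)
{
    PetscInitialize(&argc, &argv, nullptr, nullptr);

    const PetscInt n = 100;

    // Assemble a simple tridiagonal (1-D Laplacian) matrix.
    Mat A;
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);

    PetscInt rStart, rEnd;
    MatGetOwnershipRange(A, &rStart, &rEnd);
    for (PetscInt i = rStart; i < rEnd; ++i)
    {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    Vec x, b;
    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);

    // Krylov solver: conjugate gradient with a Hypre preconditioner.
    KSP ksp;
    PC  pc;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetType(ksp, KSPCG);
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCHYPRE);
    KSPSetFromOptions(ksp);   // allow -ksp_type/-pc_type overrides at run time
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
}
</syntaxhighlight>

Keeping the PETSc calls behind an OpenFOAM solver interface is what allows back-ends such as Hypre, and in the planned work below GPU-enabled ones, to be swapped without changing the application code.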
Planned / Future activities
- First Italian OpenFOAM User Meeting (around November 2020, TBD)
- Parallel I/O tests at scale with ADIOS2
- Include GPU support in the PETSc4FOAM library