High Performance Computing (HPC) Technical Committee
A warm welcome to the High-Performance Computing (HPC) Technical Committee's wiki page!
 
==Commitment==

As part of the OpenFOAM Governance structure, the HPC Technical Committee's commitment is to work together with the community to overcome the current HPC bottlenecks of OpenFOAM, and to demonstrate the performance and scalability improvements needed to move from today's near-petascale to pre-exascale and exascale class performance. An important part of the work is formulating code improvement proposals and recommendations to the [https://www.openfoam.com/governance/steering-committee.php Steering Committee], which makes the final decision.
 
Here you can find information about us, our current activities, and how to get in touch.

==Members of the Committee==

This is the list of the current active committee members, led by Ivan Spisso, who acts as chairman.

The members represent a well-balanced and geographically distributed mix of the Release and Maintenance Authority (ESI-OpenCFD, Wikki), HPC experts (CINECA, KAUST, ORNL, Shimizu Corp.), hardware OEMs (NVIDIA, Intel, ARM), HPC system integrators (E4, AWS), and domain-specific experts (FM Global, GM).

A brief description follows:
  
<gallery mode="packed" heights=160px>
File:ivan.jpg| Ivan Spisso, Committee chair
File:Olesen_cropped.jpg | Mark Olesen
File:bna_cropped.jpg | Simone Bna
File:rusche_cropped.jpg | Henrik Rusche
File:rossi.jpg | Giacomo Rossi
File:Magugliani_half.jpg | Fabrizio Magugliani
File:mathieu_amd.jpg | Mathieu Gontier
File:olly.jpg | Oliver Perks
File:stan.jpg | Stan Posey
File:william.jpg | William F. Godoy
File:Pham.jpg | Pham Van Phuc
File:fspiga_cropped.jpg | Filippo Spiga
File:Ashton.jpeg | Neil Ashton
File:Zampini_slim.jpg | Stefano Zampini
File:luwi_cropped.jpg | Oluwayemisi Oluwole
File:gregor_olenik.jpg | Gregor Olenik
File:Wasserman_cropped.JPG | Mark Wasserman
</gallery>
  
'''Chair: Ivan Spisso''', ''Senior HPC specialist for CFD applications'' [https://www.linkedin.com/in/ivan-spisso-324094a/], Leonardo Labs, Leonardo Finmeccanica (Italy)

'''Mark Olesen''', ''Principal Engineer'', ESI-OpenCFD (Germany)

'''Simone Bnà''', ''HPC developer'', SuperComputing Applications and Innovation (SCAI) Department, CINECA (Italy)

'''Henrik Rusche''', Wikki Ltd. (Germany)

'''Fabrizio Magugliani''', ''Strategic Planning and Business'', E4 (Italy)

'''Mathieu Gontier''', ''Field Support Manager, HPC CPU apps'', AMD, Santa Clara, CA (USA)

'''Giacomo Rossi''', ''Application Engineer'', Intel (Italy)

'''Oliver Perks''', Arm (UK)

'''Stan Posey''', ''CFD Domain world-wide HPC Program Manager'', NVIDIA (USA)

'''Filippo Spiga''', ''EMEA HPC Developer Relations'', NVIDIA (UK)

'''Neil Ashton''', ''Principal CFD Specialist'', Amazon Web Services (UK)

'''William F. Godoy''', ''Scientific Data Group'', Oak Ridge National Laboratory (USA)

'''Pham Van Phuc''', ''Senior Researcher'', Institute of Technology, Shimizu Corporation (Japan)

'''Stefano Zampini''', ''Research Scientist'', Extreme Computing Research Center, KAUST (Saudi Arabia); member of the PETSc development team

'''Oluwayemisi Oluwole''', ''Lead Research Scientist'', Fire Dynamics Group, FM Global (USA)

'''Moududur Rahman''' and '''Raman Bansal''', ''HPC SW Innovation Group'', General Motors (USA)

'''Gregor Olenik''', ''Post-doctoral researcher'', Karlsruher Institut für Technologie (KIT), Germany

'''Mark Wasserman''', ''Senior HPC Applications Architect'', Huawei Tel-Aviv Research Center (Israel)
  
 
== How to contact us ==

To contact the committee, a dedicated email address, [mailto:openfoam-hpc-tc@googlegroups.com openfoam-hpc-tc@googlegroups.com], has been set up. This alias automatically forwards incoming requests to all current members of the committee. The chairman is responsible for processing incoming emails and providing an appropriate and timely answer.

==Our Workflow==

The Committee meets twice a year:

* One physical meeting, at the annual ESI OpenFOAM Conference (typically in October)
* One virtual meeting, held six months after the physical one (around April)

In between the meetings, we carry out planned activities and ongoing common projects, keeping in touch via online collaboration tools.

==Remits of the Committee==

The Committee's recommendations to the Steering Committee in respect of the HPC technical area are to:

*Work together with the community to overcome the current HPC bottlenecks of OpenFOAM, to name a few:
**Scalability of linear solvers
**Adapting/modifying data structures for SpMV (sparse matrix-vector multiply) to enable vectorization/hybridization (see the sketch after this list)
**Improving memory access on new architectures
**Improving memory bandwidth
**Porting to new and emerging technologies
**Parallel pre- and post-processing, parallel I/O
**Load balancing
**In-situ visualization
*Pursue a strong co-design approach
*Identify algorithm improvements to enhance HPC scalability
*Interact with other [https://wiki.openfoam.com/Technical_Committees Technical Committees] (Numerics, Documentation, etc.)
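
To make the SpMV remit concrete: OpenFOAM's native matrix storage is the LDU (lower/diagonal/upper) format, whereas vectorization-friendly layouts such as CSR keep each row's coefficients contiguous in memory. The following is a minimal, self-contained sketch of a CSR matrix-vector product in plain C++; it is not OpenFOAM code, and all names are illustrative.

<syntaxhighlight lang="cpp">
// Minimal CSR (compressed sparse row) sparse matrix-vector product.
// NOT OpenFOAM code: OpenFOAM stores matrices in LDU form; layouts like
// this one give the contiguous, unit-stride inner loops that compilers
// and accelerators can vectorize effectively.

#include <cstddef>
#include <iostream>
#include <vector>

struct CsrMatrix
{
    std::size_t nRows{};
    std::vector<std::size_t> rowStart;  // nRows + 1 offsets into values/colIndex
    std::vector<std::size_t> colIndex;  // column index of each non-zero
    std::vector<double> values;         // non-zero coefficients, row by row
};

// y = A*x. Each row's non-zeros are contiguous, so the inner loop is a
// simple streaming dot product that SIMD units handle well.
void spmv(const CsrMatrix& A, const std::vector<double>& x,
          std::vector<double>& y)
{
    for (std::size_t i = 0; i < A.nRows; ++i)
    {
        double sum = 0.0;
        for (std::size_t k = A.rowStart[i]; k < A.rowStart[i + 1]; ++k)
        {
            sum += A.values[k]*x[A.colIndex[k]];
        }
        y[i] = sum;
    }
}

int main()
{
    // A = [[2 1], [0 3]], x = [1 1]  =>  y = [3 3]
    const CsrMatrix A{2, {0, 2, 3}, {0, 1, 1}, {2.0, 1.0, 3.0}};
    const std::vector<double> x{1.0, 1.0};
    std::vector<double> y(2);

    spmv(A, x, y);
    std::cout << y[0] << " " << y[1] << "\n";
    return 0;
}
</syntaxhighlight>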

==Priorities==
The current priorities with respect to the aforementioned remits are:
*Improve scalability of linear algebra solvers
*HPC Benchmarks
*GPU enabling of OpenFOAM
*Parallel I/O

==Tasks==

* OpenFOAM HPC Benchmark Project ([[Media:OpenFOAM 2020 CINECA Spisso.pdf| Recent presentation at the 8th OpenFOAM Virtual Conference]])
* HPC performance improvements for OpenFOAM linear solvers: [https://prace-ri.eu/hpc-access/preparatory-access/preparatory-access-awarded-projects/prace-preparatory-access-cut-off-34/#TypeC PRACE Preparatory Access Project]. '''News''': [https://prace-ri.eu/wp-content/uploads/WP294-PETSc4FOAM-A-Library-to-plug-in-PETSc-into-the-OpenFOAM-Framework.pdf White paper publicly available]
* GPU enabling of OpenFOAM: i) [[Media:OpenFOAM 2020 KAUST Zampini.pdf | Acceleration using PETSc4FOAM]] ii) [[Media:OpenFOAM 2020 NVIDIA Martineau.pdf | AmgX GPU solver development]]

==Activity Log==
(Reverse chronological order)
* 2021-04-01: Kick-off of the EU-funded project [https://exafoam.eu/ exaFOAM]
* 2020-10-13/15: Several members presented relevant work at the HPC session of the [https://www.esi-group.com/company/events/2020/8th-openfoam-conference-2020 8th OpenFOAM Virtual Conference]
* 2020-01-14: Submitted a proposal to the EuroHPC-03-2019 call: [https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-details/eurohpc-03-2019;freeTextSearchKeyword=;typeCodes=1;statusCodes=31094501,31094502,31094503;programCode=H2020;programDivisionCode=null;focusAreaCode=null;crossCuttingPriorityCode=null;callCode=H2020-JTI-EUROHPC-2019-1;sortQuery=openingDate;orderBy=asc;onlyTenders=false;topicListKey=callTopicSearchTableState Industrial software codes for extreme-scale computing environments and applications]
* 2019-10-17: Committee meeting held at the [https://www.esi-group.com/it/lazienda/eventi/2019/7th-openfoam-conference-2019 7th OpenFOAM Conference] in Berlin
* 2019-10-15/16: Several members presented relevant work in the [https://www.esi-group.com/it/lazienda/eventi/2019/7th-openfoam-conference-2019/agenda HPC Session] of the [https://www.esi-group.com/it/lazienda/eventi/2019/7th-openfoam-conference-2019 7th OpenFOAM Conference in Berlin], chaired by Ivan Spisso
* 2019-04-17: Committee officially ratified by the Steering Committee
 
==Repository==
The [https://develop.openfoam.com/committees/hpc code repository] of the HPC Technical Committee is an open and shared repository with HPC-relevant data sets and terms of reference. '''Work in progress!'''

==Code contributions==
The code contributions of the HPC TC are available as regular [https://www.openfoam.com/documentation/guides/latest/api/modules.html modules]:
* Parallel I/O with [https://develop.openfoam.com/Community/adiosfoam OpenFOAM ADIOS2]: since the release of OpenFOAM v1912, the adiosWrite function object has been rewritten to use parallel I/O
* A collection of visualization [https://develop.openfoam.com/modules/visualization interfaces] for OpenFOAM, primarily VTK/ParaView based
* PETSc4FOAM [https://develop.openfoam.com/modules/external-solver library]: a library that plugs PETSc and its external dependencies (e.g. Hypre) into arbitrary OpenFOAM simulations as a linear solver (a configuration sketch follows below)
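
To give a flavour of how the PETSc4FOAM module is wired into a case, below is a rough sketch of a pressure-solver entry in system/fvSolution. It follows the usage described in the PRACE white paper linked under Tasks, but the exact keywords should be verified against the installed module version.

<syntaxhighlight lang="cpp">
// system/fvSolution (sketch only; verify keywords against the module docs)
solvers
{
    p
    {
        solver          petsc;      // linear solver provided by PETSc4FOAM

        petsc
        {
            options
            {
                ksp_type    cg;     // PETSc Krylov method: conjugate gradient
                pc_type     hypre;  // preconditioning via the Hypre dependency
            }
        }

        tolerance       1e-06;
        relTol          0.01;
    }
}
</syntaxhighlight>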

== Job Postings/Opportunities ==

'''GPU Acceleration Pilot'''
* Productisation of the newly developed AmgX4foam external module for GPU acceleration of the linear algebra solvers inside OpenFOAM-vYYMM
* Linear algebra profiling in the standard output
* Proposal deadline: 22 July 2022
* RFQ proposal: [https://wiki.openfoam.com/images/b/b6/RFQ_Proposal_-_GPU_Acceleration_Pilot_-_HPC_TC-public-tender.pdf]

==Planned / Future activities==
*First Italian OpenFOAM User Meeting (around November 2022, TBD)
*Parallel I/O tests at scale with ADIOS2
*Include GPU support in the ''PETSc4FOAM'' library


{{OpenFOAM Governance}}

''last modified {{CURRENTDAY2}} {{CURRENTMONTHNAMEGEN}} {{CURRENTYEAR}}''
