IPPL (Independent Parallel Particle Layer)
Independent Parallel Particle Layer (IPPL) is a performance-portable C++ library for particle-mesh methods. It builds on Kokkos (https://github.com/kokkos/kokkos), HeFFTe (https://github.com/icl-utk-edu/heffte), and MPI (Message Passing Interface) to deliver a portable, massively parallel toolkit. IPPL supports simulations in one to six dimensions, mixed precision, and asynchronous execution in different execution spaces (e.g. CPUs and GPUs).
All IPPL releases prior to 3.2.0 are available under the BSD 3-clause license. Since version 3.2.0, this repository includes a modified version of the GNU variant header, created to support compilation under CUDA 12.2 with GCC 12.3.0. This header file is available under the same terms as the GNU Standard Library, i.e. with the GNU Runtime Library Exception. As long as this file is not removed, IPPL is available under GNU GPL version 3.
All new developments of IPPL are merged into the master branch, which can make it potentially unstable from time to time. If you want a stable, better-tested version, please check out the tag corresponding to the latest release (e.g. `git checkout tags/IPPL-x.x.x`). Otherwise, if you want the latest developments, use master with the above caveat in mind.
IPPL is a CMake project and can be configured by passing options in CMake syntax.
The relevant options of IPPL are:

- `IPPL_PLATFORMS`, can be one of `SERIAL`, `OPENMP`, `CUDA`, `"OPENMP;CUDA"`, default `SERIAL`
- `Kokkos_VERSION`, default `4.1.00`
- `Heffte_VERSION`, default `MASTER`
  - If `MASTER` is used, an additional flag `Heffte_COMMIT_HASH` can be set, default `9eab7c0eb18e86acaccc2b5699b30e85a9e7bdda`
- `ENABLE_SOLVERS`, default `OFF`
- `ENABLE_FFT`, default `OFF`
  - If `ENABLE_FFT` is set, `Heffte_ENABLE_CUDA` will default to `ON` if `IPPL_PLATFORMS` contains cuda; otherwise, `Heffte_ENABLE_AVX2` is enabled. FFTW has to be enabled explicitly: `Heffte_ENABLE_FFTW`, default `OFF`
- `ENABLE_TESTS`, default `OFF`
- `ENABLE_UNIT_TESTS`, default `OFF`
- `ENABLE_ALPINE`, default `OFF`
- `USE_ALTERNATIVE_VARIANT`, default `OFF`. Can be turned on for GPU builds where the use of the system-provided variant doesn't work.

Furthermore, be aware of `CMAKE_BUILD_TYPE`, which can be either:

- `Release` for optimized builds
- `RelWithDebInfo` for optimized builds with debug info (default)
- `Debug` for debug builds (with sanitizers enabled)

Download and set up a build directory:
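For example (a sketch; the repository URL assumes the IPPL-framework GitHub organization):

```shell
# Clone IPPL and create an out-of-source build directory
git clone https://github.com/IPPL-framework/ippl.git
cd ippl
mkdir build
cd build
```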
`[architecture]` should be the target architecture, e.g.

- `PASCAL60`
- `PASCAL61`
- `VOLTA70`
- `VOLTA72`
- `TURING75`
- `AMPERE80` (PSI GWENDOLEN machine)
- `AMPERE86`
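Putting the options above together, a CUDA configure step might look like the following (a sketch; the exact set of flags and the chosen architecture are illustrative):

```shell
# Configure an optimized CUDA build targeting NVIDIA A100 (AMPERE80)
cmake .. \
  -DCMAKE_BUILD_TYPE=Release \
  -DIPPL_PLATFORMS=CUDA \
  -DENABLE_FFT=ON \
  -DENABLE_SOLVERS=ON \
  -DKokkos_ARCH_AMPERE80=ON
```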
We welcome contributions from others. Please open an issue and a corresponding pull request in the main repository if it is a bug fix or a minor change.
For larger projects we recommend forking the main repository and then submitting a pull request from the fork. More information regarding the GitHub workflow for forks can be found in this page, and how to submit a pull request from a fork can be found here. Please follow the coding guidelines as mentioned in this page.
You can add an upstream remote to get the latest changes from master. For example, if you are working with a fork of the main repository, you can add the upstream by:
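A sketch of the command (assuming the main repository lives in the IPPL-framework GitHub organization):

```shell
# Register the main repository as the 'upstream' remote of your fork
git remote add upstream https://github.com/IPPL-framework/ippl.git
```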
You can then pull the latest changes by typing:
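For example (assuming the upstream remote is named `upstream` as above):

```shell
# Fetch and merge the latest changes from the main repository's master branch
git pull upstream master
```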
```bibtex
@inproceedings{muralikrishnan2024scaling,
  title={Scaling and performance portability of the particle-in-cell scheme for plasma physics applications through mini-apps targeting exascale architectures},
  author={Muralikrishnan, Sriramkrishnan and Frey, Matthias and Vinciguerra, Alessandro and Ligotino, Michael and Cerfon, Antoine J and Stoyanov, Miroslav and Gayatri, Rahulkumar and Adelmann, Andreas},
  booktitle={Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP)},
  pages={26--38},
  year={2024},
  organization={SIAM}
}
```
An example SLURM job script for a CPU cluster (here Merlin at PSI, whose CPU nodes have 44 cores):

```shell
#!/bin/bash
#SBATCH --partition=hourly               # Using 'hourly' will grant higher priority
#SBATCH --nodes=1                        # No. of nodes
#SBATCH --ntasks-per-node=1              # No. of MPI ranks per node. Merlin CPU nodes have 44 cores
#SBATCH --cpus-per-task=44               # No. of OMP threads
#SBATCH --time=00:05:00                  # Define max time job will run (e.g. here 5 mins)
#SBATCH --hint=nomultithread             # Without hyperthreading
#SBATCH --output=<output_file_name>.out  # Name of output file
#SBATCH --error=<error_file_name>.err    # Name of error file

export OMP_NUM_THREADS=44
export OMP_PROC_BIND=spread
export OMP_PLACES=threads

srun --cpus-per-task=44 ./<your_executable> <args>
```
An example SLURM job script for the Gwendolen GPU partition:

```shell
#!/bin/bash
#SBATCH --time=00:05:00                  # Define max time job will run (e.g. here 5 mins)
#SBATCH --nodes=1                        # No. of nodes (there is only 1 node on Gwendolen)
#SBATCH --ntasks=4                       # No. of tasks (max. 8)
#SBATCH --clusters=gmerlin6              # Specify that we are running on the GPU cluster
#SBATCH --partition=gwendolen            # Running on the Gwendolen partition of the GPU cluster
#SBATCH --account=gwendolen
#SBATCH --gpus=4                         # No. of GPUs (max. 8)
#SBATCH --output=<output_file_name>.out  # Name of output file
#SBATCH --error=<error_file_name>.err    # Name of error file

srun ./<your_executable> <args> --kokkos-map-device-id-by=mpi_rank
```