This online workshop gives an overview of working with research data in Python, using general libraries for storing, processing, analysing and sharing data, with a focus on improving performance. After covering tools for performant processing on a single workstation (netCDF, NumPy, pandas, SciPy), the focus shifts to parallel, distributed and GPU computing (Snakemake, Numba, Dask, multiprocessing, mpi4py).
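As a taste of the single-workstation performance topics, here is a minimal sketch (illustrative only, not workshop material) contrasting a pure-Python loop with the equivalent vectorised NumPy operation:

```python
import numpy as np

def square_loop(values):
    # Pure-Python loop: one interpreted operation per element.
    return [v * v for v in values]

def square_vectorised(values):
    # Vectorised version: a single call dispatches to compiled NumPy code.
    arr = np.asarray(values, dtype=float)
    return arr * arr

data = list(range(1_000))
# Both give the same result; the vectorised form is far faster on large arrays.
assert np.allclose(square_loop(data), square_vectorised(data))
```

Timing the two with `%timeit` in IPython on a large input makes the performance gap concrete.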
ENCCS is now joining forces with NordiQuEst to deliver a two-day training workshop covering the fundamentals of quantum computing (QC). The workshop introduces key concepts (quantum states, qubits, quantum algorithms) and QC programming in high-level languages for use cases in optimisation, finance and quantum chemistry, followed by testing quantum programs to ensure their correctness. It also gives an overview of the main QC hardware approaches and of the integration of QC with classical computing: hybrid classical/quantum algorithms and HPC-QC systems.
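To make the notion of quantum states and qubits concrete, here is a minimal sketch, assuming only NumPy (the workshop itself uses dedicated QC frameworks), of a single qubit put into superposition by a Hadamard gate:

```python
import numpy as np

# Computational basis state |0> as a 2-component amplitude vector.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0

# Born rule: measurement probabilities are squared amplitude magnitudes.
probs = np.abs(state) ** 2  # both outcomes approximately equally likely
```

Simulating small circuits this way with plain linear algebra is a common warm-up before moving to a real QC programming framework.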
Quantum molecular modeling of complex molecular systems is an indispensable, integrated component of advanced material design, as such simulations provide microscopic insight into the underlying physical processes. ENCCS and PDC will offer training on the VeloxChem program package, highlighting its efficient use on modern HPC architectures such as the Dardel system at PDC and the pre-exascale supercomputer LUMI, 50% of which is available to academic users in the consortium states, including Sweden and Denmark.
Julia is a modern high-level programming language which is both fast (on par with traditional HPC languages like Fortran and C) and about as easy to write as Python or Matlab. It thus solves the “two-language problem”, which arises when prototype code in a high-level language must be combined with or rewritten in a lower-level language to improve performance. Although Julia is a general-purpose language, many of its features are particularly useful for numerical scientific computation, and a wide range of both domain-specific and general libraries are available for statistics, machine learning and numerical modeling. The language supports parallelisation for both shared-memory and distributed HPC architectures, and native Julia libraries are available for running on GPUs from different vendors.
The Message Passing Interface (MPI) is the go-to technology for developing distributed parallel programs. In this […]