Julia for High-Performance Scientific Computing
Julia is a modern high-level programming language which is both fast (on par with traditional HPC languages like Fortran and C) and relatively easy to write like Python or Matlab.
AIDA invites our technical community to an AIDA-day on Zoom where we will dive deep into hot topics in AI training. Our keynote speaker, Veronika Cheplygina, IT University of Copenhagen, Denmark, is invited to present her recent paper on methodological failures in ML for medical imaging and to discuss her work on tackling limited access to labelled data. […]
In this course, you will become familiar with tools and best practices for scientific software development. This course will not teach a programming language; instead, it covers the tools you need to program well and avoid common inefficiency traps.
CMake is a language-agnostic, cross-platform build tool and is nowadays the de facto standard, with large projects using it to reliably build, test, and deploy their codebases.
This workshop targets programmers in both academia and industry who already have experience with basic MPI and are ready to take the next step to more advanced usage. Topics which will be covered include communicators, groups, derived data types, one-sided communication, non-blocking collectives and hybrid MPI+threading approaches. Lectures will be interleaved with hands-on exercises. All […]
ENCCS is now joining forces with NordiQuEst to deliver a two-day training workshop covering the fundamentals of quantum computing (QC). Topics include an introduction to key concepts (quantum states, qubits, and quantum algorithms); QC programming in high-level languages for use cases in optimisation, finance, and quantum chemistry; testing quantum programs to ensure their correctness; an overview of the main QC hardware approaches; and the integration of QC with classical computing, including hybrid classical/quantum algorithms and HPC-QC systems.
This online workshop is meant to give an overview of working with research data in Python using general libraries for storing, processing, analysing and sharing data. The focus is on improving performance. After covering tools for performant processing (netcdf, numpy, pandas, scipy) on single workstations the focus shifts to parallel, distributed and GPU computing (snakemake, numba, dask, multiprocessing, mpi4py).
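As a small illustration of the kind of performance gain the workshop targets, the sketch below (a hypothetical example, not workshop material) compares a plain Python loop with the equivalent vectorised NumPy expression; the two compute the same quantity, but the NumPy version dispatches the arithmetic to compiled code.

```python
# Vectorised NumPy arithmetic vs. a plain Python loop: same result,
# but NumPy performs the whole computation in compiled code.
import numpy as np

def mean_square_loop(values):
    """Reference implementation with an explicit Python loop."""
    total = 0.0
    for v in values:
        total += v * v
    return total / len(values)

def mean_square_numpy(values):
    """Same computation as a single vectorised NumPy expression."""
    a = np.asarray(values, dtype=float)
    return float(np.mean(a * a))

data = list(range(1, 1001))
# Both implementations agree to floating-point precision.
assert abs(mean_square_loop(data) - mean_square_numpy(data)) < 1e-9
```

For larger arrays the vectorised version is typically one to two orders of magnitude faster, which is the motivation for the NumPy/pandas/SciPy portion of the workshop.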
In this workshop, we give an overview of the basics of Docker and Singularity. (Working knowledge of Singularity, as given in the workshop at https://www.uppmax.uu.se/support/courses-and-workshops/singularity-workshop-announcement, is desirable.) Distributed training using the TensorFlow and Horovod frameworks on a supercomputer will be covered. Moreover, it will be shown how to use Singularity containers in conjunction with TensorFlow and Horovod to scale up an AI application.
SYCL is a C++ abstraction layer for programming heterogeneous hardware with a single-source approach. SYCL is high-level, cross-platform, and extends standard ISO C++17. You will learn to:
In this course, you will become familiar with tools and best practices for version control and reproducibility in modern research software development. The main focus is on using Git for efficiently writing and maintaining research software.
In recent years, Graph Neural Networks (GNNs) and Transformers have led to numerous breakthrough achievements in a variety of fields such as Natural Language Processing (NLP), chemistry, and physics. By doing away with the need for fixed-size inputs, these architectures significantly extend the scope of problems in which deep learning can be applied.
Quantum molecular modeling of complex molecular systems is an indispensable and integrated component in advanced material design, as such simulations provide a microscopic insight into the underlying physical processes. ENCCS and PDC will offer training on using the VeloxChem program package. We will highlight its efficient use on modern HPC architectures, such as the Dardel system at PDC and the pre-exascale supercomputer LUMI, 50% of which is available to academic users of the consortium states, including Sweden and Denmark.
The use of Deep Learning has seen a sharp increase of popularity and applicability over the last decade. While Deep Learning can be a useful tool for researchers from a wide range of domains, taking the first steps in the world of Deep Learning can be somewhat intimidating. This introduction aims to cover the basics of Deep Learning in a practical and hands-on manner, so that upon completion, you will be able to train your first neural network and understand what next steps to take to improve the model.
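The training loop that every Deep Learning framework automates can be shown in miniature. The sketch below (an illustrative example, not course material) trains a single artificial neuron by gradient descent in pure Python, recovering the parameters of a known linear function.

```python
# A single artificial neuron trained by gradient descent (pure Python,
# no framework) -- the core loop that Deep Learning libraries automate.

def train_neuron(samples, lr=0.05, epochs=500):
    """Fit y ~ w*x + b by minimising squared error with plain SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b          # forward pass
            err = pred - y            # prediction error
            # Gradient step for the squared error (factor 2 folded into lr).
            w -= lr * err * x
            b -= lr * err
    return w, b

# Noise-free data from y = 2x + 1; the neuron should recover w ~ 2, b ~ 1.
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = train_neuron(data)
```

Real networks stack many such units with non-linear activations, but the pattern of forward pass, error, and gradient update is the same.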
Julia is a modern high-level programming language which is both fast (on par with traditional HPC languages like Fortran and C) and relatively easy to write like Python or Matlab. It thus solves the “two language problem”, i.e. when prototype code in a high-level language needs to be combined with or rewritten in a lower-level language to improve performance. Although Julia is a general purpose language, many of its features are particularly useful for numerical scientific computation, and a wide range of both domain-specific and general libraries are available for statistics, machine learning and numerical modeling. The language supports parallelisation for both shared-memory and distributed HPC architectures, and native Julia libraries are available for running on GPUs from different vendors.
This course gives advanced practical tips on how to run GROMACS MD simulations efficiently on modern hardware including both CPUs and GPUs. In addition to speeding up MD simulations, also workflow automation, advanced sampling techniques, and future developments are discussed. The course consists of lectures and hands-on exercises. GROMACS will be used in the exercise sessions.
This workshop will take you from the representation of graphs and finite sets as inputs for neural networks to the implementation of full GNNs for a variety of tasks. You will learn about the central concepts used in GNNs in a hands-on setting using Jupyter Notebooks and a series of coding exercises. While the workshop will use problems from the field of chemistry as an example for applications, the skills you learn can be transferred to any domain where finite set or graph-based representations of data are appropriate. From GNNs, we will make the leap to Transformer architectures, and explain the conceptual ties between the two.
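The central GNN update can be sketched in a few lines. The example below (a simplified illustration, not workshop code) performs one round of sum-aggregation message passing on a tiny undirected graph with scalar node features.

```python
# One round of sum-aggregation message passing on a tiny undirected
# graph -- the basic update at the heart of many GNN layers.

def message_pass(features, edges):
    """Each node's new feature = its own feature + sum of its
    neighbours' features (one propagation step)."""
    new = dict(features)
    for u, v in edges:
        new[u] += features[v]
        new[v] += features[u]
    return new

# Triangle graph 0-1-2 with scalar node features.
feats = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 1), (1, 2), (0, 2)]
updated = message_pass(feats, edges)
# Node 0 aggregates its own feature plus those of nodes 1 and 2.
```

A full GNN layer adds learned weights and a non-linearity to this aggregation, and stacking layers lets information flow across longer paths in the graph.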
ENCCS, together with RISE, offers a half-day course on AI as a tool for change. Participants will gain an understanding of what artificial intelligence is, which problems can be solved with these techniques, and how they can be used within an organisation.
We will give an overview of the process one goes through when using AI to solve problems, illuminate the limitations of these techniques, and give examples of when they are appropriate.
This workshop will cover all foundational aspects of OpenFOAM, including an introduction to the OpenFOAM environment as well as running on HPC resources. It will be useful for new users who want to broaden their basic knowledge of OpenFOAM.
GPU hackathons offer a unique opportunity for domain scientists and research software engineers to accelerate and optimize their applications on GPUs. Teams of researchers are paired with experienced GPU mentors to learn and apply the accelerated and parallel computing skills needed by the scientific community. Both current and prospective users of large hybrid CPU/GPU HPC clusters who develop applications that could benefit from GPU acceleration are encouraged to participate!