Summary
A global collaboration of AI researchers has been granted access to the EuroHPC JU supercomputer Leonardo, enabling the rapid development and deployment of adaptive AI models with HPC.
Organisations involved
The research group consists of AI researchers, professors, scientists, and engineers from leading institutions in industry and academia, including KTH Royal Institute of Technology, University of Bologna, ETH Zurich, Technical University of Munich (TUM), University of Toronto, Silo AI (now AMD), King, Google, Univrses, and others.
Technical/scientific challenge
The group set out to create adaptive AI models that can dynamically and quickly adjust to the environments they encounter. This is a challenge they had been wrestling with for years, but this time they decided to approach it from a completely different angle. Usually, the adaptation process is highly computationally expensive and difficult to interpret because it involves test-time training: the network keeps retraining itself on new inputs, using older versions of itself as "teachers". This makes the process slow, hard to understand, and tricky to control. The group was convinced there had to be a better way to make AI models adapt more efficiently and transparently.
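To make that baseline concrete, here is a minimal sketch of a test-time-training loop in the self-teaching style described above, assuming PyTorch; all function and variable names are illustrative, not the group's actual code:

```python
# Illustrative sketch of the test-time-training baseline: the deployed
# model keeps retraining on incoming (unlabeled) batches, using an EMA
# copy of its own older weights as the "teacher". Names are hypothetical.
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, stream, lr=1e-4, ema=0.999):
    teacher = copy.deepcopy(model).eval()           # frozen older self
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for batch in stream:                            # deployment-time inputs
        with torch.no_grad():
            pseudo = teacher(batch).softmax(dim=-1) # teacher pseudo-labels
        loss = F.kl_div(model(batch).log_softmax(dim=-1),
                        pseudo, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                       # fold student into teacher
            for pt, ps in zip(teacher.parameters(), model.parameters()):
                pt.mul_(ema).add_(ps, alpha=1 - ema)
    return model
```

Every deployed input triggers a gradient update here, which is exactly why this style of adaptation is expensive, hard to interpret, and hard to control.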
Proposed solution
Starting from scratch, the group deliberately set aside even its own innovative in-house research on online and real-time adaptation. The goal was to design a solution that naturally fits the needs of industry, with a focus on high controllability, interpretability, transparency, and scalability.
So, the group came up with the idea of generating a large library of fine-tuned adapters (think of them as plug-and-play weights) that the system can autonomously pick and merge according to the input it receives. In this way, the system builds an ad-hoc expert tailored to every possible deployment scenario, without continuous retraining or complex adaptation processes.
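As a hedged illustration of the idea (not the group's actual implementation), the sketch below assumes adapters stored as flat weight-delta vectors, each tagged with a key embedding of the domain it was fine-tuned on; selection is a similarity-weighted fusion of the top-k matches. All class and method names are hypothetical:

```python
# Minimal sketch of the select-and-merge idea, assuming LoRA-style
# adapters flattened to numpy weight deltas. All names hypothetical.
import numpy as np

class AdapterLibrary:
    def __init__(self):
        self.names, self.keys, self.deltas = [], [], []

    def add(self, name, key_embedding, weight_delta):
        """Register a fine-tuned adapter with a domain key embedding."""
        self.names.append(name)
        self.keys.append(key_embedding / np.linalg.norm(key_embedding))
        self.deltas.append(weight_delta)

    def select_and_merge(self, input_embedding, top_k=3):
        """Pick the top-k most relevant adapters and fuse them via a
        similarity-weighted average of their weight deltas."""
        q = input_embedding / np.linalg.norm(input_embedding)
        sims = np.array([k @ q for k in self.keys])
        idx = sims.argsort()[-top_k:]               # best-matching adapters
        w = np.exp(sims[idx])
        w /= w.sum()                                # softmax over scores
        merged = sum(wi * self.deltas[i] for wi, i in zip(w, idx))
        chosen = [(self.names[i], float(wi)) for i, wi in zip(idx, w)]
        return merged, chosen                       # fused delta + provenance
```

At inference time, the merged delta is added to the backbone weights, and the returned (name, weight) pairs provide the provenance that makes each prediction interpretable.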
Access to HPC resources allows the group to generate and manage this extensive library of adapters, perform large-scale experiments, and optimize the selection and merging algorithms.

Business impact
This innovation could be transformative for AI research and industry. It shifts the problem away from continuously retraining large backbones from scratch on ever more data, or adapting to new domains through sophisticated processes, both of which are slow and often impractical. The approach makes it possible to cover potentially any new scenario simply by adding small, targeted pieces of knowledge where needed, without negatively impacting existing results.
This is a game changer for many businesses, providing a scalable and efficient way to deploy AI systems that adapt in real time to changing environments.
Benefits
Let’s take a practical example: autonomous driving. Despite the abundance of data, vehicles continuously encounter new and challenging deployment scenarios, such as new countries, weather conditions, lighting conditions, road infrastructures, and every possible combination of these. It is often simply impossible to cover all cases exhaustively. Even when one tries, trade-offs emerge between fine-tuning on a specific environment and maintaining good generalization. Moreover, every new batch of data requires retraining everything, making the process slow and expensive and the results uncertain.
The system introduces a new paradigm: the AI model accesses a library of experts, picks the most relevant ones for each input, fuses them, and plugs them into the prediction model.
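Continuing the hypothetical sketch from the proposed-solution section, inference could look like the following; `embed_fn` and `forward_fn` are assumed helpers (not part of any real API) that compute an input embedding and run the backbone with the given weights:

```python
# Illustrative only: fuse the selected experts into the backbone for
# one prediction. Assumes flat weight vectors as in AdapterLibrary.
def predict(backbone_weights, x, lib, embed_fn, forward_fn, top_k=3):
    merged_delta, chosen = lib.select_and_merge(embed_fn(x), top_k=top_k)
    expert_weights = backbone_weights + merged_delta  # ad-hoc expert
    prediction = forward_fn(expert_weights, x)
    return prediction, chosen  # chosen records which adapters contributed
```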

This creates an ad-hoc expert for every single possible deployment. Moreover, this system enables the creation of marketplace dynamics, where these expert adapters (weights) can be shared between users. Contributors can be rewarded based on how much their adapters are utilized by the system, promoting collaboration and continuous improvement.
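A minimal sketch of how such reward attribution could work, assuming every prediction reports the (adapter name, fusion weight) pairs it fused; again, all names are hypothetical:

```python
# Hedged sketch of marketplace accounting: each prediction logs which
# adapters were fused and with what weight, and contributor rewards are
# attributed proportionally. Names are illustrative.
from collections import defaultdict

class UsageLedger:
    def __init__(self):
        self.credit = defaultdict(float)  # adapter name -> accumulated credit

    def record(self, chosen):
        """chosen: list of (adapter_name, fusion_weight) pairs."""
        for name, weight in chosen:
            self.credit[name] += weight

    def payout_shares(self):
        """Each contributor's share of the total recorded usage."""
        total = sum(self.credit.values()) or 1.0
        return {name: c / total for name, c in self.credit.items()}
```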
- Time and Cost Savings: Eliminating the continuous retraining of large models saves computational costs and significantly speeds up deployment times.
- Product Optimization: The system enhances AI model performance by allowing it to adapt dynamically to new scenarios using the library of expert adapters, ensuring optimal performance in diverse conditions.
- Scalability and Flexibility: Easily accommodates new deployment scenarios—such as different countries, weather conditions, lighting, and infrastructure—by adding targeted adapters without affecting existing capabilities.
- Data Privacy and Interpretability: Since only the adapter weights are shared, data privacy is protected. The system is also highly interpretable: one always knows which adapters contribute to each prediction, which helps identify gaps in the library or poor-quality adapters that need improvement.
Get supercomputer access for your projects for free!
Are you a company with limited resources but big ideas? Get supercomputer access for your R&D and never think of computing power as an issue again. Europe has 8 operational supercomputers and multiple access calls your company can use. Get supercomputer access now!