Notes from DeepLearn 2025, Porto — a week on the frontier of deep neural networks

One of our researchers spent the last week of July at DeepLearn 2025 in Porto, Portugal, hosted at Universidade da Maia (UMAIA). Framed as a summer school rather than a classic conference, the program packed ~40 hours of lectures, labs, and a running hackathon into five intensive days focused on AI and deep neural networks, with a strong tilt toward large language models (LLMs), efficiency, and safety.

What stood out

Hands-on from minute one. The first in-person hackathon session was wall-to-wall energy: mixed backgrounds, shared docs open, and ideas turning into scrappy prototypes fast. The competition ran in a hybrid format through the end of July, with problems framed the way they show up in the wild, across science and the humanities. Certificates were nice; the real payoff was the tight feedback loop with mentors that leveled up each iteration.

The lectures clicked because they bridged foundations with deployment. We took notes on efficient LLM algorithms and serving strategies for real-time inference, the kind of detail that saves both GPU hours and user patience. The healthcare tracks stood out too: multimodal and generative models pushing into drug discovery, clinical decision support, and patient-specific digital twins. Adaptation talks made a strong case for test-time updates that keep models useful on “new and different” data without full retraining. Safety conversations grounded transformer capabilities in long-term risk frameworks, and low-rank approaches showed a credible path to scaling without melting the budget (or the planet). On the modeling front, graph-aware transformers and non-Euclidean geometry broadened how we think about structure in foundation models.
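
To make that low-rank efficiency argument concrete, here is a minimal, illustrative sketch of the idea behind adapters in the LoRA family: freeze the pretrained weight matrix and train two small factors whose product perturbs it. The dimensions, variable names, and NumPy framing below are our own choices for illustration, not material from the lectures.

```python
# Minimal sketch of the low-rank adaptation idea (LoRA-style); illustrative only.
import numpy as np

d, r = 4096, 8                    # hidden size and adaptation rank (made-up, typical-looking values)
W = np.random.randn(d, d)         # frozen pretrained weight
A = np.random.randn(r, d) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable low-rank factor, zero-init so W is unchanged at the start

def adapted_forward(x):
    """Forward pass through the adapted layer: x @ (W + B @ A).T, computed without forming B @ A."""
    return x @ W.T + (x @ A.T) @ B.T

full_params = d * d               # parameters touched by full fine-tuning of this one layer
lora_params = 2 * d * r           # parameters touched by the low-rank adapter
print(f"full fine-tune: {full_params:,} params | low-rank adapter: {lora_params:,} params "
      f"({full_params // lora_params}x fewer)")
```

With these made-up numbers, the adapter trains roughly 256x fewer parameters than full fine-tuning of the same layer, which is the budget (and planet) argument in a nutshell.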

Community, coffee, and Porto

The open sessions were a great sampler: MRI report generation with LLMs, code repair with compact models, Netflix K-content dynamics, and early looks at hallucination and knowledge conflict. The Local Group Presentation was a personal highlight: Portugal-based work on brain decoding with geometric deep learning, and on vehicle routing that actually matches operational reality. In between, the hallway track did its thing, with whiteboard sketches over espresso, recruiters trading notes with PhDs, and a few of us discovering francesinha for the first time.

Takeaways

  1. Efficiency is strategy. Methods like low-rank adaptation and smarter serving are no longer “nice to have”; they determine which ideas make it into production.
  2. Healthcare is a proving ground. Multimodal LLMs and digital twins are moving from papers to pilots.
  3. Adaptivity matters. Test-time and continual adaptation feel essential for models living in the wild.
  4. Local ecosystems thrive. The Portuguese research community is shipping serious work—great to see on a global stage.

DeepLearn 2025 was more than a summer school; it was a maker’s week for deep learning. We left Porto with new tools for our team to try, collaborators to ping, and a renewed sense that we can build useful, safer systems.
