About the Event

Much has been said about LLMs, AI agents, and artificial intelligence. Unfortunately, much of it is oversimplified, exaggerated, or simply misunderstood. 

At some point in every engineer’s journey, treating a tool as a black box is no longer enough. To use it effectively, you need to understand how it actually works under the hood.

During this session, we’ll explore what it truly means for a language model to be a probabilistic machine.

We will discuss:

- What LLMs are actually doing most of the time 
- Why they require so many GPUs 
- Where hallucinations and nondeterminism come from 
- How embeddings and attention mechanisms work 
- What jailbreaking is — and why it works 
- A high-level architecture overview of agent systems 
- Myths around agents, RAG, MCP, and more 

We won’t debate whether LLMs can “think” or are “conscious.” That’s for philosophers. 

This session is about engineering — understanding how model architecture shapes behavior and why many “intelligent” outputs are direct consequences of statistical design rather than awareness or intent. 

If you use LLMs in your daily work, this talk is for you. 

Speaker

Tomasz Ducin - Co-Founder of DeveloperJutra.pl, Trainer at Bottega IT Minds, MVP 

Event details:

Date: 12 March 
Time: 12:00 PM CDT | 7:00 PM UA | 6:00 PM PL | 11:00 AM MX | 11:00 AM CR | 2:00 PM AR 
Duration: 1.5 hours 
Language: English   
All other details will be sent after registration.   

Registration is free and mandatory.