You’ll support the development and integration of open-source large language models (LLMs) within our product stack. Your mission will be to help evaluate, fine-tune, and deploy models like LLaMA, Mistral, and Qwen, adapted to real-world enterprise use cases.

You’ll work closely with our founding team and AI engineers on:
- Selecting and benchmarking open-source LLMs
- Experimenting with inference optimization and fine-tuning (LoRA, QLoRA…)
- Testing model behaviors in production-like conditions
- Contributing to Sagy’s goal of running on fully open and efficient AI infrastructure

This role is perfect for a student passionate about applied AI and looking for hands-on experience with real constraints and a strong technical team.

The ideal candidate:
- Has hands-on experience with Python and frameworks like PyTorch and Hugging Face Transformers
- Has experimented with open-source LLMs (LLaMA, Mistral, Qwen, etc.)
- Understands concepts like fine-tuning, RAG, quantization, or LoRA, or is eager to learn them quickly
- Is curious, autonomous, and comfortable working in a fast-paced startup environment
- Is excited by real-world constraints and building AI that ships
- Speaks French and English fluently

You’ll join a fast-moving AI startup as a core technical contributor. We offer:
- A unique opportunity to work on open-source LLMs in production settings
- Close collaboration with experienced AI engineers and the founder (15 years in software & AI)
- A flexible setup: part-time or full-time, in Louvain-la-Neuve or hybrid
- Freedom to explore, test, and ship your own ideas
- A real impact on product and strategy
- An internship stipend and potential for long-term collaboration or freelance continuation

This is a hands-on experience where you’ll learn fast, build real things, and shape how teams use AI at work.

Compensation: EUR 200 to 300 per month.

Sagy is building the autonomous memory layer for modern teams.
We turn conversations, tickets, decisions, and documents into a living, trusted, and structured company memory — automatically, with humans in control. Our mission is to help engineering and product teams stop losing context, ramp new hires faster, and reduce repeated questions by capturing knowledge as it happens.

Founded in 2025 and based in Louvain-la-Neuve (Belgium), Sagy is an early-stage AI startup tackling the knowledge fragmentation problem at its root. We’re backed by technical expertise, product obsession, and a vision to redefine how teams access and grow their internal knowledge.