Bluecoders

Fine-tuning


Fine-tuning is the process of continuing the training of a pre-trained AI model (for example an LLM) on a dataset specific to a domain or task, in order to specialise its behaviour without starting from scratch.
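The idea of resuming training from pre-trained weights can be shown with a toy sketch (a small linear model standing in for an LLM; all data and shapes here are illustrative, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came from large-scale pre-training.
w_pretrained = np.array([1.0, -2.0, 0.5])

# Small domain-specific dataset whose true mapping differs slightly.
w_domain = np.array([1.2, -1.8, 0.4])
X = rng.normal(size=(64, 3))
y = X @ w_domain

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

# Fine-tuning = continue gradient descent from the pre-trained
# weights rather than from a random initialisation.
w = w_pretrained.copy()
lr = 0.05
for _ in range(200):
    grad = 2 / len(X) * X.T @ (X @ w - y)  # gradient of the MSE loss
    w -= lr * grad

print(mse(w_pretrained), mse(w))  # loss drops after fine-tuning
```

The same loop starting from random weights would also converge on this toy problem; the point of fine-tuning is that, for large models, the pre-trained starting point already encodes most of the capability, so only a short, cheap training run on the specialised data is needed.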

Several variants exist: classic supervised fine-tuning, RLHF (Reinforcement Learning from Human Feedback), DPO (Direct Preference Optimisation) and parameter-efficient techniques such as LoRA and QLoRA, which update only a small fraction of the model's parameters.

In 2026, fine-tuning remains useful for niche needs (style, tone, domain vocabulary), but for keeping a model's knowledge current it is often superseded by RAG and good prompting, which are simpler to update.
