
Fine-tuning (deep learning)


Fine-tuning is a deep learning technique used in artificial intelligence[1] and machine learning. It improves the performance of an existing neural network model by continuing to train, and thereby repurposing, some or all of its parameters for a new task. Fine-tuning is a form of transfer learning, in which knowledge gained on one task is applied to a related task. It can be applied to the entire network or to a selected subset of layers, and models are often augmented with small adapter modules to make the process more parameter-efficient. Fine-tuning is particularly useful in natural language processing, notably for language modeling. It can, however, reduce a model's robustness, and techniques such as linearly interpolating between the pre-trained and fine-tuned weights can help maintain performance. Methods such as Low-rank adaptation (LoRA) offer parameter-efficient alternatives to fine-tuning the full model.
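The linear interpolation mentioned above simply blends the pre-trained and fine-tuned weights. Below is a minimal sketch, assuming PyTorch-style state dicts and an illustrative interpolation coefficient; it is one possible way to build such a blend, not a prescribed implementation.

```python
def interpolate_weights(pretrained_state, finetuned_state, alpha=0.5):
    """Blend two state dicts: (1 - alpha) * pre-trained + alpha * fine-tuned."""
    # Assumes both state dicts hold tensors with matching names and shapes.
    return {
        name: (1 - alpha) * pretrained_state[name] + alpha * finetuned_state[name]
        for name in pretrained_state
    }

# Illustrative usage (the model and state dicts are placeholders):
# model.load_state_dict(interpolate_weights(pretrained_sd, finetuned_sd, alpha=0.5))
```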

Terms definitions
1. artificial intelligence. The discipline of Artificial Intelligence (AI) is a subset of computer science dedicated to developing systems capable of executing tasks usually requiring human intellect, such as reasoning, learning, planning, perception, and language comprehension. Drawing upon diverse fields such as psychology, linguistics, philosophy, and neuroscience, AI is instrumental in the creation of machine learning models and natural language processing systems. It also significantly contributes to the development of virtual assistants and affective computing systems. AI finds applications in numerous sectors like healthcare, industry, government, and education. However, it also brings up ethical and societal issues, thus requiring regulatory policies. With the advent of sophisticated techniques like deep learning and generative AI, the field continues to expand, opening up new avenues in various sectors.

In deep learning, fine-tuning is an approach to transfer learning in which the weights of a pre-trained model are trained on new data. Fine-tuning can be done on the entire neural network, or on only a subset of its layers, in which case the layers that are not being fine-tuned are "frozen" (not updated during the backpropagation step). A model may also be augmented with "adapters" that consist of far fewer parameters than the original model, and fine-tuned in a parameter-efficient way by tuning the weights of the adapters and leaving the rest of the model's weights frozen.
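A minimal sketch of the adapter idea, assuming PyTorch and written in the low-rank style of LoRA: the pre-trained linear layer is frozen and only two small matrices are trained. The class name, rank, and dimensions are illustrative and not taken from any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        # Freeze the original (pre-trained) weights.
        for p in self.base.parameters():
            p.requires_grad = False
        # The low-rank matrices A and B are the only trainable parameters.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # Frozen base output plus the low-rank correction (x @ A^T) @ B^T.
        return self.base(x) + x @ self.A.t() @ self.B.t()

layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]  # only A and B
```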

For some architectures, such as convolutional neural networks, it is common to keep the earlier layers (those closest to the input) frozen, because they capture lower-level features, while later layers often capture higher-level features that are more closely tied to the task the model is being fine-tuned for.
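A minimal sketch of this pattern, assuming PyTorch and torchvision: the input-side stages of a pre-trained ResNet-18 are frozen while the later stages and the classifier remain trainable. Which stages to freeze is an illustrative choice, and the exact weight-loading argument depends on the torchvision version.

```python
from torchvision import models

# Load a ResNet-18 with pre-trained weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the input-side stages, which capture low-level features
# such as edges and textures.
for module in (model.conv1, model.bn1, model.layer1, model.layer2):
    for p in module.parameters():
        p.requires_grad = False

# layer3, layer4, and the classifier head stay trainable, so fine-tuning
# adapts the higher-level, more task-specific features.
trainable = [p for p in model.parameters() if p.requires_grad]
```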

Models that are pre-trained on large and general corpora are usually fine-tuned by reusing the model's parameters as a starting point and adding a task-specific layer trained from scratch. Fine-tuning the full model is common as well and often yields better results, but it is more computationally expensive.
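A minimal sketch of this setup, assuming the Hugging Face transformers library: the encoder weights are reused from a pre-trained checkpoint while a task-specific classification head is newly initialized and trained from scratch. The checkpoint name, number of labels, and learning rate are illustrative choices.

```python
import torch
from transformers import AutoModelForSequenceClassification

# The encoder weights come from the pre-trained checkpoint; the
# sequence-classification head on top is newly initialized and is
# learned during fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Fine-tuning all parameters (encoder plus new head) is more expensive
# than training the head alone, but often performs better.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
```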

Fine-tuning is typically accomplished with supervised learning, but there are also techniques to fine-tune a model using weak supervision. Fine-tuning can be combined with an objective based on reinforcement learning from human feedback (RLHF) to produce language models such as ChatGPT (a fine-tuned version of GPT-3) and Sparrow.
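A minimal sketch of a single supervised fine-tuning step, assuming PyTorch: the pre-trained model is updated against task labels with a standard cross-entropy loss. The model, optimizer, and batch are placeholders; an RLHF pipeline is substantially more involved and is not shown here.

```python
import torch.nn.functional as F

def supervised_finetune_step(model, optimizer, inputs, labels):
    """One gradient step of supervised fine-tuning on a labeled batch."""
    model.train()
    optimizer.zero_grad()
    logits = model(inputs)                  # forward pass through the pre-trained model
    loss = F.cross_entropy(logits, labels)  # supervised objective on the task labels
    loss.backward()                         # backpropagate through the trainable weights
    optimizer.step()                        # update the non-frozen parameters
    return loss.item()
```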
