Repurpose Refuel's finetuning workflow to distill your own models
Distillation is a technique for training a smaller, faster, and more efficient model to match the performance of a larger pre-trained language model on a specific task. Our customers have successfully distilled the performance of larger models like GPT-4o into smaller models like Refuel LLM V2 Mini.
Step 1: Label the dataset with the larger model
Once you have set up the task and dataset you would like to distill in Refuel, select the larger model you want to label the dataset with (this is the model you are distilling knowledge from). Hit “Run Task” to start labeling and generating the training dataset.
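Outside the Refuel UI, this step is conceptually equivalent to asking the teacher model to label each unlabeled example. The sketch below is a minimal illustration using the OpenAI Python client with GPT-4o as the teacher; the sentiment-classification task, prompt, and file name are hypothetical assumptions, not part of the Refuel workflow itself.

```python
# Minimal sketch: generate teacher labels with a larger model (here GPT-4o).
# The task (sentiment classification), prompt, and file name are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

unlabeled_texts = [
    "The checkout flow was fast and painless.",
    "Support never answered my ticket.",
]

teacher_rows = []
for text in unlabeled_texts:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment as positive or negative. Reply with one word."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    teacher_rows.append({"text": text, "label": label})

# These (text, teacher label) pairs become the distillation training set.
with open("teacher_labeled.jsonl", "w") as f:
    for row in teacher_rows:
        f.write(json.dumps(row) + "\n")
```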
Step 2: Select the base model to distill into
Click “Finetune Model” and select the base model you would like to distill into. We recommend using the “Refuel LLM V2 Mini” model as the base model for distillation.
Step 3: Enable data augmentation and start distillation
Once you have selected the base model, enable “Augment human-verified labels” and choose the larger model you used to run the task earlier (the model you are distilling knowledge from). Hit “Start” to begin the distillation process!
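Conceptually, “Augment human-verified labels” means the student model is trained on your human-verified labels plus the teacher model’s labels for the remaining rows. The sketch below shows that idea with the Hugging Face transformers Trainer; the stand-in model name, file names, prompt format, and hyperparameters are illustrative assumptions and do not reflect Refuel’s actual training configuration, which the app handles for you.

```python
# Minimal sketch of the distillation fine-tune: merge human-verified labels with
# teacher-generated labels, then fine-tune a small student model on the union.
# Model name, files, and hyperparameters are illustrative, not Refuel internals.
import json
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

# Human-verified labels take precedence; teacher labels fill in the rest.
human_rows = load_jsonl("human_verified.jsonl")     # hypothetical file
teacher_rows = load_jsonl("teacher_labeled.jsonl")  # produced in the Step 1 sketch
verified_texts = {row["text"] for row in human_rows}
rows = human_rows + [r for r in teacher_rows if r["text"] not in verified_texts]

student_name = "gpt2"  # small stand-in for the base model you selected in Step 2
tokenizer = AutoTokenizer.from_pretrained(student_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(student_name)

def tokenize(example):
    # Format each row as a prompt/label pair the student learns to complete.
    prompt = f"Classify the sentiment: {example['text']}\nLabel: {example['label']}"
    return tokenizer(prompt, truncation=True, max_length=256)

dataset = Dataset.from_list(rows).map(tokenize, remove_columns=["text", "label"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-student", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is only that the student learns from both the verified labels and the teacher’s outputs; in the Refuel app, clicking “Start” runs this training for you.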