How-to Guide

This guide covers the practical aspects of setting up an annotation project for training and fine-tuning LLMs with Argilla's Feedback Task datasets, from defining your task to collecting, organizing, and using the feedback effectively.

Create a Feedback Dataset

Methods to configure a Feedback Dataset and push it to Argilla.

Set up your annotation team

Workflows to organize your annotation team.

Annotate a Feedback Dataset

Learn about the Feedback Task UI and its keyboard shortcuts.

Collect responses

Collect annotations and resolve disagreements.

Export a Feedback Dataset

Export your dataset and save it to the Hugging Face Hub or locally.

Monitor LangChain apps

Use the Argilla LangChain callback for monitoring, evaluation, and fine-tuning.

Fine-tune LLMs

Fine-tune an LLM with the feedback collected from Argilla.

Fine-tune other models

Fine-tune basic models with feedback collected from Argilla.
