How-to Guide

This guide covers the practical aspects of setting up an annotation project for training and fine-tuning LLMs with Argilla's Feedback Task Datasets: from defining your task to collecting, organizing, and using the feedback effectively.

Create a Feedback Dataset

Methods to configure a Feedback Dataset and push it to Argilla.
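
For orientation, here is a minimal sketch of configuring a dataset and pushing it with the Python client. The field, question, dataset, and workspace names are illustrative placeholders, and the exact arguments may vary between client versions.

```python
import argilla as rg

# Connect to your Argilla instance (URL and key are placeholders).
rg.init(api_url="http://localhost:6900", api_key="owner.apikey")

# Define what annotators see (fields) and what they answer (questions).
dataset = rg.FeedbackDataset(
    guidelines="Rate the response and suggest a correction if needed.",
    fields=[
        rg.TextField(name="prompt"),
        rg.TextField(name="response"),
    ],
    questions=[
        rg.RatingQuestion(name="rating", values=[1, 2, 3, 4, 5]),
        rg.TextQuestion(name="correction", required=False),
    ],
)

# Add records and push the configured dataset to the server.
dataset.add_records([
    rg.FeedbackRecord(fields={"prompt": "What is Argilla?",
                              "response": "A data annotation tool."})
])
dataset.push_to_argilla(name="demo-feedback", workspace="admin")
```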

Assign annotations to your team

Workflows to organize your annotation team.
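
One simple workflow, sketched below with placeholder annotator names and records, is to split the records round-robin so that each annotator receives a non-overlapping chunk; each chunk can then be pushed to a dataset in that annotator's workspace.

```python
import argilla as rg

# Illustrative records; in practice these come from your own data source.
records = [
    rg.FeedbackRecord(fields={"prompt": f"Question {i}", "response": f"Answer {i}"})
    for i in range(10)
]

# Round-robin split so each record is assigned to exactly one annotator.
annotators = ["ana", "bruno", "carmen"]  # placeholder user names
assignments = {name: [] for name in annotators}
for i, record in enumerate(records):
    assignments[annotators[i % len(annotators)]].append(record)

for name, chunk in assignments.items():
    print(f"{name}: {len(chunk)} records")
    # Each chunk can then be pushed to a dataset in that annotator's workspace.
```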

Update a Feedback Dataset

Make changes to an existing Feedback Dataset.
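
As a rough example, assuming the dataset from the earlier sketch is already on the server, you can retrieve it with the client and append new records; the dataset and workspace names are placeholders.

```python
import argilla as rg

rg.init(api_url="http://localhost:6900", api_key="owner.apikey")

# Retrieve the dataset that was previously pushed to Argilla.
dataset = rg.FeedbackDataset.from_argilla(name="demo-feedback", workspace="admin")

# Append new records to the existing dataset.
dataset.add_records([
    rg.FeedbackRecord(fields={"prompt": "What is a Feedback Dataset?",
                              "response": "A dataset for collecting LLM feedback."})
])
```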

Filter a Feedback Dataset

Obtain a filtered version of your dataset based on the status of the annotations.
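
A minimal sketch, assuming a client version whose remote datasets expose `filter_by` and reusing the placeholder names from the earlier examples:

```python
import argilla as rg

rg.init(api_url="http://localhost:6900", api_key="owner.apikey")

dataset = rg.FeedbackDataset.from_argilla(name="demo-feedback", workspace="admin")

# Keep only records whose annotations have been submitted.
submitted = dataset.filter_by(response_status="submitted")
print(f"{len(submitted.records)} submitted records")
```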

Annotate a Feedback Dataset

Check the Feedback Task UI and the available shortcuts.

Collect responses

Collect annotations and resolve disagreements.
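
The sketch below shows one possible aggregation strategy, averaging the submitted ratings per record to resolve disagreements; it assumes the question and field names from the earlier examples and may need adapting to your schema.

```python
import argilla as rg
from statistics import mean

rg.init(api_url="http://localhost:6900", api_key="owner.apikey")
dataset = rg.FeedbackDataset.from_argilla(name="demo-feedback", workspace="admin")

for record in dataset.records:
    # Only consider responses that annotators actually submitted.
    ratings = [
        response.values["rating"].value
        for response in record.responses
        if response.status == "submitted" and "rating" in response.values
    ]
    if ratings:
        # Resolve disagreements by averaging the submitted ratings.
        print(record.fields["prompt"], "->", round(mean(ratings), 2))
```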

Export a Feedback Dataset

Export your dataset and save it to the Hugging Face Hub or locally.
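
For illustration, assuming the retrieved dataset exposes the export methods in your client version (older and newer clients differ slightly; some require pulling a local copy first), and using placeholder names:

```python
import argilla as rg

rg.init(api_url="http://localhost:6900", api_key="owner.apikey")
dataset = rg.FeedbackDataset.from_argilla(name="demo-feedback", workspace="admin")
# Depending on the client version, you may need a local copy first,
# e.g. via `dataset.pull()`, before exporting.

# Option 1: push the dataset (records, fields, questions, guidelines) to the Hub.
dataset.push_to_huggingface("my-org/demo-feedback")

# Option 2: keep a local copy as a Hugging Face `datasets.Dataset`.
hf_dataset = dataset.format_as("datasets")
hf_dataset.save_to_disk("demo-feedback-local")
```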

Monitor LangChain apps

Use the Argilla LangChain callback for monitoring, evaluation, and fine-tuning.
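
A rough sketch of attaching the callback to a LangChain LLM, assuming a langchain version that ships `ArgillaCallbackHandler`, an OpenAI API key in the environment, and an existing Argilla dataset with the fields the callback expects; all names below are placeholders.

```python
from langchain.callbacks import ArgillaCallbackHandler
from langchain.llms import OpenAI

# The callback logs prompts and completions to an existing Argilla dataset.
argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-monitoring",  # placeholder dataset name
    workspace_name="admin",               # placeholder workspace
    api_url="http://localhost:6900",
    api_key="owner.apikey",
)

llm = OpenAI(temperature=0.7, callbacks=[argilla_callback])
llm.generate(["Write a one-line summary of Argilla."])
```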

Fine-tune LLMs

Fine-tune an LLM or other models with the feedback collected from Argilla.
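
As one possible preparation step (not the only route), the sketch below turns submitted corrections into prompt-completion pairs that any supervised fine-tuning framework can consume; it reuses the question and field names from the earlier examples.

```python
import argilla as rg

rg.init(api_url="http://localhost:6900", api_key="owner.apikey")
dataset = rg.FeedbackDataset.from_argilla(name="demo-feedback", workspace="admin")

# Build (prompt, completion) pairs from the collected feedback:
# prefer an annotator's correction, otherwise keep the original response.
pairs = []
for record in dataset.records:
    completion = record.fields["response"]
    for response in record.responses:
        if response.status == "submitted" and "correction" in response.values:
            corrected = response.values["correction"].value
            if corrected:
                completion = corrected
                break
    pairs.append({"prompt": record.fields["prompt"], "completion": completion})

# `pairs` can now be converted to a datasets.Dataset and used with the
# fine-tuning framework of your choice (e.g. TRL or transformers).
```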

Feedback Dataset snapshot (screenshot)