Open In Colab  View Notebook on GitHub

Note

This tutorial demonstrates a sample usage of FeedbackDataset, which offers an implementation different from the older TextClassificationDataset, Text2TextDataset and TokenClassificationDataset. For info about the older datasets, you can have a look at them here. Not sure which dataset to use? Check out our section on choosing a dataset.

Workflow Feedback Dataset#

Argilla Feedback is a tool designed to obtain and manage feedback data from annotators as well as suggestions from small and large language models.

Install Libraries#

Install the latest version of Argilla in Colab, along with other libraries and models used in this notebook.

[ ]:
!pip install argilla datasets setfit evaluate seqeval

Set Up Argilla#

If you have already deployed Argilla Server, then you can skip this step. Otherwise, you can quickly deploy it in two different ways:

  • You can deploy Argilla Server on HF Spaces.

  • Alternatively, if you want to run Argilla locally on your own computer, the easiest way to get Argilla UI up and running is to deploy on Docker:

    docker run -d --name quickstart -p 6900:6900 argilla/argilla-quickstart:latest
    

More info on Installation here.

Connect to Argilla#

It is possible to connect to your Argilla instance by importing the Argilla library, setting the environment variables, and calling rg.init().

  • ARGILLA_API_URL: The URL of the Argilla Server.

    • If you're using Docker, it is http://localhost:6900 by default.

    • If you're using HF Spaces, it is constructed as https://[your-owner-name]-[your_space_name].hf.space.

  • ARGILLA_API_KEY: The API key of the Argilla Server. It is owner by default.

  • HF_TOKEN: The Hugging Face API token. It is only needed if you're using a private HF Space. You can configure it in your profile: Settings > Access Tokens.

  • workspace: A "space" inside your Argilla instance where authorized users can collaborate. It is argilla by default.

For more info about custom configurations like headers, workspace separation or access credentials, check our config page.

[1]:
import argilla as rg
from argilla._constants import DEFAULT_API_KEY
[2]:
# Argilla credentials
api_url = "http://localhost:6900"  # "https://<YOUR-HF-SPACE>.hf.space"
api_key = DEFAULT_API_KEY  # admin.apikey
# Huggingface credentials
hf_token = "hf_..."
[3]:
import argilla as rg
rg.init(api_url=api_url, api_key=api_key)

# # If you want to use your private HF Space
# rg.init(extra_headers={"Authorization": f"Bearer {hf_token}"})
C:\Users\sarah\Documents\argilla\src\argilla\client\client.py:154: UserWarning: Default user was detected and no workspace configuration was provided, so the default 'argilla' workspace will be used. If you want to setup another workspace, use the `rg.set_workspace` function or provide a different one on `rg.init`
  warnings.warn(

Enable Telemetry#

We gain valuable insights from how you interact with our tutorials. Running the following lines of code helps us understand whether this tutorial is serving you effectively, so that we can improve the content we offer. This is entirely anonymous, and you can skip this step if you prefer. For more info, please check out the Telemetry page.

[ ]:
try:
    from argilla.utils.telemetry import tutorial_running
    tutorial_running()
except ImportError:
    print("Telemetry is introduced in Argilla 1.20.0 and not found in the current installation. Skipping telemetry.")

Create a Dataset#

FeedbackDataset is the container for Argilla Feedback structure. Argilla Feedback offers different components for FeedbackDatasets that you can employ for various aspects of your workflow. For a more detailed explanation, refer to the documentation and the end-to-end tutorials for beginners.

To start, we need to configure the FeedbackDataset. To do so, there are two options: using a pre-defined template or creating a custom one.

Use a Task Template#

Argilla offers a set of pre-defined templates for different tasks. You can use them to configure your dataset in a straightforward way. For instance, if you want to create a dataset for simple text classification, you can use the following code:

[25]:
dataset = rg.FeedbackDataset.for_text_classification(
    labels=["joy", "sadness"],
    multi_label=False,
    use_markdown=True,
    guidelines=None,
    metadata_properties=None,
    vectors_settings=None,
)
dataset
[25]:
FeedbackDataset(
   fields=[TextField(name='text', title='Text', required=True, type='text', use_markdown=True)]
   questions=[LabelQuestion(name='label', title='Label', description='Classify the text by selecting the correct label from the given list of labels.', required=True, type='label_selection', labels=['joy', 'sadness'], visible_labels=None)]
   guidelines=This is a text classification dataset that contains texts and labels. Given a set of texts and a predefined set of labels, the goal of text classification is to assign one label to each text based on its content. Please classify the texts by making the correct selection.)
   metadata_properties=[])
)

Now that we have our dataset, we can push the dataset to the Argilla space.

Note

From Argilla 1.14.0, calling push_to_argilla will not just push the FeedbackDataset into Argilla, but will also return the remote FeedbackDataset instance, which implies that the additions, updates, and deletions of records will be pushed to Argilla as soon as they are made. This is a change from previous versions of Argilla, where you had to call push_to_argilla again to push the changes to Argilla.

[ ]:
try:
    dataset.push_to_argilla(name="my-first-dataset", workspace="argilla")
except Exception:
    # The dataset may already exist in the workspace
    pass

Configure a Custom Dataset#

If your dataset does not fit into one of the pre-defined templates, you can create a custom dataset by defining the fields, the different question types, the metadata properties, and the vector settings.

Add the Records#

A record refers to each of the data items that will be annotated by the annotator team. The records are the pieces of information shown to the user in the UI in order to complete the annotation task. In this sample dataset, each record consists only of a text to be labeled.

[27]:
records = [
    rg.FeedbackRecord(
        fields={
            "text": "I am so happy today",
        },
    ),
    rg.FeedbackRecord(
        fields={
            "text": "I feel sad today",
        },
    )
]
dataset.add_records(records)
[28]:
dataset.records
[28]:
[FeedbackRecord(fields={'text': 'I am so happy today'}, metadata={}, vectors={}, responses=[], suggestions=(), external_id=None),
 FeedbackRecord(fields={'text': 'I feel sad today'}, metadata={}, vectors={}, responses=[], suggestions=(), external_id=None)]

Argilla also offers a way to use suggestions and responses from other models as a starting point for annotators. This way, annotators can save time and effort by correcting the predictions or answers instead of annotating from scratch.

Train a model#

As with other datasets, Feedback datasets also allow you to create a training pipeline and run inference with the resulting model. After you gather responses with Argilla Feedback, you can easily fine-tune an LLM. In this example, we will complete a text classification task.

For fine-tuning, we will use the SetFit library and the Argilla Trainer, which is a powerful wrapper around many of our favorite NLP libraries. It provides a very intuitive abstraction to facilitate simple training workflows with sensible default configurations, without you having to worry about any data transformations from Argilla.

Let us first create the dataset we will train on. For this example, we will use the argilla/emotion dataset, which was created using Argilla. Each text item has responses labeling it with one of 6 sentiments: Sadness, Joy, Love, Anger, Fear and Surprise.

[ ]:
# Besides Argilla, it can also be imported with load_dataset from datasets
dataset_hf = rg.FeedbackDataset.from_huggingface("argilla/emotion", split="train[1:101]")
[17]:
dataset_hf
[17]:
FeedbackDataset(
   fields=[TextField(name='text', title='Text', required=True, type=<FieldTypes.text: 'text'>, use_markdown=False)]
   questions=[LabelQuestion(name='label', title='Label', description=None, required=True, type=<QuestionTypes.label_selection: 'label_selection'>, labels={'0': 'sadness', '1': 'joy', '2': 'love', '3': 'anger', '4': 'fear', '5': 'surprise'}, visible_labels=6)]
   guidelines=Argilla port of [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion).)
   metadata_properties=[])
)

We can then start building a training pipeline by first defining the TrainingTask, which specifies how the data should be processed and formatted for the associated task and framework. Each task has its own classmethod, and the data formatting can always be customized via formatting_func. You can visit this page for more info. Simpler tasks like text classification can be defined using the default settings, as we do in this example.

[ ]:
from argilla.feedback import TrainingTask

task = TrainingTask.for_text_classification(
    text=dataset_hf.field_by_name("text"),
    label=dataset_hf.question_by_name("label")
)

We can then define our ArgillaTrainer for any of the supported frameworks and customize the training configuration via ArgillaTrainer.update_config. Here, we use the setfit framework.

[ ]:
from argilla.feedback import ArgillaTrainer

trainer = ArgillaTrainer(
    dataset=dataset_hf,
    task=task,
    framework="setfit",
    train_size=0.8
)

You can update the model config via update_config.

[ ]:
trainer.update_config(num_train_epochs=1, num_iterations=1)

We can now train the model with train

[ ]:
trainer.train(output_dir="setfit_model")

and make inferences with predict.

[ ]:
trainer.predict("This is just perfect!")

We have trained a model with a FeedbackDataset in this tutorial. For more info about concepts in Argilla Feedback and LLMs, see here. For a more detailed explanation, refer to the documentation and the end-to-end tutorials for beginners.