Datasets#

This guide showcases some features of the Dataset classes in the Argilla client. The Dataset classes are lightweight containers for Argilla records. These classes facilitate importing from and exporting to different formats (e.g., pandas.DataFrame, datasets.Dataset) as well as sharing and versioning Argilla datasets using the Hugging Face Hub.

For each record type there's a corresponding Dataset class called DatasetFor<RecordType>. You can look up their API in the reference section.

Creating a Dataset#

Under the hood, the Dataset classes store the records in a simple Python list. Therefore, working with a Dataset class is not very different from working with a simple list of records:

[ ]:
import argilla as rg

# Start with a list of Argilla records
dataset_rg = rg.DatasetForTextClassification(my_records)

# Loop over the dataset
for record in dataset_rg:
    print(record)

# Index into the dataset
dataset_rg[0] = rg.TextClassificationRecord(text="replace record")

# log a dataset to the Argilla web app
rg.log(dataset_rg, "my_dataset")

The Dataset classes do some extra checks for you to make sure you do not mix record types when appending or indexing into a dataset.
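
For example, assigning a record of a different task raises an error. A minimal sketch (the exact exception class may differ between versions):

[ ]:
import argilla as rg

dataset_rg = rg.DatasetForTextClassification(
    [rg.TextClassificationRecord(text="example")]
)

# assigning a record of a different record type is rejected by the type checks
try:
    dataset_rg[0] = rg.TokenClassificationRecord(text="example", tokens=["example"])
except Exception as error:
    print(error)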

Updating a Dataset#

Argilla datasets have certain settings that you can configure via the rg.*Settings classes, for example rg.TextClassificationSettings.

Define a labeling schema#

You can define a labeling schema for your Argilla dataset, which fixes the allowed labels for your predictions and annotations. Once you set a labeling schema, each time you log to the corresponding dataset, Argilla will perform validations of the added predictions and annotations to make sure they comply with the schema.

[ ]:
import argilla as rg

# Define labeling schema
settings = rg.TextClassificationSettings(label_schema=["A", "B", "C"])

# Apply settings to a new or already existing dataset
rg.configure_dataset(name="my_dataset", settings=settings)

# Logging to the newly created dataset triggers the validation checks
rg.log(rg.TextClassificationRecord(text="text", annotation="D"), "my_dataset")
# BadRequestApiError: Argilla server returned an error with http status: 400
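
The same pattern applies to other tasks. A minimal sketch for a token classification dataset, assuming your client version provides rg.TokenClassificationSettings with an analogous label_schema argument:

[ ]:
import argilla as rg

# entity labels allowed for predictions and annotations in this dataset
settings = rg.TokenClassificationSettings(label_schema=["PER", "ORG", "LOC"])

# apply the settings to a new or already existing dataset
rg.configure_dataset(name="my_ner_dataset", settings=settings)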

Updating Records#

It is possible to update records in your Argilla datasets using our Python API. This approach works like an upsert in a normal database, based on the record id: any parameters you set will overwrite the stored values if you use the id of the original record.

[ ]:
import argilla as rg

# load all records in the dataset, or restrict to a subset via the `query` parameter
dataset_rg = rg.load("my_first_dataset")

# modify the metadata of the first record (if there is no metadata dict yet, you might need to create it)
dataset_rg[0].metadata["my_metadata"] = "im a new value"

# log the record to update it; this keeps everything else and adds the my_metadata field and value
rg.log(name="my_first_dataset", records=dataset_rg[0])
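
If you only want to update a subset of records, you can restrict what rg.load() returns via its query parameter and log the modified records back. A minimal sketch, assuming a full-text search for "spam" (the query string itself is an assumption; it follows the same syntax as the search bar in the web app):

[ ]:
import argilla as rg

# load only the records matching the query
subset_rg = rg.load("my_first_dataset", query="text:spam")

# modify the loaded records and log them back; matching ids make this an upsert
for record in subset_rg:
    record.metadata["reviewed"] = "yes"

rg.log(records=subset_rg, name="my_first_dataset")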

Importing a Dataset#

When you have your data in a pandas DataFrame or a datasets Dataset, we provide some neat shortcuts to import this data into an Argilla Dataset. You have to make sure that the data follows the record model of a specific task, otherwise you will get validation errors. Columns in your DataFrame/Dataset that are not supported or recognized will simply be ignored.

The record models of the tasks are explained in the reference section.

Note

Due to its pyarrow nature, data in a datasets.Dataset has to follow a slightly different model, which you can look up in the examples of the Dataset*.from_datasets docstrings.

[ ]:
import argilla as rg

# import data from a pandas DataFrame
dataset_rg = rg.read_pandas(my_dataframe, task="TextClassification")
# or
dataset_rg = rg.DatasetForTextClassification.from_pandas(my_dataframe)

# import data from a datasets Dataset
dataset_rg = rg.read_datasets(my_dataset, task="TextClassification")
# or
dataset_rg = rg.DatasetForTextClassification.from_datasets(my_dataset)

We also provide helper arguments you can use to read almost arbitrary datasets for a given task from the Hugging Face Hub. They map certain input arguments of the Argilla records to columns of the given dataset. Let's have a look at a few examples:

[ ]:
import argilla as rg
from datasets import load_dataset

# the "poem_sentiment" dataset has columns "verse_text" and "label"
dataset_rg = rg.DatasetForTextClassification.from_datasets(
    dataset=load_dataset("poem_sentiment", split="test"),
    text="verse_text",
    annotation="label",
)

# the "snli" dataset has the columns "premise", "hypothesis" and "label"
dataset_rg = rg.DatasetForTextClassification.from_datasets(
    dataset=load_dataset("snli", split="test"),
    inputs=["premise", "hypothesis"],
    annotation="label",
)

# the "conll2003" dataset has the columns "id", "tokens", "pos_tags", "chunk_tags" and "ner_tags"
rg.DatasetForTokenClassification.from_datasets(
    dataset=load_dataset("conll2003", split="test"),
    tags="ner_tags",
)

# the "xsum" dataset has the columns "id", "document" and "summary"
rg.DatasetForText2Text.from_datasets(
    dataset=load_dataset("xsum", split="test"),
    text="document",
    annotation="summary",
)

You can also use the shortcut rg.read_datasets(dataset=..., task=..., **kwargs) where the keyword arguments are passed on to the corresponding from_datasets() method.
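
For instance, the first example above can be rewritten with this shortcut; the extra keyword arguments are forwarded to DatasetForTextClassification.from_datasets():

[ ]:
import argilla as rg
from datasets import load_dataset

# equivalent to DatasetForTextClassification.from_datasets(..., text=..., annotation=...)
dataset_rg = rg.read_datasets(
    dataset=load_dataset("poem_sentiment", split="test"),
    task="TextClassification",
    text="verse_text",
    annotation="label",
)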

Reindexing a Dataset#

Sometimes updates require us to reindex the data.

Argilla Metrics#

For our internally computed metrics, this can be done by simply loading the records and logging them back to the same dataset. This works because our internal metrics are computed and updated during logging.

[ ]:
import argilla as rg

dataset = "my-outdated-dataset"
ds = rg.load(dataset)
rg.log(ds, dataset)

Elasticsearch#

For Elasticsearch indices, re-indexing requires a bit more effort. To be certain of a proper re-indexing, we recommend loading the records and storing them within a completely new index.

[ ]:
import argilla as rg

dataset = "my-outdated-dataset"
ds = rg.load(dataset)
new_dataset = "my-new-dataset"
rg.log(ds, new_dataset)

Sharing on Hugging Face#

You can easily share your Argilla dataset with your community via the Hugging Face Hub. For this, you just need to export your Argilla Dataset to a datasets.Dataset and push it to the Hub:

[ ]:
import argilla as rg

# load your annotated dataset from the Argilla web app
dataset_rg = rg.load("my_dataset")

# export your Argilla Dataset to a datasets Dataset
dataset_ds = dataset_rg.to_datasets()

# push the dataset to the Hugging Face Hub
dataset_ds.push_to_hub("my_dataset")

Afterward, your community can easily access your annotated dataset and log it directly to the Argilla web app:

[ ]:
import argilla as rg
from datasets import load_dataset

# download the dataset from the Hugging Face Hub
dataset_ds = load_dataset("user/my_dataset", split="train")

# read in the dataset, assuming it's a dataset for text classification
dataset_rg = rg.read_datasets(dataset_ds, task="TextClassification")

# log the dataset to the Argilla web app
rg.log(dataset_rg, "dataset_by_user")

Prepare dataset for training#

If you want to train a Hugging Face transformer or a spaCy NER pipeline, we provide a handy method to prepare your dataset: DatasetFor*.prepare_for_training(). It will return a Hugging Face dataset or a spaCy DocBin, optimized for the training process with the Hugging Face Trainer or the spaCy CLI. Our libraries deep dive and training tutorials show complete training workflows for your favorite packages.

TextClassification#

For text classification tasks, it flattens the inputs into separate columns of the returned dataset, converts the annotations of your records into integers, and writes them to a label column:

[ ]:
import argilla as rg

dataset_rg = rg.DatasetForTextClassification(
    [
        rg.TextClassificationRecord(
            inputs={"title": "My title", "content": "My content"}, annotation="news"
        )
    ]
)

dataset_rg.prepare_for_training()[0]
# Output:
# {'title': 'My title', 'content': 'My content', 'label': 0}

TokenClassification#

For token classification tasks, it converts the annotations of a record into integers representing BIO tags and writes them to a ner_tags column. The output format is controlled by passing the framework argument as transformers or spacy:

[ ]:
import argilla as rg

dataset_rg = rg.DatasetForTokenClassification(
    [
        rg.TokenClassificationRecord(
            text="I live in Madrid",
            tokens=["I", "live", "in", "Madrid"],
            annotation=[("LOC", 10, 16)],
        )
    ]
)

dataset_rg.prepare_for_training(framework="transformers")[0]
# Output:
# {..., 'tokens': ['I', 'live', 'in', 'Madrid'], 'ner_tags': [0, 0, 0, 1], ...}

[ ]:
import spacy

nlp = spacy.blank("en")
dataset_rg.prepare_for_training(framework="spacy", lang=nlp)
# Output:
# <spacy.tokens._serialize.DocBin object at 0x280613af0>
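
The returned DocBin can be serialized to disk so the spaCy CLI can pick it up via the paths in your training config. A minimal sketch continuing the example above; the file name and the training config are assumptions:

[ ]:
import spacy

nlp = spacy.blank("en")

# serialize the training data so `spacy train` can consume it
doc_bin = dataset_rg.prepare_for_training(framework="spacy", lang=nlp)
doc_bin.to_disk("./train.spacy")

You would then reference the serialized file from your spaCy config, for example via python -m spacy train config.cfg --paths.train ./train.spacy.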