Client#

Here we describe the Python client of Argilla, which we divide into four basic modules:

  • Methods: These methods make up the interface to interact with Argilla's REST API.

  • Records: You need to wrap your data in these Records for Argilla to understand it.

  • Datasets: You can wrap your records in these Datasets for extra functionality.

  • FeedbackDataset: the dataset format for the FeedbackTask and LLM support.

Methods#

argilla.active_client()#

Returns the active argilla client.

If the active client is None, a default one is initialized.

Return type:

Argilla

argilla.copy(dataset, name_of_copy, workspace=None)#

Creates a copy of a dataset, including its tags and metadata.

Parameters:
  • dataset (str) – Name of the source dataset

  • name_of_copy (str) – Name of the copied dataset

  • workspace (Optional[str]) – If provided, the dataset will be copied to that workspace

Examples

>>> import argilla as rg
>>> rg.copy("my_dataset", name_of_copy="new_dataset")
>>> rg.load("new_dataset")
argilla.delete(name, workspace=None)#

Deletes a dataset.

Parameters:
  • name (str) – The dataset name.

  • workspace (Optional[str]) – The workspace to which records will be logged/loaded. If None (default) and the env variable ARGILLA_WORKSPACE is not set, it will default to the private user workspace.

Examples

>>> import argilla as rg
>>> rg.delete(name="example-dataset")
argilla.delete_records(name, workspace=None, query=None, ids=None, discard_only=False, discard_when_forbidden=True)#

Deletes records from an argilla dataset.

Parameters:
  • name (str) – The dataset name.

  • workspace (Optional[str]) – The workspace to which records will be logged/loaded. If None (default) and the env variable ARGILLA_WORKSPACE is not set, it will default to the private user workspace.

  • query (Optional[str]) – An ElasticSearch query with the query string syntax

  • ids (Optional[List[Union[str, int]]]) – If provided, deletes dataset records with the given ids.

  • discard_only (bool) – If True, matched records won't be deleted. Instead, they will be marked as Discarded

  • discard_when_forbidden (bool) – Only a super-user or the dataset creator can delete records from a dataset, so running a "hard" deletion as another user will raise a ForbiddenApiError. If this parameter is True, the client API will automatically try to mark the records as Discarded instead. Default: True

Returns:

The total number of matched records and the real number of processed errors. These numbers may differ if data conflicts are found during the operation (some matched records change during deletion).

Return type:

Tuple[int, int]

Examples

>>> ## Delete by id
>>> import argilla as rg
>>> rg.delete_records(name="example-dataset", ids=[1,3,5])
>>> ## Discard records by query
>>> import argilla as rg
>>> rg.delete_records(name="example-dataset", query="metadata.code=33", discard_only=True)
argilla.get_workspace()#

Returns the name of the active workspace.

Returns:

The name of the active workspace as a string.

Return type:

str

argilla.init(api_url=None, api_key=None, workspace=None, timeout=60, extra_headers=None)#

Init the Python client.

We will automatically init a default client for you when calling other client methods. The arguments provided here will overwrite your corresponding environment variables.

Parameters:
  • api_url (Optional[str]) – Address of the REST API. If None (default) and the env variable ARGILLA_API_URL is not set, it will default to http://localhost:6900.

  • api_key (Optional[str]) – Authentication key for the REST API. If None (default) and the env variable ARGILLA_API_KEY is not set, it will default to argilla.apikey.

  • workspace (Optional[str]) – The workspace to which records will be logged/loaded. If None (default) and the env variable ARGILLA_WORKSPACE is not set, it will default to the private user workspace.

  • timeout (int) – Seconds to wait before the connection times out. Default: 60.

  • extra_headers (Optional[Dict[str, str]]) – Extra HTTP headers sent to the server. You can use this to customize the headers of argilla client requests, e.g. for additional security restrictions. Default: None.

Examples

>>> import argilla as rg
>>>
>>> rg.init(api_url="http://localhost:9090", api_key="4AkeAPIk3Y")
>>> # Customizing request headers
>>> headers = {"X-Client-id":"id","X-Secret":"secret"}
>>> rg.init(api_url="http://localhost:9090", api_key="4AkeAPIk3Y", extra_headers=headers)
argilla.load(name, workspace=None, query=None, vector=None, ids=None, limit=None, sort=None, id_from=None, batch_size=250, include_vectors=True, include_metrics=True, as_pandas=None)#

Loads an argilla dataset.

Parameters:
  • name (str) – The dataset name.

  • workspace (Optional[str]) – The workspace to which records will be logged/loaded. If None (default) and the env variable ARGILLA_WORKSPACE is not set, it will default to the private user workspace.

  • query (Optional[str]) – An ElasticSearch query with the query string syntax

  • vector (Optional[Tuple[str, List[float]]]) – Vector configuration for a semantic search

  • ids (Optional[List[Union[str, int]]]) – If provided, loads dataset records with the given ids.

  • limit (Optional[int]) – The number of records to retrieve.

  • sort (Optional[List[Tuple[str, str]]]) – The fields on which to sort [(<field_name>, 'asc|desc')].

  • id_from (Optional[str]) – If provided, starts gathering records from that record on. As the records returned by the load method are sorted by ID, `id_from` can be used to load the dataset in batches.

  • batch_size (int) – If provided, loads batch_size samples per request. A lower batch size may help avoid timeouts.

  • include_vectors (bool) – When set to False, records will be retrieved without their vectors, if any. By default, this parameter is set to True, meaning that vectors will be included.

  • include_metrics (bool) – When set to False, records will be retrieved without their metrics. By default, this parameter is set to True, meaning that metrics will be included.

  • as_pandas – DEPRECATED! To get a pandas DataFrame, use rg.load('my_dataset').to_pandas().

Returns:

An argilla dataset.

Return type:

Union[DatasetForTextClassification, DatasetForTokenClassification, DatasetForText2Text]

Examples

Basic Loading: load the samples sorted by their ID

>>> import argilla as rg
>>> dataset = rg.load(name="example-dataset")
Iterate over a large dataset:

When dealing with a large dataset, you might want to load it in batches to reduce memory consumption and avoid network timeouts. To that end, a simple batch-iteration over the whole database can be done with the id_from parameter. This parameter acts as a delimiter, retrieving the N items after the given id, where N is determined by the limit parameter. NOTE: If no limit is given, the whole dataset after that ID will be retrieved.

>>> import argilla as rg
>>> dataset_batch_1 = rg.load(name="example-dataset", limit=1000)
>>> dataset_batch_2 = rg.load(name="example-dataset", limit=1000, id_from=dataset_batch_1[-1].id)
argilla.log(records, name, workspace=None, tags=None, metadata=None, batch_size=100, verbose=True, background=False, chunk_size=None, num_threads=0, max_retries=3)#

Logs Records to argilla.

The logging happens asynchronously in a background thread.

Parameters:
  • records (Union[TextClassificationRecord, TokenClassificationRecord, Text2TextRecord, TextGenerationRecord, Iterable[Union[TextClassificationRecord, TokenClassificationRecord, Text2TextRecord, TextGenerationRecord]], DatasetForTextClassification, DatasetForTokenClassification, DatasetForText2Text]) – The record, an iterable of records, or a dataset to log.

  • name (str) – The dataset name.

  • workspace (Optional[str]) – The workspace to which records will be logged/loaded. If None (default) and the env variable ARGILLA_WORKSPACE is not set, it will default to the private user workspace.

  • tags (Optional[Dict[str, str]]) – A dictionary of tags related to the dataset.

  • metadata (Optional[Dict[str, Any]]) – A dictionary of extra info for the dataset.

  • batch_size (int) – The batch size for a data bulk.

  • verbose (bool) – If True, shows a progress bar and prints a quick summary at the end.

  • background (bool) – If True, the call will NOT wait for the logging process to finish and will return an asyncio.Future object. You probably want to set verbose to False in that case.

  • chunk_size (Optional[int]) – DEPRECATED! Use batch_size instead.

  • num_threads (int) – If > 0, uses num_threads separate threads to send the batches concurrently. Default: 0, which means no threading at all.

  • max_retries (int) – Number of retries when logging a batch of records fails with a httpx.TransportError. Default: 3.

Returns:

Summary of the response from the REST API. If the background argument is set to True, an asyncio.Future will be returned instead.

Return type:

Union[BulkResponse, Future]

Examples

>>> import argilla as rg
>>> record = rg.TextClassificationRecord(
...     text="my first argilla example",
...     prediction=[('spam', 0.8), ('ham', 0.2)]
... )
>>> rg.log(record, name="example-dataset")
1 records logged to http://localhost:6900/datasets/argilla/example-dataset
BulkResponse(dataset='example-dataset', processed=1, failed=0)
>>>
>>> # Logging records in the background
>>> rg.log(record, name="example-dataset", background=True, verbose=False)
<Future at 0x7f675a1fffa0 state=pending>
argilla.set_workspace(workspace)#

Sets the active workspace.

Parameters:

workspace (str) – The new workspace

Records#

This module contains the data models for the interface.

class argilla.client.models.Framework(value)#

Frameworks supported by Argilla

Options:

  • transformers: Transformers

  • peft: PEFT Transformers library

  • setfit: SetFit Transformers library

  • spacy: Spacy Explosion

  • spacy-transformers: Spacy Transformers Explosion library

  • span_marker: SpanMarker Tom Aarsen library

  • spark-nlp: Spark NLP John Snow Labs library

  • openai: OpenAI LLMs

class argilla.client.models.Text2TextRecord(*, text, prediction=None, prediction_agent=None, annotation=None, annotation_agent=None, vectors=None, id=None, metadata=None, status=None, event_timestamp=None, metrics=None, search_keywords=None)#

Record for a text to text task

Parameters:
  • text (str) – The input of the record

  • prediction (Optional[List[Union[str, Tuple[str, float]]]]) – A list of strings or tuples containing predictions for the input text. If tuples, the first entry is the predicted text, the second entry is its corresponding score.

  • prediction_agent (Optional[str]) – Name of the prediction agent. By default, this is set to the hostname of your machine.

  • annotation (Optional[str]) – A string representing the expected output text for the given input text.

  • annotation_agent (Optional[str]) – Name of the annotation agent. By default, this is set to the hostname of your machine.

  • vectors (Optional[Dict[str, List[float]]]) – Embedding mappings (name to vector) for the natural language text of the record

  • id (Optional[Union[int, str]]) – The id of the record. By default (None), we will generate a unique ID for you.

  • metadata (Optional[Dict[str, Any]]) – Metadata for the record. Defaults to {}.

  • status (Optional[str]) – The status of the record. Options: 'Default', 'Edited', 'Discarded', 'Validated'. If an annotation is provided, this defaults to 'Validated', otherwise 'Default'.

  • event_timestamp (Optional[datetime]) – The timestamp for the creation of the record. Defaults to datetime.datetime.now().

  • metrics (Optional[Dict[str, Any]]) – READ ONLY! Record-level metrics provided by the server when using rg.load. This attribute will be ignored when using rg.log.

  • search_keywords (Optional[List[str]]) – READ ONLY! Relevant record keywords/terms for the provided query when using rg.load. This attribute will be ignored when using rg.log.

Examples

>>> import argilla as rg
>>> record = rg.Text2TextRecord(
...     text="My name is Sarah and I love my dog.",
...     prediction=["Je m'appelle Sarah et j'aime mon chien."],
...     vectors = {
...         "bert_base_uncased": [1.2, 2.3, 3.4, 5.2, 6.5],
...         "xlm_multilingual_uncased": [2.2, 5.3, 5.4, 3.2, 2.5]
...     }
... )
classmethod prediction_as_tuples(prediction)#

Preprocesses the predictions and wraps them in tuples if needed

Parameters:

prediction (Optional[List[Union[str, Tuple[str, float]]]]) –

class argilla.client.models.TextClassificationRecord(*, text=None, inputs=None, prediction=None, prediction_agent=None, annotation=None, annotation_agent=None, vectors=None, multi_label=False, explanation=None, id=None, metadata=None, status=None, event_timestamp=None, metrics=None, search_keywords=None)#

Record for text classification

Parameters:
  • text (Optional[str]) – The input of the record. Provide either 'text' or 'inputs'.

  • inputs (Optional[Union[str, List[str], Dict[str, Union[str, List[str]]]]]) – Various inputs of the record (see examples below). Provide either 'text' or 'inputs'.

  • prediction (Optional[List[Tuple[str, float]]]) – A list of tuples containing the predictions for the record. The first entry of the tuple is the predicted label, the second entry is its corresponding score.

  • prediction_agent (Optional[str]) – Name of the prediction agent. By default, this is set to the hostname of your machine.

  • annotation (Optional[Union[str, List[str]]]) – A string or a list of strings (multilabel) corresponding to the annotation (gold label) for the record.

  • annotation_agent (Optional[str]) – Name of the annotation agent. By default, this is set to the hostname of your machine.

  • vectors (Optional[Dict[str, List[float]]]) – Embedding mappings (name to vector) for the natural language text of the record

  • multi_label (bool) – Is the prediction/annotation for a multi-label classification task? Defaults to False.

  • explanation (Optional[Dict[str, List[TokenAttributions]]]) – A dictionary containing the attributions of each token to the prediction. The keys map the input of the record (see inputs) to the TokenAttributions.

  • id (Optional[Union[int, str]]) – The id of the record. By default (None), we will generate a unique ID for you.

  • metadata (Optional[Dict[str, Any]]) – Metadata for the record. Defaults to {}.

  • status (Optional[str]) – The status of the record. Options: 'Default', 'Edited', 'Discarded', 'Validated'. If an annotation is provided, this defaults to 'Validated', otherwise 'Default'.

  • event_timestamp (Optional[datetime]) – The timestamp for the creation of the record. Defaults to datetime.datetime.now().

  • metrics (Optional[Dict[str, Any]]) – READ ONLY! Record-level metrics provided by the server when using rg.load. This attribute will be ignored when using rg.log.

  • search_keywords (Optional[List[str]]) – READ ONLY! Relevant record keywords/terms for the provided query when using rg.load. This attribute will be ignored when using rg.log.

Examples

>>> # Single text input
>>> import argilla as rg
>>> record = rg.TextClassificationRecord(
...     text="My first argilla example",
...     prediction=[('eng', 0.9), ('esp', 0.1)],
...     vectors = {
...         "english_bert_vector": [1.2, 2.3, 3.1, 3.3]
...     }
... )
>>>
>>> # Various inputs
>>> record = rg.TextClassificationRecord(
...     inputs={
...         "subject": "Has ganado 1 million!",
...         "body": "Por usar argilla te ha tocado este premio: <link>"
...     },
...     prediction=[('spam', 0.99), ('ham', 0.01)],
...     annotation="spam",
...     vectors = {
...         "distilbert_uncased": [1.13, 4.1, 6.3, 4.2, 9.1],
...         "xlm_roberta_cased": [1.1, 2.1, 3.3, 4.2, 2.1],
...     }
... )
class argilla.client.models.TextGenerationRecord(*, text, prediction=None, prediction_agent=None, annotation=None, annotation_agent=None, vectors=None, id=None, metadata=None, status=None, event_timestamp=None, metrics=None, search_keywords=None)#

Parameters:
  • text (str) –

  • prediction (Optional[List[Union[str, Tuple[str, float]]]]) –

  • prediction_agent (Optional[str]) –

  • annotation (Optional[str]) –

  • annotation_agent (Optional[str]) –

  • vectors (Optional[Dict[str, List[float]]]) –

  • id (Optional[Union[int, str]]) –

  • metadata (Optional[Dict[str, Any]]) –

  • status (Optional[str]) –

  • event_timestamp (Optional[datetime]) –

  • metrics (Optional[Dict[str, Any]]) –

  • search_keywords (Optional[List[str]]) –

class argilla.client.models.TokenAttributions(*, token, attributions=None)#

Attribution of the token to the predicted label.

In the argilla app this is only supported for TextClassificationRecord and the multi_label=False case.

Parameters:
  • token (str) – The input token.

  • attributions (Dict[str, float]) – A dictionary containing label-attribution pairs.

class argilla.client.models.TokenClassificationRecord(text=None, tokens=None, tags=None, *, prediction=None, prediction_agent=None, annotation=None, annotation_agent=None, vectors=None, id=None, metadata=None, status=None, event_timestamp=None, metrics=None, search_keywords=None)#

Record for a token classification task

Parameters:
  • text (Optional[str]) – The input of the record

  • tokens (Optional[Union[List[str], Tuple[str, ...]]]) – The tokenized input of the record. We use this to guide the annotation process and to cross-check the spans of your prediction/annotation.

  • prediction (Optional[List[Union[Tuple[str, int, int], Tuple[str, int, int, Optional[float]]]]]) – A list of tuples containing the predictions for the record. The first entry of the tuple is the name of the predicted entity, the second and third entries correspond to the start and stop character indices of the entity. The fourth entry is optional and corresponds to the score of the entity (a float between 0 and 1).

  • prediction_agent (Optional[str]) – Name of the prediction agent. By default, this is set to the hostname of your machine.

  • annotation (Optional[List[Tuple[str, int, int]]]) – A list of tuples containing annotations (gold labels) for the record. The first entry of the tuple is the name of the entity, the second and third entries correspond to the start and stop character indices of the entity.

  • annotation_agent (Optional[str]) – Name of the annotation agent. By default, this is set to the hostname of your machine.

  • vectors (Optional[Dict[str, List[float]]]) – Embedding mappings (name to vector) for the natural language text of the record

  • id (Optional[Union[int, str]]) – The id of the record. By default (None), we will generate a unique ID for you.

  • metadata (Optional[Dict[str, Any]]) – Metadata for the record. Defaults to {}.

  • status (Optional[str]) – The status of the record. Options: 'Default', 'Edited', 'Discarded', 'Validated'. If an annotation is provided, this defaults to 'Validated', otherwise 'Default'.

  • event_timestamp (Optional[datetime]) – The timestamp for the creation of the record. Defaults to datetime.datetime.now().

  • metrics (Optional[Dict[str, Any]]) – READ ONLY! Record-level metrics provided by the server when using rg.load. This attribute will be ignored when using rg.log.

  • search_keywords (Optional[List[str]]) – READ ONLY! Relevant record keywords/terms for the provided query when using rg.load. This attribute will be ignored when using rg.log.

  • tags (Optional[List[str]]) –

Examples

>>> import argilla as rg
>>> record = rg.TokenClassificationRecord(
...     text = "Michael is a professor at Harvard",
...     tokens = ["Michael", "is", "a", "professor", "at", "Harvard"],
...     prediction = [('NAME', 0, 7), ('LOC', 26, 33)],
...     vectors = {
...            "bert_base_uncased": [3.2, 4.5, 5.6, 8.9]
...          }
... )
char_id2token_id(char_idx)#

DEPRECATED, please use the argilla.utils.span_utils.SpanUtils.char_to_token_idx dict instead.

Parameters:

char_idx (int) –

Return type:

Optional[int]

spans2iob(spans=None)#

DEPRECATED, please use the argilla.utils.SpanUtils.to_tags() method.

Parameters:

spans (Optional[List[Tuple[str, int, int]]]) –

Return type:

Optional[List[str]]

token_span(token_idx)#

DEPRECATED, please use the argilla.utils.span_utils.SpanUtils.token_to_char_idx dict instead.

Parameters:

token_idx (int) –

Return type:

Tuple[int, int]

Datasets#

class argilla.client.datasets.DatasetForText2Text(records=None)#

This Dataset contains Text2TextRecord records.

It allows you to export/import records into/from different formats, loop over the records, and access them by index.

Parameters:

records (Optional[List[Text2TextRecord]]) – A list of `Text2TextRecord`s.

Raises:

WrongRecordTypeError – When the record type in the provided list does not correspond to the dataset type.

Examples

>>> # Import/export records:
>>> import argilla as rg
>>> dataset = rg.DatasetForText2Text.from_pandas(my_dataframe)
>>> dataset.to_datasets()
>>>
>>> # Passing in a list of records:
>>> records = [
...     rg.Text2TextRecord(text="example"),
...     rg.Text2TextRecord(text="another example"),
... ]
>>> dataset = rg.DatasetForText2Text(records)
>>> assert len(dataset) == 2
>>>
>>> # Looping over the dataset:
>>> for record in dataset:
...     print(record)
>>>
>>> # Indexing into the dataset:
>>> dataset[0]
... rg.Text2TextRecord(text="example")
>>> dataset[0] = rg.Text2TextRecord(text="replaced example")
classmethod from_datasets(dataset, text=None, annotation=None, metadata=None, id=None)#

Imports records from a datasets.Dataset.

Columns that are not supported are ignored.

Parameters:
  • dataset (datasets.Dataset) – A datasets Dataset from which to import the records.

  • text (Optional[str]) – The field name used as record text. Default: None

  • annotation (Optional[str]) – The field name used as record annotation. Default: None

  • metadata (Optional[Union[str, List[str]]]) – The field name used as record metadata. Default: None

  • id (Optional[str]) – The field name used as record id. Default: None

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForText2Text

Examples

>>> import datasets
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "prediction": [["mi ejemplo", "ejemplo mio"]]
... })
>>> # or
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "prediction": [[{"text": "mi ejemplo", "score": 0.9}]]
... })
>>> DatasetForText2Text.from_datasets(ds)
classmethod from_pandas(dataframe)#

Imports records from a pandas.DataFrame.

Columns that are not supported are ignored.

Parameters:

dataframe (DataFrame) – A pandas DataFrame from which to import the records.

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForText2Text

class argilla.client.datasets.DatasetForTextClassification(records=None)#

This Dataset contains TextClassificationRecord records.

It allows you to export/import records into/from different formats, loop over the records, and access them by index.

Parameters:

records (Optional[List[TextClassificationRecord]]) – A list of `TextClassificationRecord`s.

Raises:

WrongRecordTypeError – When the record type in the provided list does not correspond to the dataset type.

Examples

>>> # Import/export records:
>>> import argilla as rg
>>> dataset = rg.DatasetForTextClassification.from_pandas(my_dataframe)
>>> dataset.to_datasets()
>>>
>>> # Looping over the dataset:
>>> for record in dataset:
...     print(record)
>>>
>>> # Passing in a list of records:
>>> records = [
...     rg.TextClassificationRecord(text="example"),
...     rg.TextClassificationRecord(text="another example"),
... ]
>>> dataset = rg.DatasetForTextClassification(records)
>>> assert len(dataset) == 2
>>>
>>> # Indexing into the dataset:
>>> dataset[0]
... rg.TextClassificationRecord(text="example")
>>> dataset[0] = rg.TextClassificationRecord(text="replaced example")
classmethod from_datasets(dataset, text=None, id=None, inputs=None, annotation=None, metadata=None)#

Imports records from a datasets.Dataset.

Columns that are not supported are ignored.

Parameters:
  • dataset (datasets.Dataset) – A datasets Dataset from which to import the records.

  • text (Optional[str]) – The field name used as record text. Default: None

  • id (Optional[str]) – The field name used as record id. Default: None

  • inputs (Optional[Union[str, List[str]]]) – A list of field names used for record inputs. Default: None

  • annotation (Optional[str]) – The field name used as record annotation. Default: None

  • metadata (Optional[Union[str, List[str]]]) – The field name used as record metadata. Default: None

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForTextClassification

Examples

>>> import datasets
>>> ds = datasets.Dataset.from_dict({
...     "inputs": ["example"],
...     "prediction": [
...         [{"label": "LABEL1", "score": 0.9}, {"label": "LABEL2", "score": 0.1}]
...     ]
... })
>>> DatasetForTextClassification.from_datasets(ds)
classmethod from_pandas(dataframe)#

Imports records from a pandas.DataFrame.

Columns that are not supported are ignored.

Parameters:

dataframe (DataFrame) – A pandas DataFrame from which to import the records.

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForTextClassification

class argilla.client.datasets.DatasetForTokenClassification(records=None)#

This Dataset contains TokenClassificationRecord records.

It allows you to export/import records into/from different formats, loop over the records, and access them by index.

Parameters:

records (Optional[List[TokenClassificationRecord]]) – A list of `TokenClassificationRecord`s.

Raises:

WrongRecordTypeError – When the record type in the provided list does not correspond to the dataset type.

Examples

>>> # Import/export records:
>>> import argilla as rg
>>> dataset = rg.DatasetForTokenClassification.from_pandas(my_dataframe)
>>> dataset.to_datasets()
>>>
>>> # Looping over the dataset:
>>> assert len(dataset) == 2
>>> for record in dataset:
...     print(record)
>>>
>>> # Passing in a list of records:
>>> import argilla as rg
>>> records = [
...     rg.TokenClassificationRecord(text="example", tokens=["example"]),
...     rg.TokenClassificationRecord(text="another example", tokens=["another", "example"]),
... ]
>>> dataset = rg.DatasetForTokenClassification(records)
>>>
>>> # Indexing into the dataset:
>>> dataset[0]
... rg.TokenClassificationRecord(text="example", tokens=["example"])
>>> dataset[0] = rg.TokenClassificationRecord(text="replace example", tokens=["replace", "example"])
classmethod from_datasets(dataset, text=None, id=None, tokens=None, tags=None, metadata=None)#

Imports records from a datasets.Dataset.

Columns that are not supported are ignored.

Parameters:
  • dataset (datasets.Dataset) – A datasets Dataset from which to import the records.

  • text (Optional[str]) – The field name used as record text. Default: None

  • id (Optional[str]) – The field name used as record id. Default: None

  • tokens (Optional[str]) – The field name used as record tokens. Default: None

  • tags (Optional[str]) – The field name used as record tags. Default: None

  • metadata (Optional[Union[str, List[str]]]) – The field name used as record metadata. Default: None

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForTokenClassification

Examples

>>> import datasets
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "tokens": [["my", "example"]],
...     "prediction": [
...         [{"label": "LABEL1", "start": 3, "end": 10, "score": 1.0}]
...     ]
... })
>>> DatasetForTokenClassification.from_datasets(ds)
classmethod from_pandas(dataframe)#

Imports records from a pandas.DataFrame.

Columns that are not supported are ignored.

Parameters:

dataframe (DataFrame) – A pandas DataFrame from which to import the records.

Returns:

The imported records in an argilla Dataset.

Return type:

DatasetForTokenClassification

argilla.client.datasets.read_datasets(dataset, task, **kwargs)#

Reads a datasets Dataset and returns an argilla Dataset.

Columns not supported by the Record instance corresponding with the task are ignored.

Parameters:
  • dataset (datasets.Dataset) – Dataset to be read in.

  • task (Union[str, TaskType]) – Task for the dataset, one of: ["TextClassification", "TokenClassification", "Text2Text"].

  • **kwargs – Passed on to the task-specific DatasetFor*.from_datasets() method.

Returns:

An argilla dataset for the given task.

Return type:

Union[DatasetForTextClassification, DatasetForTokenClassification, DatasetForText2Text]

Examples

>>> # Read text classification records from a datasets Dataset
>>> import datasets
>>> ds = datasets.Dataset.from_dict({
...     "inputs": ["example"],
...     "prediction": [
...         [{"label": "LABEL1", "score": 0.9}, {"label": "LABEL2", "score": 0.1}]
...     ]
... })
>>> read_datasets(ds, task="TextClassification")
>>>
>>> # Read token classification records from a datasets Dataset
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "tokens": [["my", "example"]],
...     "prediction": [
...         [{"label": "LABEL1", "start": 3, "end": 10}]
...     ]
... })
>>> read_datasets(ds, task="TokenClassification")
>>>
>>> # Read text2text records from a datasets Dataset
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "prediction": [["mi ejemplo", "ejemplo mio"]]
... })
>>> # or
>>> ds = datasets.Dataset.from_dict({
...     "text": ["my example"],
...     "prediction": [[{"text": "mi ejemplo", "score": 0.9}]]
... })
>>> read_datasets(ds, task="Text2Text")
argilla.client.datasets.read_pandas(dataframe, task)#

Reads a pandas DataFrame and returns an argilla Dataset.

Columns not supported by the Record instance corresponding with the task are ignored.

Parameters:
  • dataframe (DataFrame) – Dataframe to be read in.

  • task (Union[str, TaskType]) – Task for the dataset, one of: ["TextClassification", "TokenClassification", "Text2Text"]

Returns:

An argilla dataset for the given task.

Return type:

Union[DatasetForTextClassification, DatasetForTokenClassification, DatasetForText2Text]

Examples

>>> # Read text classification records from a pandas DataFrame
>>> import pandas as pd
>>> df = pd.DataFrame({
...     "inputs": ["example"],
...     "prediction": [
...         [("LABEL1", 0.9), ("LABEL2", 0.1)]
...     ]
... })
>>> read_pandas(df, task="TextClassification")
>>>
>>> # Read token classification records from a pandas DataFrame
>>> df = pd.DataFrame({
...     "text": ["my example"],
...     "tokens": [["my", "example"]],
...     "prediction": [
...         [("LABEL1", 3, 10)]
...     ]
... })
>>> read_pandas(df, task="TokenClassification")
>>>
>>> # Read text2text records from a pandas DataFrame
>>> df = pd.DataFrame({
...     "text": ["my example"],
...     "prediction": [["mi ejemplo", "ejemplo mio"]]
... })
>>> # or
>>> df = pd.DataFrame({
...     "text": ["my example"],
...     "prediction": [[("mi ejemplo", 0.9)]]
... })
>>> read_pandas(df, task="Text2Text")

FeedbackDataset#

class argilla.client.feedback.dataset.FeedbackDataset(*, fields, questions, guidelines=None)#

Class to work with `FeedbackDataset`s either locally, or remotely (Argilla or HuggingFace Hub).

Attributes:

guidelines#

contains the guidelines for annotating the dataset.

fields#

contains the fields that will define the schema of the records in the dataset.

questions#

contains the questions that will be used to annotate the dataset.

records#

contains the records of the dataset if any. Otherwise it is an empty list.

argilla_id#

contains the id of the dataset in Argilla, if it has been uploaded (via self.push_to_argilla()). Otherwise, it is None.

Type:

Optional[uuid.UUID]

Raises:
  • TypeError – if guidelines is not a string.

  • TypeError – if fields is not a list of FieldSchema.

  • ValueError – if fields does not contain at least one required field.

  • TypeError – if questions is not a list of TextQuestion, RatingQuestion, LabelQuestion, and/or MultiLabelQuestion.

  • ValueError – if questions does not contain at least one required question.

Examples

>>> import argilla as rg
>>> rg.init(api_url="...", api_key="...")
>>> dataset = rg.FeedbackDataset(
...     fields=[
...         rg.TextField(name="text", required=True),
...         rg.TextField(name="label", required=True),
...     ],
...     questions=[
...         rg.TextQuestion(
...             name="question-1",
...             description="This is the first question",
...             required=True,
...         ),
...         rg.RatingQuestion(
...             name="question-2",
...             description="This is the second question",
...             required=True,
...             values=[1, 2, 3, 4, 5],
...         ),
...         rg.LabelQuestion(
...             name="question-3",
...             description="This is the third question",
...             required=True,
...             labels=["positive", "negative"],
...         ),
...         rg.MultiLabelQuestion(
...             name="question-4",
...             description="This is the fourth question",
...             required=True,
...             labels=["category-1", "category-2", "category-3"],
...         ),
...     ],
...     guidelines="These are the annotation guidelines.",
... )
>>> dataset.add_records(
...     [
...         rg.FeedbackRecord(
...             fields={"text": "This is the first record", "label": "positive"},
...             responses=[{"values": {"question-1": {"value": "This is the first answer"}, "question-2": {"value": 5}, "question-3": {"value": "positive"}, "question-4": {"value": ["category-1"]}}}],
...             external_id="entry-1",
...         ),
...     ]
... )
>>> dataset.records
[FeedbackRecord(fields={"text": "This is the first record", "label": "positive"}, responses=[ResponseSchema(user_id=None, values={"question-1": ValueSchema(value="This is the first answer"), "question-2": ValueSchema(value=5), "question-3": ValueSchema(value="positive"), "question-4": ValueSchema(value=["category-1"])})], external_id="entry-1")]
>>> dataset.push_to_argilla(name="my-dataset", workspace="my-workspace")
>>> dataset.argilla_id
"..."
>>> dataset = rg.FeedbackDataset.from_argilla(argilla_id="...")
>>> dataset.records
[FeedbackRecord(fields={"text": "This is the first record", "label": "positive"}, responses=[ResponseSchema(user_id=None, values={"question-1": ValueSchema(value="This is the first answer"), "question-2": ValueSchema(value=5), "question-3": ValueSchema(value="positive"), "question-4": ValueSchema(value=["category-1"])})], external_id="entry-1")]
add_records(records)#

Adds the given records to the dataset and stores them locally. If you're planning to push those records to either Argilla or the HuggingFace Hub, make sure to call push_to_argilla or push_to_huggingface, respectively, after adding them.

Parameters:

records (Union[FeedbackRecord, Dict[str, Any], List[Union[FeedbackRecord, Dict[str, Any]]]]) – the records to add to the dataset. Can be a single record, a list of records, or a dictionary with the fields of the record.

Raises:
  • ValueError – if the given records are an empty list.

  • ValueError – if the given records are neither: FeedbackRecord, list of FeedbackRecord, list of dictionaries as a record, or dictionary as a record.

  • ValueError – if the given records do not match the expected schema.

Return type:

None

Examples

>>> import argilla as rg
>>> rg.init(api_url="...", api_key="...")
>>> dataset = rg.FeedbackDataset(
...     fields=[
...         rg.TextField(name="text", required=True),
...         rg.TextField(name="label", required=True),
...     ],
...     questions=[
...         rg.TextQuestion(
...             name="question-1",
...             description="This is the first question",
...             required=True,
...         ),
...         rg.RatingQuestion(
...             name="question-2",
...             description="This is the second question",
...             required=True,
...             values=[1, 2, 3, 4, 5],
...         ),
...         rg.LabelQuestion(
...             name="question-3",
...             description="This is the third question",
...             required=True,
...             labels=["positive", "negative"],
...         ),
...         rg.MultiLabelQuestion(
...             name="question-4",
...             description="This is the fourth question",
...             required=True,
...             labels=["category-1", "category-2", "category-3"],
...         ),
...     ],
...     guidelines="These are the annotation guidelines.",
... )
>>> dataset.add_records(
...     [
...         rg.FeedbackRecord(
...             fields={"text": "This is the first record", "label": "positive"},
...             responses=[{"values": {"question-1": {"value": "This is the first answer"}, "question-2": {"value": 5}, "question-3": {"value": "positive"}, "question-4": {"value": ["category-1"]}}}],
...             external_id="entry-1",
...         ),
...     ]
... )
>>> dataset.records
[FeedbackRecord(fields={"text": "This is the first record", "label": "positive"}, responses=[ResponseSchema(user_id=None, values={"question-1": ValueSchema(value="This is the first answer"), "question-2": ValueSchema(value=5), "question-3": ValueSchema(value="positive"), "question-4": ValueSchema(value=["category-1"])})], external_id="entry-1")]
fetch_records()#

Fetches the records from Argilla or HuggingFace and stores them locally.

If the dataset has not been saved in Argilla or HuggingFace, a warning will be raised and the current records will be returned instead.

Return type:

None

property fields: List[TextField]#

Returns the fields that define the schema of the records in the dataset.

format_as(format)#

Formats the FeedbackDataset as a datasets.Dataset object.

Parameters:

format (Literal['datasets']) – the format to use to format the FeedbackDataset. Currently supported formats are: datasets.

Returns:

The FeedbackDataset.records formatted as a datasets.Dataset object.

Raises:

ValueError – if the provided format is not supported.

Return type:

Dataset

Examples

>>> import argilla as rg
>>> rg.init(...)
>>> dataset = rg.FeedbackDataset.from_argilla(name="my-dataset")
>>> huggingface_dataset = dataset.format_as("datasets")
classmethod from_argilla(name=None, *, workspace=None, id=None, with_records=True)#

Retrieves an existing FeedbackDataset from Argilla (must have been pushed in advance).

Note that even though no single argument is mandatory, you must provide either the name, the combination of name and workspace, or the id; otherwise an error will be raised.

Parameters:
  • name (Optional[str]) – the name of the FeedbackDataset to retrieve from Argilla. Defaults to None.

  • workspace (Optional[str]) – the workspace of the FeedbackDataset to retrieve from Argilla. If not provided, the active workspace will be used.

  • id (Optional[str]) – the ID of the FeedbackDataset to retrieve from Argilla. Defaults to None.

  • with_records (bool) – whether to retrieve the records of the FeedbackDataset from Argilla. Defaults to True.

Returns:

The FeedbackDataset retrieved from Argilla.

Raises:
  • ValueError – if no FeedbackDataset with the provided name and workspace exists in Argilla.

  • ValueError – if no FeedbackDataset with the provided id exists in Argilla.

Return type:

FeedbackDataset

Examples

>>> import argilla as rg
>>> rg.init(...)
>>> dataset = rg.FeedbackDataset.from_argilla(name="my_dataset")
property guidelines: str#

Returns the guidelines for annotating the dataset.

iter(batch_size=250)#

Returns an iterator over the records in the dataset.

Parameters:

batch_size (Optional[int]) – the size of the batches to return. Defaults to 250.

Return type:

Iterator[List[FeedbackRecord]]
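The paging behavior can be sketched in plain Python (dicts stand in for FeedbackRecord objects, and iter_batches is a hypothetical stand-in for the method's internals; no Argilla server is needed):

```python
def iter_batches(records, batch_size=250):
    """Yield successive fixed-size batches of records, mirroring
    how FeedbackDataset.iter pages over the local records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# 600 toy records split into batches of 250, 250, and 100
records = [{"external_id": f"entry-{i}"} for i in range(600)]
batches = list(iter_batches(records, batch_size=250))
```

The last batch is simply shorter when the record count is not a multiple of batch_size.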

push_to_argilla(name=None, workspace=None, show_progress=False)#

Pushes the FeedbackDataset to Argilla. If the dataset has been previously pushed to Argilla, it will be updated with the new records.

Note that you may need to rg.init(...) with your Argilla credentials before calling this function; otherwise, the default http://localhost:6900 will be used, which will fail if Argilla is not deployed locally.

Parameters:
  • name (Optional[str]) – the name of the dataset to push to Argilla. If not provided, the argilla_id will be used if the dataset has been previously pushed to Argilla.

  • workspace (Optional[Union[str, Workspace]]) – the workspace to push the dataset to. If not provided, the active workspace will be used.

  • show_progress (bool) – whether to show a tqdm progress bar while looping over the records.

Return type:

None

property questions: List[Union[TextQuestion, RatingQuestion, LabelQuestion, MultiLabelQuestion, RankingQuestion]]#

Returns the questions that will be used to annotate the dataset.

property records: List[FeedbackRecord]#

Returns all the records in the dataset.

unify_responses(question, strategy)#

The unify_responses function takes a question and a strategy as input and applies the strategy to unify the responses for that question.

Parameters:
  • question (Union[str, LabelQuestion, MultiLabelQuestion, RatingQuestion]) – either a string with the name of the question, or an instance of one of the question classes (LabelQuestion, MultiLabelQuestion, RatingQuestion, RankingQuestion).

  • strategy (Union[str, LabelQuestionStrategy, MultiLabelQuestionStrategy, RatingQuestionStrategy, RankingQuestionStrategy]) – the strategy to use to unify the responses for the given question; either a string naming the strategy or an instance of a strategy class.

Return type:

None
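To build intuition for what a "majority" strategy does, here is a hand-rolled sketch of majority voting over several annotators' answers to a label question (an illustration only, not the actual LabelQuestionStrategy implementation):

```python
from collections import Counter

def unify_majority(answers):
    """Return the most frequent answer; on ties, the answer
    encountered first wins (Counter preserves insertion order)."""
    return Counter(answers).most_common(1)[0][0]

# Three annotators answered the same label question
unified = unify_majority(["positive", "positive", "negative"])  # "positive"
```

The real strategies operate on ResponseSchema objects pulled from the dataset, but the core idea of collapsing many responses into one unified value is the same.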

class argilla.client.feedback.schemas.FeedbackRecord(*, id=None, fields, metadata=None, responses=None, suggestions=None, external_id=None)#

Schema for the records of a FeedbackDataset in Argilla.

Parameters:
  • id (Optional[UUID]) – The ID of the record in Argilla. Defaults to None, and is filled in automatically once the record is pushed to Argilla.

  • fields (Dict[str, str]) – Fields that match the fields defined in the FeedbackDataset. This attribute contains the actual information shown in the UI for each record, i.e. the record itself.

  • metadata (Dict[str, Any]) – Metadata included to enrich the information for a given record. Note that the metadata is not shown in the UI, so you will only be able to see it programmatically after pulling the records. Defaults to None.

  • responses (List[ResponseSchema]) – Responses given by either the current user, or one or a collection of users that must exist in Argilla. Each response corresponds to one of the FeedbackDataset questions, so the values should match the question type. Defaults to None.

  • external_id (Optional[str]) – The external ID of the record, which the user can specify to identify the record no matter what the Argilla ID is. Defaults to None.

  • suggestions (Union[Tuple[SuggestionSchema], List[SuggestionSchema]]) –

Examples

>>> from argilla.client.feedback.schemas import FeedbackRecord, ResponseSchema, SuggestionSchema, ValueSchema
>>> FeedbackRecord(
...     fields={"text": "This is the first record", "label": "positive"},
...     metadata={"first": True, "nested": {"more": "stuff"}},
...     responses=[ # optional
...         ResponseSchema(
...             user_id="user-1",
...             values={
...                 "question-1": ValueSchema(value="This is the first answer"),
...                 "question-2": ValueSchema(value=5),
...             },
...             status="submitted",
...         ),
...     ],
...     suggestions=[ # optional
...         SuggestionSchema(
...            question_name="question-1",
...            type="model",
...            score=0.9,
...            value="This is the first suggestion",
...            agent="agent-1",
...         ),
...     ],
...     external_id="entry-1",
... )
class argilla.client.feedback.schemas.FieldSchema(*, id=None, name, title=None, required=True, type=None, settings=None)#

A field schema for a feedback dataset.

Parameters:
  • name (str) – The name of the field.

  • title (Optional[str]) – The title of the field. Defaults to None.

  • required (bool) – Whether the field is required or not. Defaults to True.

  • id (Optional[UUID]) –

  • type (Optional[Literal['text']]) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> field = rg.FieldSchema(
...     name="text",
...     title="Human prompt",
...     required=True
... )
class argilla.client.feedback.schemas.LabelQuestion(*, id=None, name, title=None, description=None, required=True, type='label_selection', settings=None, labels, visible_labels=20)#

A label question schema for a feedback dataset.

Parameters:
  • name (str) – The name of the question.

  • title (Optional[str]) – The title of the question. Defaults to None.

  • description (Optional[str]) – The description of the question. Defaults to None.

  • required (bool) – Whether the question is required or not. Defaults to True.

  • labels (Union[Dict[str, str], conlist(str)]) – The labels of the label question.

  • visible_labels (conint(ge=3)) – The number of visible labels of the label question. Defaults to 20. visible_labels=None implies that ALL the labels will be shown by default, which is not recommended if labels > 20.

  • id (Optional[UUID]) –

  • type (Literal['label_selection']) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> question = rg.LabelQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     labels=["Yes", "No"],
...     visible_labels=None
... )
>>> # or use a dict
>>> question = rg.LabelQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     labels={"yes": "Yes", "no": "No"},
...     visible_labels=None
... )
class argilla.client.feedback.schemas.MultiLabelQuestion(*, id=None, name, title=None, description=None, required=True, type='multi_label_selection', settings=None, labels, visible_labels=20)#

A multi label question schema for a feedback dataset.

Parameters:
  • name (str) – The name of the question.

  • title (Optional[str]) – The title of the question. Defaults to None.

  • description (Optional[str]) – The description of the question. Defaults to None.

  • required (bool) – Whether the question is required or not. Defaults to True.

  • labels (Union[Dict[str, str], conlist(str)]) – The labels of the multi label question.

  • visible_labels (conint(ge=3)) – The number of visible labels of the multi label question. Defaults to 20. visible_labels=None implies that ALL the labels will be shown by default, which is not recommended if labels > 20.

  • id (Optional[UUID]) –

  • type (Literal['multi_label_selection']) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> question = rg.MultiLabelQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     labels=["Yes", "No"],
...     visible_labels=None
... )
>>> # or use a dict
>>> question = rg.MultiLabelQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     labels={"yes": "Yes", "no": "No"},
...     visible_labels=None
... )
class argilla.client.feedback.schemas.QuestionSchema(*, id=None, name, title=None, description=None, required=True, type=None, settings=None)#

A question schema for a feedback dataset.

Parameters:
  • name (str) – The name of the question.

  • title (Optional[str]) – The title of the question. Defaults to None.

  • description (Optional[str]) – The description of the question. Defaults to None.

  • required (bool) – Whether the question is required or not. Defaults to True.

  • id (Optional[UUID]) –

  • type (Optional[Literal['text', 'rating', 'label_selection', 'multi_label_selection', 'ranking']]) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> question = rg.QuestionSchema(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True
... )
class argilla.client.feedback.schemas.RankingQuestion(*, id=None, name, title=None, description=None, required=True, type='ranking', settings=None, values)#

Schema for the RankingQuestion question-type.

Parameters:
  • settings (Dict[str, Any]) – The settings for the question, including the type and options.

  • values (Union[ConstrainedListValue[str], Dict[str, str]]) – The values for the question, to be formatted and included as part of the settings.

  • id (Optional[UUID]) –

  • name (str) –

  • title (Optional[str]) –

  • description (Optional[str]) –

  • required (bool) –

  • type (Literal['ranking']) –

Examples

>>> import argilla as rg
>>> question = rg.RankingQuestion(
...     values=["Yes", "No"]
... )
RankingQuestion(
    settings={
        'type': 'ranking',
        'options': [
            {'value': 'Yes', 'text': 'Yes'},
            {'value': 'No', 'text': 'No'}
        ]
    },
    values=['Yes', 'No']
)
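The expansion from values to the settings payload shown in the repr above can be sketched with a small helper (format_ranking_settings is a hypothetical name; the real formatting happens inside the schema's validators):

```python
def format_ranking_settings(values):
    """Expand values into ranking settings: each value becomes a
    {value, text} option; a dict supplies explicit display text."""
    if isinstance(values, dict):
        options = [{"value": v, "text": t} for v, t in values.items()]
    else:
        options = [{"value": v, "text": v} for v in values]
    return {"type": "ranking", "options": options}

settings = format_ranking_settings(["Yes", "No"])
# {'type': 'ranking', 'options': [{'value': 'Yes', 'text': 'Yes'},
#                                 {'value': 'No', 'text': 'No'}]}
```

Passing a dict such as {"yes": "Yes"} lets the stored value differ from the text shown to annotators, mirroring the dict form accepted by the label questions above.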
class argilla.client.feedback.schemas.RatingQuestion(*, id=None, name, title=None, description=None, required=True, type='rating', settings=None, values)#

A rating question schema for a feedback dataset.

Parameters:
  • name (str) – The name of the question.

  • title (Optional[str]) – The title of the question. Defaults to None.

  • description (Optional[str]) – The description of the question. Defaults to None.

  • required (bool) – Whether the question is required or not. Defaults to True.

  • values (List[int]) – The values of the rating question.

  • id (Optional[UUID]) –

  • type (Literal['rating']) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> question = rg.RatingQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     values=[1, 2, 3, 4, 5]
... )
class argilla.client.feedback.schemas.TextField(*, id=None, name, title=None, required=True, type='text', settings=None, use_markdown=False)#

A text field schema for a feedback dataset.

Parameters:
  • name (str) – The name of the field.

  • title (Optional[str]) – The title of the field. Defaults to None.

  • required (bool) – Whether the field is required or not. Defaults to True.

  • use_markdown (bool) – Whether the field should use markdown or not. Defaults to False.

  • id (Optional[UUID]) –

  • type (Literal['text']) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> field = rg.TextField(
...     name="text",
...     title="Human prompt",
...     required=True,
...     use_markdown=True
... )
class argilla.client.feedback.schemas.TextQuestion(*, id=None, name, title=None, description=None, required=True, type='text', settings=None, use_markdown=False)#

A text question schema for a feedback dataset.

Parameters:
  • name (str) – The name of the question.

  • title (Optional[str]) – The title of the question. Defaults to None.

  • description (Optional[str]) – The description of the question. Defaults to None.

  • required (bool) – Whether the question is required or not. Defaults to True.

  • use_markdown (bool) – Whether the question should use markdown or not. Defaults to False.

  • id (Optional[UUID]) –

  • type (Literal['text']) –

  • settings (Dict[str, Any]) –

Examples

>>> import argilla as rg
>>> question = rg.TextQuestion(
...     name="relevant",
...     title="Is the response relevant for the given prompt?",
...     description="Select all that apply",
...     required=True,
...     use_markdown=True
... )