Here we describe the available monitors in Argilla:

Base Monitor#

class argilla.monitoring.base.BaseMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

A base monitor class for easy task model monitoring.



The Argilla dataset name.


The portion of the data to be stored in Argilla. Defaults to 0.2.


Returns True if a record should be logged to Argilla.

Return type:

bool

Stops the consumers.
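The sample_rate behavior above can be illustrated with a small sketch. This is a hypothetical stand-alone version of the per-record sampling check, not Argilla's actual implementation; the function name is an assumption:

```python
import random
from typing import Optional


def should_log_record(sample_rate: float, rng: Optional[random.Random] = None) -> bool:
    """Hypothetical sketch of a monitor's per-record sampling check.

    A rate of 1.0 logs every record, 0.0 logs none: draw a uniform
    value in [0.0, 1.0) and log only if it falls below the rate.
    """
    value = (rng or random).random()  # uniform in [0.0, 1.0)
    return value < sample_rate
```

With `sample_rate=1.0` every draw passes the check, which matches the signature default of logging everything.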

class argilla.monitoring.base.DatasetRecordsConsumer(name, api, tags=None, metadata=None, buffer_size=10000, upload_size=256, upload_interval=1.0, retries=10, timeout=15, on_error=None)#

Consumes the records from the dataset queue.

  • name (str) –

  • api (Argilla) –

  • tags (Optional[dict]) –

  • metadata (Optional[dict]) –

  • buffer_size (int) –


Uploads the next batch of items and returns whether the upload was successful.


Pauses the consumer.


Runs the consumer.


Sends records to the consumer.


records (Iterable[Union[TextClassificationRecord, TokenClassificationRecord, Text2TextRecord, TextGenerationRecord]]) –
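The buffering scheme implied by the buffer_size and upload_size parameters can be sketched as a simplified, synchronous consumer. The class and its policy here are assumptions for illustration; the real DatasetRecordsConsumer runs on a background queue with retries and timeouts:

```python
from typing import Any, Callable, Iterable, List


class SimpleRecordsBuffer:
    """Hypothetical sketch of a records consumer: records accumulate in
    a bounded buffer and are flushed upstream in `upload_size` batches."""

    def __init__(self, upload: Callable[[List[Any]], None],
                 buffer_size: int = 10000, upload_size: int = 256):
        self._upload = upload          # callable that receives each batch
        self._buffer: List[Any] = []
        self._buffer_size = buffer_size
        self._upload_size = upload_size

    def send(self, records: Iterable[Any]) -> None:
        """Queue records, flushing whenever a full batch is ready."""
        for record in records:
            if len(self._buffer) >= self._buffer_size:
                self.flush()  # simplification: the real consumer may block or drop
            self._buffer.append(record)
            if len(self._buffer) >= self._upload_size:
                self.flush()

    def flush(self) -> None:
        """Upload whatever is currently buffered, if anything."""
        if self._buffer:
            batch, self._buffer = self._buffer, []
            self._upload(batch)
```

Batching like this trades a little latency (up to one upload_interval in the real consumer) for far fewer HTTP requests to the Argilla server.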

exception argilla.monitoring.base.ModelNotSupportedError#


class argilla.monitoring.asgi.ArgillaLogHTTPMiddleware(api_endpoint, dataset, records_mapper, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, *args, **kwargs)#

A standard Starlette middleware that enables Argilla logging for HTTP prediction requests.

class argilla.monitoring.asgi.CachedJsonRequest(scope, receive=<function empty_receive>, send=<function empty_send>)#

We must keep a cached version of incoming requests, since the request body cannot be read directly from middleware. See <> for more information.

TODO Remove usage of CachedRequest when is released

  • scope (MutableMapping[str, Any]) –

  • receive (Callable[[], Awaitable[MutableMapping[str, Any]]]) –

  • send (Callable[[MutableMapping[str, Any]], Awaitable[None]]) –

Framework Monitors#

argilla.monitoring.model_monitor.monitor(task_model, dataset, sample_rate=0.3, agent=None, log_interval=5)#

Automatically monitor (i.e., log) data fed through Transformers pipelines, spaCy models, or Flair taggers.

  • task_model (Union[Language, Pipeline, SequenceTagger]) – The spaCy Language, Transformers Pipeline, or Flair SequenceTagger.

  • dataset (str) – The Argilla dataset to log data into.

  • sample_rate (float, optional) – The portion of processed data to log. Defaults to 0.3.

  • agent (Optional[str], optional) – The name of the logging agent. Defaults to None.

  • log_interval (float, optional) – The upload interval in seconds. Defaults to 5.


Returns:

The monitor, which acts equivalently to the input task_model.

Return type:

Union[BaseMonitor, Language, Pipeline, SequenceTagger]
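The returned monitor can act equivalently to the input task_model because calls and attribute accesses are delegated to the wrapped object. A minimal sketch of that wrapper pattern (hypothetical names; not Argilla's actual BaseMonitor):

```python
class ModelProxy:
    """Hypothetical sketch: wrap a model, record each call, and
    otherwise behave exactly like the wrapped object."""

    def __init__(self, model):
        self._model = model
        self.calls = []  # stand-in for sampling + queueing to an Argilla dataset

    def __call__(self, *args, **kwargs):
        result = self._model(*args, **kwargs)
        self.calls.append((args, kwargs, result))  # log input/output pair
        return result

    def __getattr__(self, name):
        # Only reached for attributes not found on the proxy itself,
        # so everything else is delegated to the wrapped model.
        return getattr(self._model, name)
```

Because `__getattr__` forwards unknown attributes, existing code can keep calling the model's own methods on the proxy without changes.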

Transformers Monitor#

class argilla.monitoring._transformers.HuggingFaceMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

class argilla.monitoring._transformers.TextClassificationMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

Configures monitoring for Hugging Face text classification pipelines.

class argilla.monitoring._transformers.ZeroShotMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

spaCy Monitor#

class argilla.monitoring._spacy.SpacyNERMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

A spaCy Language wrapper for NER monitoring in Argilla.

static doc2token_classification(doc, agent, metadata)#

Converts a spaCy Doc into a token classification record.

  • doc (MissingType) – The spaCy Doc.

  • agent (str) – Agent to use for the prediction_agent field. Could be the model path, or model lang + model version.

  • metadata (Optional[Dict[str, Any]]) – Passed on to argilla.TokenClassificationRecord.

Return type:

TokenClassificationRecord
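The Doc-to-record conversion can be illustrated without spaCy: given the text, its tokens, and predicted entity spans, build the fields a token classification record needs. Field names below mirror the description above but are otherwise assumptions, and a plain dict stands in for argilla.TokenClassificationRecord:

```python
from typing import Any, Dict, List, Optional, Tuple


def doc_to_record(text: str,
                  tokens: List[str],
                  entities: List[Tuple[int, int, str]],
                  agent: str,
                  metadata: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """Hypothetical sketch of doc2token_classification: package the
    text, its tokens, and (start, end, label) character spans into a
    record-like dict with a prediction_agent field."""
    return {
        "text": text,
        "tokens": tokens,
        # Predictions as (label, char_start, char_end) triples.
        "prediction": [(label, start, end) for start, end, label in entities],
        "prediction_agent": agent,
        "metadata": metadata or {},
    }
```

In the real helper the entity spans come from `doc.ents` and the agent is typically the model path or the model's lang plus version.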
Flair Monitor#

class argilla.monitoring._flair.FlairMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#