Monitoring#

Here we describe the available monitors in Argilla:

  • Base Monitor: Internal mechanism to queue and log monitored predictions

  • ArgillaLogHTTPMiddleware: ASGI middleware to monitor API endpoints

  • Framework Monitors: Monitors to wrap around common NLP inference frameworks

Base Monitor#

class argilla.monitoring.base.BaseMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

A base monitor class for easy task model monitoring

dataset:

The name of the Argilla dataset to log predictions into

sample_rate:

The portion of processed data to log to Argilla. Defaults to 1.0

is_record_accepted()#

Return True if a record should be logged to Argilla

Return type

bool

shutdown()#

Stop consumers
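
Conceptually, is_record_accepted applies the configured sample_rate per record. The following is a rough sketch of that behavior, assuming uniform random sampling; it is illustrative only, not the actual implementation:

    import random

    def is_record_accepted(sample_rate: float) -> bool:
        # Keep roughly `sample_rate` of all processed records;
        # with sample_rate=1.0 every record is logged to Argilla.
        return random.uniform(0.0, 1.0) <= sample_rate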

class argilla.monitoring.base.DatasetRecordsConsumer(name, api, tags=None, metadata=None, buffer_size=10000, upload_size=256, upload_interval=1.0, retries=10, timeout=15, on_error=None)#

Consumes the records from the dataset queue.

Parameters
  • name (str) –

  • api (argilla.client.client.Argilla) –

  • tags (Optional[dict]) –

  • metadata (Optional[dict]) –

  • buffer_size (int) –

log_next_batch()#

Upload the next batch of items, return whether successful.

pause()#

Pause the consumer.

run()#

Runs the consumer.

send(records)#

Send records to the consumer

Parameters

records (Iterable[Union[argilla.client.models.TextClassificationRecord, argilla.client.models.TokenClassificationRecord, argilla.client.models.Text2TextRecord, argilla.client.models.TextGenerationRecord]]) –

exception argilla.monitoring.base.ModelNotSupportedError#

ArgillaLogHTTPMiddleware#
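
ArgillaLogHTTPMiddleware is an ASGI middleware that logs the requests and responses of an API endpoint to an Argilla dataset. The snippet below is a minimal sketch of attaching it to a FastAPI app; the endpoint path, dataset name, and the use of text_classification_mapper as records_mapper are illustrative assumptions, so check the exact signature for your Argilla version:

    from fastapi import FastAPI
    from argilla.monitoring.asgi import (
        ArgillaLogHTTPMiddleware,
        text_classification_mapper,  # maps (request, response) pairs to Argilla records
    )

    app = FastAPI()

    app.add_middleware(
        ArgillaLogHTTPMiddleware,
        api_endpoint="/sentiment",                 # endpoint to monitor (assumed path)
        dataset="monitoring_text_classification",  # Argilla dataset to log into (assumed name)
        records_mapper=lambda req, resp: text_classification_mapper(req, resp),
    )

    @app.post("/sentiment")
    def predict(text: str):
        # Model inference goes here; the middleware logs the request/response
        # pair to Argilla in the background without blocking the endpoint.
        return {"labels": ["POSITIVE"], "scores": [0.9]}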

Framework Monitors#

argilla.monitoring.model_monitor.monitor(task_model, dataset, sample_rate=0.3, agent=None, log_interval=5)#

Automatically monitor (i.e., log) data fed through transformers pipelines, spaCy models, or Flair taggers.

Parameters
  • task_model (Union[Language, Pipeline, SequenceTagger]) – The spaCy Language, transformers Pipeline, or Flair SequenceTagger to monitor.

  • dataset (str) – The Argilla dataset to log data into.

  • sample_rate (float, optional) – The portion of processed data to log. Defaults to 0.3.

  • agent (Optional[str], optional) – The name of the logging agent. Defaults to None.

  • log_interval (float, optional) – The interval between uploads, in seconds. Defaults to 5.

Returns

The monitor that acts equivalently to the input task_model.

Return type

Union[BaseMonitor, Language, Pipeline, SequenceTagger]
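
For example, a transformers pipeline can be wrapped with monitor() and then used exactly as before. This is a minimal sketch, with the pipeline task and dataset name chosen for illustration:

    from transformers import pipeline
    from argilla.monitoring.model_monitor import monitor

    nlp = pipeline("sentiment-analysis")

    # The wrapper behaves like the original pipeline, but logs a sample
    # of its predictions to the given Argilla dataset in the background.
    nlp = monitor(nlp, dataset="text_classification_monitoring", sample_rate=0.5)

    nlp("I love the documentation of this library!")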

Transformers Monitor#

class argilla.monitoring._transformers.HuggingFaceMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

class argilla.monitoring._transformers.TextClassificationMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

Configures monitoring over Hugging Face text classification pipelines

class argilla.monitoring._transformers.ZeroShotMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#
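
A sketch of monitoring a Hugging Face zero-shot classification pipeline; the model name, candidate labels, and dataset name are assumptions:

    from transformers import pipeline
    from argilla.monitoring.model_monitor import monitor

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    classifier = monitor(classifier, dataset="zeroshot_monitoring", sample_rate=1.0)

    # Predictions are returned as usual and logged to Argilla in the background.
    classifier(
        "The concert last night was amazing!",
        candidate_labels=["politics", "sports", "music"],
    )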

spaCy Monitor#

class argilla.monitoring._spacy.SpacyNERMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#

A spaCy Language wrapper for NER monitoring in Argilla

static doc2token_classification(doc, agent, metadata)#

Converts a spaCy Doc into a token classification record

Parameters
  • doc (argilla.monitoring.types.MissingType) – The spaCy Doc to convert.

  • agent (str) – Agent to use for the prediction_agent field. This could be the model path, or the model language plus the model version.

  • metadata (Optional[Dict[str, Any]]) – Passed on to the argilla.TokenClassificationRecord.

Return type

argilla.client.models.TokenClassificationRecord
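
A minimal sketch of monitoring a spaCy NER pipeline; the model name and dataset name are assumptions:

    import spacy
    from argilla.monitoring.model_monitor import monitor

    nlp = spacy.load("en_core_web_sm")
    nlp = monitor(nlp, dataset="spacy_ner_monitoring", sample_rate=1.0)

    # Each processed Doc is converted to a TokenClassificationRecord and logged.
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")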

Flair Monitor#

class argilla.monitoring._flair.FlairMonitor(*args, api, dataset, sample_rate=1.0, log_interval=1.0, agent=None, tags=None, **kwargs)#
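
A sketch of monitoring a Flair SequenceTagger; the tagger checkpoint and dataset name are assumptions:

    from flair.data import Sentence
    from flair.models import SequenceTagger
    from argilla.monitoring.model_monitor import monitor

    tagger = SequenceTagger.load("flair/ner-english")
    tagger = monitor(tagger, dataset="flair_ner_monitoring", sample_rate=1.0)

    sentence = Sentence("George Washington went to Washington.")
    tagger.predict(sentence)  # predictions are logged to Argilla in the background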