👮 Weak Supervision#

This guide gives you a brief introduction to weak supervision with Argilla.

Argilla currently supports weak supervision for multi-class and multi-label text classification use cases. Support for token classification (e.g., Named Entity Recognition) will be added soon.

Labeling workflow

Argilla weak supervision in a nutshell#

The recommended workflow for weak supervision is:

  • Log an unlabelled dataset into Argilla

  • Use the Annotate mode for hand- and/or bulk-labelling a validation set. This validation set is key to measuring the quality and performance of your rules. Additionally, you need to build a test set that is not used for defining rules. This test set will be used to measure the performance of your end model, as with any other supervised model.

  • Use the Define rules mode for evaluating and defining rules. Rules are defined with search queries (using ES query string DSL). Additionally, you can use the Python client methods to add, delete, or modify rules programmatically, making them available for refinement in the UI.

  • Use the Python client for reading rules, defining additional rules if needed, and training a label model (for building a training set) or a downstream model (for building an end classifier).

The next sections cover the main components of this workflow.

Weak labeling using the UI#

Since version 0.8.0 you can find and define rules directly in the UI. The Define rules mode is found in the right sidebar of the Dataset page.

Weak supervision from Python#

Doing weak supervision with Argilla is straightforward. In keeping with the spirit of the rest of the library, you can use any weak supervision library or method, such as Snorkel or FlyingSquid.

Argilla weak supervision support is built around two basic abstractions:

Rule#

A rule encodes a heuristic for labeling a record.

Heuristics can be defined using Elasticsearch queries:

from argilla.labeling.text_classification import Rule

plz = Rule(query="plz OR please", label="SPAM")

or with Python functions (similar to Snorkel’s labeling functions, which you can use as well):

from typing import Optional
import argilla as rg

def contains_http(record: rg.TextClassificationRecord) -> Optional[str]:
    if "http" in record.inputs["text"]:
        return "SPAM"

Besides textual features, Python labeling functions can exploit metadata features:

def author_channel(record: rg.TextClassificationRecord) -> Optional[str]:
    # the word channel appears in the comment author name
    if "channel" in record.metadata["author"]:
        return "SPAM"

A rule should either return a string value (the weak label) or None to abstain.

These rules can be defined and used in three ways:

  1. Rules can be defined with the no-code feature of the UI (see the Define rules mode reference).

  2. Rule objects can be created using Python as shown above. These objects can either be applied locally by developers (which can be useful for testing without overloading the server) or added to the dataset in the Argilla server, making these rules available from the UI (see the sketch after this list).

  3. Python functions cannot be defined with the no-code feature; they can only be applied locally and cannot be added to the dataset in the Argilla server. Data teams can use these Python labelling functions to add extra heuristics before building a weakly labelled dataset. These functions should be used for heuristics that cannot be expressed as ES queries.
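
For instance, here is a minimal sketch of options 2 and 3, reusing the plz rule and the contains_http function defined above (the dataset name is the one used in the example later in this guide):

from argilla.labeling.text_classification import WeakLabels, add_rules

# option 2: add the Rule object defined above to the dataset on the Argilla server
add_rules(dataset="weak_supervision_yt", rules=[plz])

# option 3: apply a Python labeling function locally; it is never sent to the server
weak_labels = WeakLabels(dataset="weak_supervision_yt", rules=[contains_http])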

Weak Labels#

A WeakLabels object bundles and applies a set of rules to the records of an Argilla dataset. Applying a rule to a record means either assigning a weak label or abstaining.

This abstraction provides you with the building blocks for training and testing weak supervision “denoising”, “label” or even “end” models:

rules = [contains_http, author_channel]
weak_labels = WeakLabels(
    rules=rules,
    dataset="weak_supervision_yt"
)

# returns a summary of the applied rules
weak_labels.summary()

More information about these abstractions can be found in the Python Labeling module docs.

Built-in label models#

To make things even easier for you, we provide wrapper classes around the most common label models, which directly consume a WeakLabels object. This makes working with those models a breeze. Take a look at the list of built-in models in the labeling module docs.
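
For example, the simplest built-in model, the MajorityVoter used later in this guide, is created directly from the weak_labels object built above (a minimal sketch):

from argilla.labeling.text_classification import MajorityVoter

# a label model that needs no fitting: it simply takes the majority vote of the rules
majority_model = MajorityVoter(weak_labels)
print(majority_model.score(output_str=True))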

Detailed Workflow#

A typical workflow to use weak supervision is:

  1. Create an Argilla dataset with your raw data. If you already have some labelled data, you can log it into the same dataset.

  2. Define a set of weak labeling rules with the Define rules mode in the UI or with the Python client add_rules method.

  3. Create a WeakLabels object and apply the rules using the Python client. You can load the rules from your dataset and add additional rules and labeling functions using Python. Typically, you’ll iterate between this step and step 2.

  4. Once you are satisfied with your weak labels, use the matrix of the WeakLabels instance with your library/method of choice to build a training set, or even train a downstream text classification model (see the sketch after this list).
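
For step 4, the main building blocks are the weak label matrix and the gold annotations held by the WeakLabels instance; a minimal sketch, assuming a weak_labels object like the one built further down in this guide:

# weak label matrix of the records without annotations (one column per rule)
train_matrix = weak_labels.matrix(has_annotation=False)

# annotations of the hand-labelled records, encoded as integers
test_annotations = weak_labels.annotation()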

This guide shows you an end-to-end example using Snorkel, FlyingSquid and Weasel. Let’s get started!

Example dataset#

We’ll be using a well-known dataset for weak supervision examples, the YouTube Spam Collection dataset, which poses a binary classification task: detecting spam comments in YouTube videos.

[1]:
import pandas as pd

# load data
train_df = pd.read_csv("../../tutorials/notebooks/data/yt_comments_train.csv")
test_df = pd.read_csv("../../tutorials/notebooks/data/yt_comments_test.csv")

# preview data
train_df.head()

[1]:
Unnamed: 0 author date text label video
0 0 Alessandro leite 2014-11-05T22:21:36 pls http://www10.vakinha.com.br/VaquinhaE.aspx... -1.0 1
1 1 Salim Tayara 2014-11-02T14:33:30 if your like drones, plz subscribe to Kamal Ta... -1.0 1
2 2 Phuc Ly 2014-01-20T15:27:47 go here to check the views :3 -1.0 1
3 3 DropShotSk8r 2014-01-19T04:27:18 Came here to check the views, goodbye. -1.0 1
4 4 css403 2014-11-07T14:25:48 i am 2,126,492,636 viewer :D -1.0 1

1. Create an Argilla dataset with unlabelled data and test data#

Let’s load the train (unlabelled) and the test (labelled) datasets.

[ ]:
import argilla as rg

# build records from the train dataset
records = [
    rg.TextClassificationRecord(
        text=row.text, metadata={"video": row.video, "author": row.author}
    )
    for i, row in train_df.iterrows()
]

# build records from the test dataset with annotation
labels = ["HAM", "SPAM"]
records += [
    rg.TextClassificationRecord(
        text=row.text,
        annotation=labels[row.label],
        metadata={"video": row.video, "author": row.author},
    )
    for i, row in test_df.iterrows()
]

# log records to Argilla
rg.log(records, name="weak_supervision_yt")

After this step, you have a fully browsable dataset available that you can access via the Argilla web app.

2. Define and manage rules#

Let’s now define some of the rules proposed in the tutorial Snorkel Intro Tutorial: Data Labeling.

Rules in Argilla can be defined and used in several ways. In particular: (1) using the UI, (2) using the Python client to add rules to the server, and (3) using the Python client to define additional rules locally, either as Python functions or as Rule objects.

Define rules using the UI#

Rules can be defined directly in our web app, using the Define rules mode and Elasticsearch’s query strings.

Afterward, you can conveniently load them into your notebook with the load_rules function.
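
For example, for the dataset used in this guide:

from argilla.labeling.text_classification import load_rules

# load the rules defined in the UI into your notebook
rules_defined_in_ui = load_rules(dataset="weak_supervision_yt")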

Define rules using the Python client#

Rules can also be defined programmatically, as shown below. Depending on your use case and team structure, you can mix and match both interfaces (UI or Python). Depending on your workflow, you can decide whether to use the add_rules method to add them to the dataset, or just apply them locally (without adding them to the Argilla dataset).

Let’s see some programmatic rules:

[ ]:
from argilla.labeling.text_classification import Rule, WeakLabels

#  rules defined as Elasticsearch queries
check_out = Rule(query="check out", label="SPAM")
plz = Rule(query="plz OR please", label="SPAM")
subscribe = Rule(query="subscribe", label="SPAM")
my = Rule(query="my", label="SPAM")
song = Rule(query="song", label="HAM")
love = Rule(query="love", label="HAM")

You can also define plain Python labeling functions:

[ ]:
import re

# rules defined as Python labeling functions
def contains_http(record: rg.TextClassificationRecord):
    if "http" in record.inputs["text"]:
        return "SPAM"


def short_comment(record: rg.TextClassificationRecord):
    return "HAM" if len(record.inputs["text"].split()) < 5 else None


def regex_check_out(record: rg.TextClassificationRecord):
    return (
        "SPAM" if re.search(r"check.*out", record.inputs["text"], flags=re.I) else None
    )

You can also load predefined rules from an external file, convert them to Rule instances, and add them to the dataset:

[ ]:
labeling_rules_df = pd.read_csv("../../_static/datasets/weak_supervision_tutorial/labeling_rules.csv")
[23]:
# preview labeling rules
labeling_rules_df.head()
[23]:
Unnamed: 0 query label
0 0 your SPAM
1 1 rich SPAM
2 2 film HAM
3 3 meeting HAM
4 4 help HAM
[ ]:
predefined_labeling_rules = []
for index, row in labeling_rules_df.iterrows():
    predefined_labeling_rules.append(
        Rule(row["query"], row["label"])
    )

3. Building and analyzing weak labels#

[ ]:
from argilla.labeling.text_classification import load_rules, add_rules, delete_rules

# bundle our rules in a list
rules = [
    check_out,
    plz,
    subscribe,
    my,
    song,
    love
]

labeling_functions = [
    contains_http,
    short_comment,
    regex_check_out
]

# add rules to dataset
add_rules(dataset="weak_supervision_yt", rules=rules)


# add the predefined rules loaded from external file
add_rules(dataset="weak_supervision_yt", rules=predefined_labeling_rules)

After the above step, the rules will be accessible in the weak_supervision_yt dataset.

[ ]:
# load all the rules available in the dataset, including those defined interactively in the UI
dataset_labeling_rules = load_rules(dataset="weak_supervision_yt")

# extend the labeling rules with labeling functions
dataset_labeling_rules.extend(labeling_functions)

# apply the final rules to the dataset
weak_labels = WeakLabels(dataset="weak_supervision_yt", rules=dataset_labeling_rules)
[27]:
# show some stats about the rules, see the `summary()` docstring for details
weak_labels.summary()
[27]:
label coverage annotated_coverage overlaps conflicts correct incorrect precision
check out {SPAM} 0.224401 0.176 0.224401 0.031590 44 0 1.000000
plz OR please {SPAM} 0.104575 0.088 0.098039 0.036492 22 0 1.000000
subscribe {SPAM} 0.101852 0.120 0.082244 0.031590 30 0 1.000000
my {SPAM} 0.192810 0.192 0.168845 0.062636 42 6 0.875000
song {HAM} 0.118192 0.172 0.070806 0.037037 34 9 0.790698
love {HAM} 0.090959 0.140 0.071351 0.034858 28 7 0.800000
your {SPAM} 0.052832 0.088 0.041939 0.019608 19 3 0.863636
rich {SPAM} 0.000545 0.000 0.000000 0.000000 0 0 NaN
film {} 0.000000 0.000 0.000000 0.000000 0 0 NaN
meeting {} 0.000000 0.000 0.000000 0.000000 0 0 NaN
help {HAM} 0.027778 0.036 0.023965 0.023965 0 9 0.000000
contains_http {SPAM} 0.106209 0.024 0.078431 0.055556 6 0 1.000000
short_comment {HAM} 0.245098 0.368 0.101307 0.064270 84 8 0.913043
regex_check_out {SPAM} 0.226580 0.180 0.226035 0.032135 45 0 1.000000
total {SPAM, HAM} 0.762527 0.880 0.458061 0.147059 354 42 0.893939

You can remove rules that are not informative from the dataset:

[ ]:
not_informative_rules = [
    Rule("rich", "SPAM"),
    Rule("film", "HAM"),
    Rule("meeting", "HAM")
]
[ ]:
from argilla.labeling.text_classification import delete_rules
delete_rules(dataset="weak_supervision_yt", rules=not_informative_rules)

You can also update a rule. For example, the help rule performed poorly with the HAM label:

help    {HAM}   0.027778    0.036   0.023965    0.023965    0   9   0.000000
[ ]:
help_rule = Rule("help", label="SPAM")
help_rule.update_at_dataset(dataset="weak_supervision_yt")

Let’s load the rules again and apply the weak labeling:

[ ]:
final_rules = labeling_functions + load_rules(dataset="weak_supervision_yt")
[ ]:
weak_labels = WeakLabels(dataset="weak_supervision_yt", rules=final_rules)
[33]:
weak_labels.summary()
[33]:
label coverage annotated_coverage overlaps conflicts correct incorrect precision
contains_http {SPAM} 0.106209 0.024 0.078431 0.049020 6 0 1.000000
short_comment {HAM} 0.245098 0.368 0.101307 0.064270 84 8 0.913043
regex_check_out {SPAM} 0.226580 0.180 0.226035 0.027778 45 0 1.000000
check out {SPAM} 0.224401 0.176 0.224401 0.027778 44 0 1.000000
plz OR please {SPAM} 0.104575 0.088 0.098039 0.023420 22 0 1.000000
subscribe {SPAM} 0.101852 0.120 0.082244 0.025054 30 0 1.000000
my {SPAM} 0.192810 0.192 0.168845 0.050654 42 6 0.875000
song {HAM} 0.118192 0.172 0.070806 0.037037 34 9 0.790698
love {HAM} 0.090959 0.140 0.071351 0.034858 28 7 0.800000
your {SPAM} 0.052832 0.088 0.041939 0.015795 19 3 0.863636
help {SPAM} 0.027778 0.036 0.023965 0.003813 9 0 1.000000
total {SPAM, HAM} 0.761983 0.880 0.458061 0.126906 363 33 0.916667

4. Using the weak labels#

At this step you have at least two options:

  1. Use the weak labels for training a “denoising” or label model to build a less noisy training set. Highly popular options for this are Snorkel or Flyingsquid. After this step, you can train a downstream model with the “clean” labels.

  2. Use the weak labels directly with recent “end-to-end” (e.g., Weasel) or joint models (e.g., COSINE).

Let’s see some examples:

A simple majority vote#

As a first example, we will show you how to use the WeakLabels object together with a simple majority vote model, which is arguably the most straightforward label model. On a per-record basis, it simply counts the votes for each label returned by the rules and takes the majority vote. Argilla provides a neat implementation of this logic in its MajorityVoter class.

[ ]:
from argilla.labeling.text_classification import MajorityVoter

# instantiate the majority vote label model by simply providing the weak labels object
majority_model = MajorityVoter(weak_labels)

In contrast to the other label models we will discuss further down, the majority voter does not need to be fitted. You can directly check its performance by simply calling its score() method.

[35]:
# check its performance
print(majority_model.score(output_str=True))

              precision    recall  f1-score   support

        SPAM       0.99      0.93      0.96       102
         HAM       0.94      0.99      0.96       108

    accuracy                           0.96       210
   macro avg       0.96      0.96      0.96       210
weighted avg       0.96      0.96      0.96       210

An accuracy of 0.96 seems surprisingly high, but keep in mind that we simply excluded from the evaluation the records for which the model abstained (that is, ties in the votes or no votes at all). So let’s account for this and correct the accuracy by assuming the model performs like a random classifier on the abstained records:

\(\text{accuracy}_c = \text{frac}_{non} \times \text{accuracy} + \text{frac}_{abs} \times \text{accuracy}_{random}\)

where \(\text{frac}_{non}\) is the fraction of non-abstained records and \(\text{frac}_{abs}\) the fraction of abstained records.

[ ]:
# calculate fractions using the support metric (see above)
frac_non = 200 / len(weak_labels.annotation())
frac_abs = 1 - (200 / len(weak_labels.annotation()))

# accuracy without abstentions: 0.96; accuracy of random classifier: 0.5
print("accuracy_c:", frac_non * 0.96 + frac_abs * 0.5)
# accuracy_c: 0.868

As we will see further down, an accuracy of 0.868 is still a very decent baseline.

Note

To get a noisy estimate of the corrected accuracy, you can also set the “tie_break_policy” argument: majority_model.score(..., tie_break_policy="random").

When predicting weak labels to train a downstream model, however, you probably want to discard the abstentions. Calling the predict() method on the majority voter excludes the abstentions by default and only returns records without annotations. These are normally used to build a training set for a downstream model.

You can quickly explore the predicted records with Argilla before building a training set for a downstream text classifier. This step is useful for validation, manual revision, or defining score thresholds for accepting labels from your label model (for example, only considering labels with a score greater than 0.8; see the sketch after the training data preview below).

[ ]:
# get your training records with the predictions of the label model
records_for_training = majority_model.predict()

# optional: log the records to a new dataset in Argilla
rg.log(records_for_training, name="majority_voter_results")

# extract training data
training_data = pd.DataFrame(
    [{"text": rec.text, "label": rec.prediction[0][0]} for rec in records_for_training]
)

[38]:
# preview training data
training_data

[38]:
text label
0 http://www.rtbf.be/tv/emission/detail_the-voic... SPAM
1 http://www.ermail.pl/dolacz/V3VeYGIN CLICK ht... SPAM
2 Perfect! &lt;3 HAM
3 Check out Melbourne shuffle, everybody! SPAM
4 Check out my videos guy! :) Hope you guys had ... SPAM
... ... ...
1048 Great song HAM
1049 subscribe HAM
1050 LoL HAM
1051 Love this song HAM
1052 LOVE THE WAY YOU LIE ..&quot; HAM

1053 rows × 2 columns
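
If you want to apply the score threshold mentioned above, a minimal sketch could look like this (it assumes, as in the training data extraction above, that each record’s prediction is a list of (label, score) tuples sorted by descending score):

# keep only predictions the label model is confident about (illustrative threshold of 0.8)
confident_records = [
    rec for rec in records_for_training if rec.prediction[0][1] > 0.8
]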

Label model with Snorkel#

Snorkel’s label model is by far the most popular option for using weak supervision, and Argilla provides built-in support for it. Using Snorkel with Argilla’s WeakLabels is as simple as:

[ ]:
%pip install snorkel -qqq
[ ]:
from argilla.labeling.text_classification import Snorkel

# we pass our WeakLabels instance to our Snorkel label model
snorkel_model = Snorkel(weak_labels)

# we fit the model
snorkel_model.fit(lr=0.001, n_epochs=50)

Note

The Snorkel label model is not suited for multi-label classification tasks and does not support them.

When fitting the Snorkel model, we recommend performing a quick grid search for the learning rate lr and the number of epochs n_epochs, as in the sketch below.
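
A minimal, hypothetical grid search sketch; it assumes that score() with its default arguments returns a dict containing an "accuracy" entry:

# try a few combinations of learning rate and number of epochs
best_accuracy, best_params = 0.0, None
for lr in [0.01, 0.001, 0.0001]:
    for n_epochs in [25, 50, 100]:
        snorkel_model.fit(lr=lr, n_epochs=n_epochs)
        accuracy = snorkel_model.score()["accuracy"]  # assumed dict output
        if accuracy > best_accuracy:
            best_accuracy, best_params = accuracy, {"lr": lr, "n_epochs": n_epochs}

print(best_params)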

[41]:
# we check its performance
print(snorkel_model.score(output_str=True))

              precision    recall  f1-score   support

        SPAM       0.93      0.93      0.93       106
         HAM       0.94      0.94      0.94       114

    accuracy                           0.94       220
   macro avg       0.94      0.94      0.94       220
weighted avg       0.94      0.94      0.94       220

At first sight, the model seems to perform worse than the majority vote baseline. However, let’s again correct the accuracy for the abstentions.

[ ]:
# calculate fractions using the support metric (see above)
frac_non = 209 / len(weak_labels.annotation())
frac_abs = 1 - (209 / len(weak_labels.annotation()))

# accuracy without abstentions: 0.95; accuracy of random classifier: 0.5
print("accuracy_c:", frac_non * 0.95 + frac_abs * 0.5)
# accuracy_c: 0.8761999999999999

Now we can see that with an accuracy of 0.876, its performance over the whole test set is actually slightly better.

After fitting your label model, you can quickly explore its predictions before building a training set for a downstream text classifier. This step is useful for validation, manual revision, or defining score thresholds for accepting labels from your label model (for example, only considering labels with a score greater than 0.8).

[ ]:
# get your training records with the predictions of the label model
records_for_training = snorkel_model.predict()

# optional: log the records to a new dataset in Argilla
rg.log(records_for_training, name="snorkel_results")

# extract training data
training_data = pd.DataFrame(
    [{"text": rec.text, "label": rec.prediction[0][0]} for rec in records_for_training]
)

[44]:
# preview training data
training_data
[44]:
text label
0 http://www.rtbf.be/tv/emission/detail_the-voic... SPAM
1 http://www.ermail.pl/dolacz/V3VeYGIN CLICK ht... SPAM
2 Perfect! &lt;3 HAM
3 Check out Melbourne shuffle, everybody! SPAM
4 Facebook account HACK!! http://hackfbaccountl... HAM
... ... ...
1174 Great song HAM
1175 subscribe HAM
1176 LoL HAM
1177 Love this song HAM
1178 LOVE THE WAY YOU LIE ..&quot; HAM

1179 rows × 2 columns

Note

For an example of how to use the WeakLabels object with Snorkel’s raw LabelModel class, you can check out the WeakLabels reference.
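
As a rough sketch of that approach (assuming the default label2int mapping, in which abstention is encoded as -1, as expected by Snorkel's LabelModel):

from snorkel.labeling.model import LabelModel

# fit Snorkel's raw LabelModel directly on the weak label matrix
label_model = LabelModel(cardinality=2)
label_model.fit(L_train=weak_labels.matrix(has_annotation=False))

# predict integer labels for the records without annotations
predicted_ints = label_model.predict(L=weak_labels.matrix(has_annotation=False))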

Label model with FlyingSquid#

FlyingSquid is a powerful method developed by Hazy Research, a research group from Stanford behind ground-breaking work on programmatic data labeling, including Snorkel. FlyingSquid uses a closed-form solution for fitting the label model with great speed gains and similar performance. Just like for Snorkel, Argilla provides built-in support for FlyingSquid, too.

[ ]:
%pip install flyingsquid pgmpy -qqq
[ ]:
from argilla.labeling.text_classification import FlyingSquid

# we pass our WeakLabels instance to our FlyingSquid label model
flyingsquid_model = FlyingSquid(weak_labels)

# we fit the model
flyingsquid_model.fit()

Note

The FlyingSquid label model is not suited for multi-label classification tasks and does not support them.

[47]:
# we check its performance
print(flyingsquid_model.score(output_str=True))

              precision    recall  f1-score   support

        SPAM       0.92      0.93      0.93       106
         HAM       0.94      0.92      0.93       114

    accuracy                           0.93       220
   macro avg       0.93      0.93      0.93       220
weighted avg       0.93      0.93      0.93       220

Again, let’s correct the accuracy for the abstentions.

[ ]:
# calculate fractions using the support metric (see above)
frac_non = 209 / len(weak_labels.annotation())
frac_abs = 1 - (209 / len(weak_labels.annotation()))

# accuracy without abstentions: 0.93; accuracy of random classifier: 0.5
print("accuracy_c:", frac_non * 0.93 + frac_abs * 0.5)
# accuracy_c: 0.85948

Here, with an accuracy of 0.859, the performance over the whole test set actually seems to be slightly worse than the majority vote baseline.

After fitting your label model, you can quickly explore its predictions before building a training set for a downstream text classifier. This step is useful for validation, manual revision, or defining score thresholds for accepting labels from your label model (for example, only considering labels with a score greater than 0.8).

[ ]:
# get your training records with the predictions of the label model
records_for_training = flyingsquid_model.predict()

# log the records to a new dataset in Argilla
rg.log(records_for_training, name="flyingsquid_results")

# extract training data
training_data = pd.DataFrame(
    [{"text": rec.text, "label": rec.prediction[0][0]} for rec in records_for_training]
)
[50]:
# preview training data
training_data
[50]:
text label
0 http://www.rtbf.be/tv/emission/detail_the-voic... SPAM
1 http://www.ermail.pl/dolacz/V3VeYGIN CLICK ht... SPAM
2 Perfect! &lt;3 HAM
3 Check out Melbourne shuffle, everybody! SPAM
4 Facebook account HACK!! http://hackfbaccountl... SPAM
... ... ...
1174 Great song HAM
1175 subscribe HAM
1176 LoL HAM
1177 Love this song HAM
1178 LOVE THE WAY YOU LIE ..&quot; HAM

1179 rows × 2 columns

Joint Model with Weasel#

Weasel lets you train downstream models end-to-end using weak labels directly. In contrast to Snorkel or FlyingSquid, which are two-stage approaches, Weasel is a one-stage method that trains the label model and the end model jointly. For more details, check out the End-to-End Weak Supervision paper presented at NeurIPS 2021.

In this guide we will show you how to train a Hugging Face transformers model directly with weak labels using Weasel. Since Weasel uses PyTorch Lightning for the training, some basic knowledge of PyTorch is helpful, but not strictly necessary.

Let’s start by installing the Weasel Python package:

[53]:
!python -m pip install git+https://github.com/autonlab/weasel#egg=weasel[all]

The first step is to obtain our weak labels. For this we use the same rules and dataset as in the examples above (Snorkel and FlyingSquid).

[ ]:
# obtain our weak labels
weak_labels = WeakLabels(rules=rules, dataset="weak_supervision_yt")

In a second step we instantiate our end model, which in our case will be a pre-trained transformer from the Hugging Face Hub. Here we choose the small ELECTRA model by Google, which shows excellent performance given its moderate number of parameters. Due to its size, you can fine-tune it on your CPU within a reasonable amount of time.

[ ]:
from weasel.models.downstream_models.transformers import Transformers

# instantiate our transformers end model
end_model = Transformers("google/electra-small-discriminator", num_labels=2)

With our end model at hand, we can now instantiate the Weasel model. Apart from the end model, it also includes a neural encoder that tries to estimate the latent labels.

[ ]:
from weasel.models import Weasel

# instantiate our weasel end-to-end model
weasel = Weasel(
    end_model=end_model,
    num_LFs=len(weak_labels.rules),
    n_classes=2,
    encoder={"hidden_dims": [32, 10]},
    optim_encoder={"name": "adam", "lr": 1e-4},
    optim_end_model={"name": "adam", "lr": 5e-5},
)

Afterwards, we wrap our data in the TransformersDataModule, so that Weasel and PyTorch Lightning can work with it. In this step we also tokenize the data. Here we need to be careful to use the tokenizer that corresponds to our end model.

[ ]:
from transformers import AutoTokenizer
from weasel.datamodules.transformers_datamodule import (
    TransformersDataModule,
    TransformersCollator,
)

# tokenizer for our transformers end model
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")

# tokenize train and test data
X_train = [
    tokenizer(rec.text, truncation=True)
    for rec in weak_labels.records(has_annotation=False)
]
X_test = [
    tokenizer(rec.text, truncation=True)
    for rec in weak_labels.records(has_annotation=True)
]

# instantiate data module
datamodule = TransformersDataModule(
    label_matrix=weak_labels.matrix(has_annotation=False),
    X_train=X_train,
    collator=TransformersCollator(tokenizer),
    X_test=X_test,
    Y_test=weak_labels.annotation(),
    batch_size=8,
)

Now we have everything ready to start the training of our Weasel model. For the training process, Weasel relies on the excellent PyTorch Lightning Trainer. It provides tons of options and features to optimize the training process, but the defaults below should give you reasonable results. Keep in mind that you are fine-tuning a full-blown transformer model, albeit a small one.

[ ]:
import pytorch_lightning as pl

# instantiate the pytorch-lightning trainer
trainer = pl.Trainer(
    gpus=0,  # >= 1 to use GPU(s)
    max_epochs=2,
    logger=None,
    callbacks=[pl.callbacks.ModelCheckpoint(monitor="Val/accuracy", mode="max")],
)

# fit the model end-to-end
trainer.fit(
    model=weasel,
    datamodule=datamodule,
)

After the training we can call the Trainer.test method to check the final performance. The model should achieve a test accuracy of around 0.94.

[ ]:
trainer.test()
# {'accuracy': 0.94, ...}

To use the model for inference, you can either use its predict method:

[ ]:
# Example text for the inference
text = "In my head this is like 2 years ago.. Time FLIES"

# Get predictions for the example text
predicted_probs, predicted_label = weasel.predict(tokenizer(text, return_tensors="pt"))

# Map predicted int to label
weak_labels.int2label[int(predicted_label)]  # HAM

Or you can instantiate one of the popular transformers pipelines, providing the end model and the tokenizer directly:

[ ]:
from transformers import pipeline

# modify the id2label mapping of the model
weasel.end_model.model.config.id2label = weak_labels.int2label

# create transformers pipeline
classifier = pipeline(
    "text-classification", model=weasel.end_model.model, tokenizer=tokenizer
)

# use pipeline for predictions
classifier(text)  # [{'label': 'HAM', 'score': 0.6110987663269043}]