Notebooks
The notebook reference guide for Argilla tutorials.
This section contains:
- Backup and version Argilla Datasets using `DVC`
- Run Argilla with a Transformer in an active learning loop and a free GPU in your browser
- Initial setup on Google Colab
- Install Elastic Search
- Start the Argilla localhost in a terminal
- Create a public link to Argilla localhost with ngrok
- Log data to Argilla and start your active learning loop with small-text
- Start annotating in the browser via the ngrok link
- Extract annotated data for downstream use
- Summary
- Monitor FastAPI model endpoints
- Add bias-equality features to datasets with `disaggregators`
- Build and evaluate a zero-shot sentiment classifier with GPT-3
- Label data with semantic search and Sentence Transformers
- Bulk Labeling Multimodal Data
- Augment weak supervision rules with Sentence Transformers
- Introduction
- Running Argilla
- Setup
- Detailed Workflow
- The dataset
- 1. Create an Argilla dataset with unlabelled data and test data
- 2. Defining rules
- 3. Building and analyzing weak labels
- 4. Using the weak labels
- 5. Extending the weak labels
- 6. Training a downstream model
- Summary
- Appendix: Visualize changes
- Appendix: Optimizing the thresholds
- Appendix: Plot extension
- Zero-shot and few-shot classification with SetFit
- Multi-label text classification with weak supervision
- Train a text classifier with weak supervision
- Assign records to your annotation team
- Delete labels from a Token or Text Classification dataset
- Evaluate a zero-shot NER with Flair
- Train a NER model with `skweak`
- Explore and analyze `spaCy` NER predictions
- Find label errors with cleanlab
- Compare Text Classification Models
- Analyze predictions with explainability methods
- Clean labels using your model's loss
- Fine-tuning a NER model with BERT for Beginners
- Introduction
- Running Argilla
- Setup
- Exploring our dataset
- Preprocessing the data
- Fine-tuning the model
- Summary
- Text classification active learning with `classy-classification`
- Text Classification active learning with ModAL
- Few-shot classification with SetFit
- Train a sentiment classifier with SetFit
- Text Classification active learning with small-text
- Fine-tune a sentiment classifier with your own data
- Introduction
- Running Argilla
- Preliminaries
- 1. Run the pre-trained model over the dataset and log the predictions
- 2. Explore and label data with the pre-trained model
- 3. Fine-tune the pre-trained model
- 4. Testing the fine-tuned model
- 5. Run our fine-tuned model over the dataset and log the predictions
- 6. Explore and label data with the fine-tuned model
- 7. Fine-tuning with the extended training dataset
- Summary
- Train a summarization model with Unstructured and Transformers