Amazing! You have gone through the entire session and learnt how to use the various features of the Hugging Face Transformers library.
Let’s take a quick look at all the learnings:
- You have learnt how pipeline() works and what goes on behind the scenes.
- You have learnt how a tokenization pipeline works and transforms raw input into numerical representations.
- You have learnt how to download the tokenizer that matches the desired model.
- You have also understood the different components of the tokenized output and their utility.
- You have set up a tokenizer and a model together to go from text to predictions.
- You have fine-tuned a BERT model for a custom use-case of Quora question-pair similarity.
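The first learning above can be sketched in a few lines. This is a minimal, illustrative example of the pipeline() API; the "sentiment-analysis" task shown here is just one of the tasks covered in the library, and the exact checkpoint downloaded by default may vary across library versions.

```python
from transformers import pipeline

# pipeline() bundles the tokenizer, model, and post-processing
# for a task into a single callable object.
classifier = pipeline("sentiment-analysis")

result = classifier("I really enjoyed this session!")
print(result)  # a list with one dict containing a 'label' and a 'score'
```

Behind the scenes, this single call runs the tokenization, model forward pass, and post-processing steps that the rest of the list breaks apart.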
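The components of the tokenized output mentioned above can be inspected directly. A small sketch, assuming the "bert-base-uncased" checkpoint purely for illustration, with a sentence pair as in the question-pair use-case:

```python
from transformers import AutoTokenizer

# AutoTokenizer downloads the tokenizer that matches the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Passing two texts encodes them as a sentence pair.
encoded = tokenizer("How do I learn NLP?", "What is the best way to study NLP?")

# input_ids:      token ids, including special tokens such as [CLS] and [SEP]
# token_type_ids: 0 for tokens of the first sentence, 1 for the second
# attention_mask: 1 for real tokens, 0 for padding
print(encoded.keys())
```

The token_type_ids field is what lets a BERT-style model distinguish the two questions in a pair.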
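The tokenizer-plus-model step can also be sketched end to end. This example uses the "distilbert-base-uncased-finetuned-sst-2-english" checkpoint purely as an illustration; a fine-tuned question-pair model would be used the same way:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Tokenize raw text into tensors the model understands.
inputs = tokenizer(["This course was great!"], padding=True,
                   truncation=True, return_tensors="pt")

# Forward pass without gradients: we only want predictions.
with torch.no_grad():
    logits = model(**inputs).logits

# Convert raw logits into probabilities and a predicted label.
probs = torch.softmax(logits, dim=-1)
predicted = probs.argmax(dim=-1)
print(model.config.id2label[predicted.item()])
```

This is exactly the text-to-predictions chain that pipeline() performs internally.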
It is highly recommended that you go through the official Transformers documentation to explore its other features and experiment with them.
In the next segment, apply all the learnings of this session to solve the graded questions. All the best!