This repository shows how to use PyTorch and Hugging Face Transformers to predict class labels for input text. A pre-trained language model's tokenizer converts the text into numerical representations as PyTorch tensors, which the model then classifies.
Model: RoBERTa for sequence classification, a pre-trained model from Hugging Face's Transformers library.
Dataset: AG News from Hugging Face: https://huggingface.co/datasets/ag_news
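The pipeline described above can be sketched as follows. This is a minimal example, not the repository's exact code: the checkpoint name "roberta-base" and the sample sentence are assumptions, and the four labels come from the AG News dataset (World, Sports, Business, Sci/Tech).

```python
# Minimal sketch: tokenize text and classify it with RoBERTa.
# Assumes `torch` and `transformers` are installed; "roberta-base" is an
# assumed checkpoint, since the README does not name a specific one.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# AG News has 4 classes: World, Sports, Business, Sci/Tech.
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4
)
model.eval()

text = "Stocks rallied after the central bank held rates steady."
# Tokenize into PyTorch tensors (input_ids and attention_mask).
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 4)

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```

Note that a freshly loaded classification head is randomly initialized, so the prediction above is only meaningful after fine-tuning the model on AG News.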