- MediaSum dataset from Hugging Face (`ccdv/mediasum`).
- BART model from Hugging Face (`model_name = "sshleifer/distilbart-xsum-12-3"`).
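A minimal sketch of loading the checkpoint named above with the `transformers` library; the variable names are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint from the notes: a distilled BART (12 encoder / 3 decoder
# layers) originally fine-tuned on XSum.
model_name = "sshleifer/distilbart-xsum-12-3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```

The MediaSum dataset itself is fetched separately with `load_dataset("ccdv/mediasum")` from the `datasets` library (which, depending on the installed version, may require `trust_remote_code=True`).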
- Tokenization process:
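A sketch of what the tokenization step could look like; the column names `document` and `summary`, the truncation lengths, and the example text are assumptions for illustration, not values from the notes:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-3")

def preprocess(batch):
    # Truncation lengths are illustrative assumptions, not the
    # settings actually used in the experiment.
    model_inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Tiny placeholder batch in the interview-transcript style of MediaSum.
batch = preprocess({
    "document": ["HOST: Welcome back. GUEST: Thanks for having me."],
    "summary": ["A short interview segment."],
})
```

With `datasets`, the same function would normally be applied across splits via `dataset.map(preprocess, batched=True)`.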
- Results before fine-tuning
- Results after fine-tuning
- Dataset size:
  - Train = 5,000 data points
  - Validation = 22 data points
  - Test = 22 data points
- Training parameters:
  - Batch size = 4
  - Number of epochs = 1
  - Weight decay = 0.1
  - `label_smoothing_factor` = 0.1
- A snapshot from the training period.