ntedgi / albert-fine-tuning-squad-2.0

Fine-tuning a Transformer ALBERT model for Question Answering on SQuAD 2.0


Fine-tunes the ALBERT model for question answering on SQuAD 2.0.


How Do I Run This Project Locally?

  • Clone this repository
  • Just want to play?

    • Run: bash install.sh
    • Run: bash run.sh
  • Request:

    • End-point: GET http://localhost:8000/predict
    • Params: { question: String, paragraphs: Array }
  • Response:

    • Type: JSON
    • Result: { result: [ [paragraphIndex: Int, answer: String], ... ] }
  • example:

curl -X GET \
  http://localhost:8000/predict \
  -H 'Content-Type: text/plain' \
  -d '{"question": "Who was Jim Henson?", "paragraphs": [
        {"id": 1, "text": "has a nice car. Jim Henson was a nice puppet."},
        {"id": 2, "text": "All the 2023(GoF) design patterns implemented in Javascript. Jim Henson was a monkey king"}
      ]}'

{ "result": [ [ 1, "a nice puppet" ], [ 2, "a monkey king" ] ] }
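The same request body and response can be handled from Python. This is a minimal sketch: the helper names (`build_payload`, `answers_by_paragraph`) are illustrative only, not part of this project, and it assumes the response shape shown in the example above.

```python
import json

def build_payload(question, paragraphs):
    # Build the JSON body expected by /predict (hypothetical helper;
    # assumes paragraph ids are assigned sequentially from 1)
    return json.dumps({
        "question": question,
        "paragraphs": [{"id": i + 1, "text": t} for i, t in enumerate(paragraphs)],
    })

def answers_by_paragraph(response_json):
    # The service returns pairs of [paragraphIndex, answer];
    # map each paragraph id to its extracted answer
    return {idx: answer for idx, answer in response_json["result"]}

payload = build_payload(
    "Who was Jim Henson?",
    ["has a nice car. Jim Henson was a nice puppet."],
)
# Sample response shape, copied from the example response above
sample = {"result": [[1, "a nice puppet"], [2, "a monkey king"]]}
print(answers_by_paragraph(sample)[1])
```

Sending the payload itself can then be done with any HTTP client (e.g. `urllib.request` or `requests`) against the running service.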

About

Fine-tuning a Transformer ALBERT model for Question Answering on SQuAD 2.0

License: MIT License


Languages

  • Python 94.4%
  • Shell 5.6%