This is a small app I built using HuggingFace Transformers and FastAPI to perform text classification with the pre-trained DistilRoBERTa model (`j-hartmann/emotion-english-distilroberta-base`). I mostly relied on the excellent tutorial by Venelin to build this (ref 1). I made a few key changes to his approach:
- Used the pre-trained model as-is instead of fine-tuning
- Used a `requirements.txt` with pip instead of using pipenv
- Did not use most of the extra code-style packages
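The pre-trained route boils down to loading the checkpoint through the `pipeline` helper. A minimal sketch (the model name is the one used in the example call below; `make_classifier` and `to_probabilities` are illustrative names, not part of this repo):

```python
def make_classifier():
    # Imported here so the module loads fast; the model is fetched on first call
    from transformers import pipeline

    # Pre-trained checkpoint used as-is; no fine-tuning step
    return pipeline(
        "text-classification",
        model="j-hartmann/emotion-english-distilroberta-base",
        top_k=None,  # return scores for every emotion label
    )

def to_probabilities(scores):
    # Flatten [{'label': 'joy', 'score': 0.27}, ...] into {'joy': 0.27, ...}
    return {item["label"]: item["score"] for item in scores}

if __name__ == "__main__":
    clf = make_classifier()
    print(to_probabilities(clf("This works quite well!")[0]))
```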
How to use?
- Install torch for your hardware
- `pip install -r requirements.txt`
- Start the server: `uvicorn DistilRoBERTa.api:app` (or `bash bin/run_server`)
Then make your API call:
```shell
http POST http://127.0.0.1:8000/classify text="Pre-trained j-hartmann/emotion-english-distilroberta-base seems to work quite well!"
```
You'll get an output like:
```json
{
  "probabilities": {
    "anger": 0.007748342119157314,
    "disgust": 0.0022821975871920586,
    "fear": 0.0021107119973748922,
    "joy": 0.27118009328842163,
    "neutral": 0.6292678713798523,
    "sadness": 0.005368099547922611,
    "surprise": 0.08204267174005508
  }
}
```
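If you'd rather call the endpoint from Python than httpie, a stdlib-only sketch (URL and response shape taken from the example above; `classify` and `top_emotion` are illustrative names):

```python
import json
import urllib.request

def classify(text, url="http://127.0.0.1:8000/classify"):
    # POST {"text": ...} and return the "probabilities" dict from the response
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["probabilities"]

def top_emotion(probabilities):
    # Label with the highest score, e.g. "neutral" for the output above
    return max(probabilities, key=probabilities.get)
```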
TODO:
- docker
- ??