# Whisper Tiny LLM Lingo
Fine-tuned Whisper model based on openai/whisper-tiny.
## Training Results
| Metric | Base Model | Fine-tuned |
|---|---|---|
| WER | 15.38% | 15.98% |
Note: WER increased by 0.59 percentage points compared to the base model.
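WER can be computed with the `evaluate` library; the snippet below is a minimal sketch of the metric itself, and the transcripts shown are placeholders rather than outputs from this model's evaluation set.

```python
import evaluate  # pip install evaluate jiwer

# Word Error Rate: (substitutions + insertions + deletions) / reference words
wer_metric = evaluate.load("wer")

# Placeholder transcripts; a real evaluation would use the model's outputs
# and the reference transcripts from the held-out split.
predictions = ["the model transcribed this sentence"]
references = ["the model transcribed this sentence correctly"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")  # compute() returns a fraction, so scale to percent
```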
## Training Details
- Base Model: openai/whisper-tiny
- Training Dataset: Trelis/run-a-kokoro-text-to-speech-server
- Train Loss: 0.4569
- Training Time: 26 seconds
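For reference, a fine-tune of this kind can be reproduced with the standard Transformers `Seq2SeqTrainer` recipe for Whisper. The sketch below is an assumption about the setup, not the exact Trelis Studio configuration: the dataset column names (`"audio"`, `"text"`), the `train` split, and the hyperparameters are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

# Base model from the card; dataset column names are assumptions.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

dataset = load_dataset("Trelis/run-a-kokoro-text-to-speech-server", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz

def prepare(batch):
    # Log-mel input features from the waveform, token ids from the transcript.
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["text"]).input_ids
    return batch

dataset = dataset.map(prepare, remove_columns=dataset.column_names)

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: Any

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        # Pad audio features and label ids separately; mask padded labels with -100
        # so they are ignored by the loss.
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        batch["labels"] = labels
        return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-llm-lingo",
    per_device_train_batch_size=8,   # illustrative hyperparameters
    learning_rate=1e-5,
    num_train_epochs=1,
    fp16=torch.cuda.is_available(),
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorSpeechSeq2SeqWithPadding(processor),
)

trainer.train()
processor.save_pretrained(training_args.output_dir)
```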
## Training Plot
## Inference
```python
from transformers import pipeline

# Load the fine-tuned model into an automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="Trelis/whisper-tiny-llm-lingo")

# Transcribe a local audio file
result = asr("path/to/audio.wav")
print(result["text"])
```
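For recordings longer than 30 seconds, the Transformers ASR pipeline can chunk the input and optionally return timestamps; these are generic pipeline options rather than anything specific to this checkpoint:

```python
# Chunk long audio into 30-second windows and return per-segment timestamps
result = asr("path/to/audio.wav", chunk_length_s=30, return_timestamps=True)
print(result["chunks"])
```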
## Training Logs
Full training logs are available in `training_log.txt`.
Fine-tuned using Trelis Studio