Train language models efficiently on your CPU with our optimized training pipeline
Training metrics will appear here
117M parameters; a good default for most tasks
66M parameters; a distilled version of BERT
28M parameters; a lightweight option