Groq

LLM (Large Language Model)
FREEMIUM 🔥 TRENDING
Groq provides ultra-fast AI inference on its custom LPU (Language Processing Unit) chips, serving responses at 300+ tokens per second, the fastest publicly available AI inference.
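Groq serves its models through an OpenAI-compatible chat completions API. The sketch below builds a request payload for that API using only the standard library; the endpoint path and the model name (`llama-3.1-8b-instant`) are assumptions taken from Groq's public docs and may change, so verify them before use.

```python
import json

# Assumed OpenAI-compatible endpoint; confirm against Groq's current docs.
GROQ_ENDPOINT = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3.1-8b-instant") -> dict:
    """Build the JSON body for a chat completion request.

    The model name is a hypothetical example; Groq's model list evolves.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Why does low-latency inference matter?")
# Serialize the body as it would be sent (with an Authorization: Bearer
# header carrying your GROQ_API_KEY, omitted here).
print(json.dumps(payload, indent=2))
```

Because the API follows the OpenAI schema, existing OpenAI client libraries can usually be pointed at Groq by swapping the base URL and API key.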

User Reviews

No reviews yet.


TOOL INFO

Category
LLM
Model Type
LLM (Large Language Model)
Pricing
FREEMIUM
Added
Mar 23, 2026