This model was trained on a large reasoning dataset derived from Pony Alpha (an early checkpoint of GLM-5).
## 🧬 Datasets
- TeichAI/Pony-Alpha-15k
## 🏗 Base Model
- LiquidAI/LFM2.5-1.2B-Thinking
## ⚡ Use Cases
- Coding
- Science
- Deep research
## ∑ Dataset Stats
- Cost: $0 USD
- Total tokens (input + output): 43.3M
## Sampling Parameters
Liquid AI recommends the following sampling parameters:
| Setting | Value |
|---|---|
| temperature | 0.05 |
| top_k | 50 |
| repeat_penalty | 1.05 |
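As an illustration, these settings map onto llama.cpp's `llama-cli` flags as follows. This is a sketch, not an official invocation: the model path and prompt are placeholders, and you should substitute whichever quant you downloaded.

```shell
# Run the q4_K_S quant with the recommended sampling parameters.
# Model path and prompt are illustrative placeholders.
llama-cli \
  -m LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q4_K_S.gguf \
  --temp 0.05 \
  --top-k 50 \
  --repeat-penalty 1.05 \
  -p "Explain the photoelectric effect in two sentences."
```

The same three parameters (`temperature`, `top_k`, `repeat_penalty`) are also accepted by most other GGUF runtimes, so the table above should transfer directly.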
## Included files
| File | Quantization |
|---|---|
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-f16.gguf | f16 |
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q4_0.gguf | q4_0 |
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q4_K_S.gguf | q4_K_S |
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q5_K_S.gguf | q5_K_S |
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q6_K.gguf | q6_K |
| LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf-q8_0.gguf | q8_0 |
## Notes
- Filenames follow the pattern `base-gguf-<quant>.gguf`.
- Quantizations included: `f16`, `q4_0`, `q4_K_S`, `q5_K_S`, `q6_K`, `q8_0`.
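The naming convention can be expressed as a short sketch that reconstructs every filename in the table above from the quantization list (purely string manipulation, no download involved):

```python
# Generate the expected GGUF filenames following the base-gguf-<quant>.gguf pattern.
BASE = "LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-gguf"
QUANTS = ["f16", "q4_0", "q4_K_S", "q5_K_S", "q6_K", "q8_0"]

filenames = [f"{BASE}-{q}.gguf" for q in QUANTS]
for name in filenames:
    print(name)
```

This can be handy when scripting downloads of a specific quant, since the quant suffix is the only part of the name that varies.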
## Model tree for TeichAI/LFM2.5-1.2B-Thinking-Pony-Alpha-Distill-GGUF
- Base model: LiquidAI/LFM2.5-1.2B-Base
- Finetuned: LiquidAI/LFM2.5-1.2B-Thinking