Step-3.5-Flash REAP 128B-A11B — MLX Mixed 4/6-bit
MLX mixed-precision quantized version of lkevincc0/Step-3.5-Flash-REAP-128B-A11B for efficient local inference on Apple Silicon.
- Quantization: Mixed 4/6-bit — v_proj and down_proj at 6-bit, all other weights at 4-bit (group size 64, affine mode)
- Architecture: Step-3.5 SMoE — 45 layers, 173 routed experts (REAP-pruned), 8 active per token, shared expert
- Parameters: 128B total, 11B active per token
- Context: 262K tokens
- Size: ~68 GB
- Pruning: ~40% of routed experts removed via REAP (Router-weighted Expert Activation Pruning)
Usage
from mlx_lm import load, generate

# Load the quantized weights and tokenizer, then run a quick generation test
model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
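For chat-style prompting, the tokenizer's chat template can be applied before generation. A minimal sketch reusing the model and tokenizer loaded above, assuming the repo ships a chat template and a recent mlx_lm release:

# Build a single-turn conversation and render it with the chat template
messages = [{"role": "user", "content": "Summarize mixture-of-experts routing in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)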
Alternatively, run the model with vMLX for native macOS inference.
About
Step-3.5-Flash is a large Mixture-of-Experts language model by StepFun AI. This variant was pruned by lkevincc0 using REAP (Router-weighted Expert Activation Pruning), reducing the routed expert count to 173 while maintaining strong performance. The mixed-precision MLX quantization preserves higher fidelity on the most quantization-sensitive attention and feed-forward projections by keeping v_proj and down_proj at 6-bit while all remaining weights use 4-bit.
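For reference, a mixed 4/6-bit conversion of this kind can be reproduced with mlx_lm's converter by supplying a per-layer quantization predicate. This is a minimal sketch rather than the exact recipe used for this repo, and it assumes an mlx_lm version whose convert() accepts a quant_predicate callback:

from mlx_lm import convert

# Assign 6-bit to v_proj and down_proj weights, 4-bit to everything else,
# all with group size 64 (MLX's default affine quantization)
def mixed_4_6(path, module, config):
    if "v_proj" in path or "down_proj" in path:
        return {"group_size": 64, "bits": 6}
    return {"group_size": 64, "bits": 4}

convert(
    "lkevincc0/Step-3.5-Flash-REAP-128B-A11B",
    mlx_path="Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6",
    quantize=True,
    q_group_size=64,
    q_bits=4,
    quant_predicate=mixed_4_6,
)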
Made for vMLX
This model was converted and optimized for vMLX — a free, open source macOS native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.
Credits
- Base model: stepfun-ai/Step-3.5-Flash by StepFun AI
- REAP pruning: lkevincc0/Step-3.5-Flash-REAP-128B-A11B by lkevincc0
- MLX conversion: vMLX — Run AI locally on Mac. No compromises.
Contact
For questions, issues, or collaboration: admin@vmlx.net