Step-3.5-Flash REAP 128B-A11B — MLX Mixed 4/6-bit

MLX mixed-precision quantized version of lkevincc0/Step-3.5-Flash-REAP-128B-A11B for efficient local inference on Apple Silicon.

  • Quantization: Mixed 4/6-bit — v_proj and down_proj at 6-bit, all other weights at 4-bit (group size 64, affine mode; see the conversion sketch after this list)
  • Architecture: Step-3.5 SMoE — 45 layers, 173 routed experts (REAP-pruned), 8 active per token, shared expert
  • Parameters: 128B total, 11B active per token
  • Context: 262K tokens
  • Size: ~68 GB
  • Pruning: ~40% of experts removed via REAP (Router Expert Activation Pruning)
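
The mixed layout above can be reproduced with mlx_lm's converter. This is a minimal sketch assuming a recent mlx_lm whose convert() accepts a quant_predicate callback (the hook's name and signature may differ between versions); the output path is illustrative.

from mlx_lm import convert

# Send v_proj and down_proj to 6-bit, all other quantizable weights to 4-bit,
# both with group size 64. Non-quantizable modules (e.g. norms) are skipped.
def mixed_4_6(path, module, config):
    if not hasattr(module, "to_quantized"):
        return False
    if path.endswith("v_proj") or path.endswith("down_proj"):
        return {"bits": 6, "group_size": 64}
    return {"bits": 4, "group_size": 64}

convert(
    hf_path="lkevincc0/Step-3.5-Flash-REAP-128B-A11B",        # upstream REAP-pruned weights
    mlx_path="Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6",   # illustrative output directory
    quantize=True,
    q_group_size=64,
    q_bits=4,
    quant_predicate=mixed_4_6,
)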

Usage

from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
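
If the tokenizer ships a chat template (typical for instruct-style releases), formatting the request through it usually works better than a raw prompt; a minimal sketch, assuming the template is present:

from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")

# Build the prompt from a chat turn; apply_chat_template returns token IDs,
# which generate() accepts directly.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)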

Alternatively, run the model with vMLX for native macOS inference.

About

Step-3.5-Flash is a large Mixture-of-Experts language model by StepFun AI. This variant was pruned by lkevincc0 using REAP (Router Expert Activation Pruning), reducing the routed expert count by roughly 40% to 173 while maintaining strong performance. The mixed-precision MLX quantization preserves higher fidelity on critical attention and feed-forward projections by keeping v_proj and down_proj at 6-bit.
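
REAP's exact saliency criterion is defined in the upstream pruning work; purely as an illustration of the general idea (dropping the routed experts that receive the least router activation mass on a calibration set), here is a toy sketch with invented shapes and data, not the procedure used to build this checkpoint.

import numpy as np

def prune_experts_by_router_mass(gate_weights, keep_fraction=0.6):
    """Toy expert pruning: keep the experts that receive the most router mass.

    gate_weights: (num_tokens, num_experts) router gate weights collected on a
    calibration set, zero where an expert was not selected for a token.
    Returns the sorted indices of experts to keep.
    """
    saliency = gate_weights.sum(axis=0)                        # total router mass per expert
    num_keep = int(round(keep_fraction * gate_weights.shape[1]))
    return np.sort(np.argsort(saliency)[::-1][:num_keep])      # highest-saliency experts

# Illustrative run on random stand-in statistics (16 experts, keep ~60%).
rng = np.random.default_rng(0)
fake_gates = rng.random((4096, 16)) * (rng.random((4096, 16)) > 0.75)
print(prune_experts_by_router_mass(fake_gates))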

Made for vMLX

This model was converted and optimized for vMLX — a free, open source macOS native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.

Credits

  • Base model: Step-3.5-Flash by StepFun AI
  • REAP expert pruning: lkevincc0 (lkevincc0/Step-3.5-Flash-REAP-128B-A11B)
  • MLX mixed 4/6-bit conversion: shieldstackllc, for vMLX

Contact

For questions, issues, or collaboration: admin@vmlx.net
