survival-expert-qwen-32b-gguf

This is a GGUF conversion of sunkencity/survival-expert-qwen-32b, which is a LoRA fine-tuned version of Qwen/Qwen2.5-32B-Instruct.

Model Details

  • Base Model: Qwen/Qwen2.5-32B-Instruct
  • Fine-tuned Model: sunkencity/survival-expert-qwen-32b
  • Training: Supervised Fine-Tuning (SFT) with TRL
  • Format: GGUF (for llama.cpp, Ollama, LM Studio, etc.)

Available Quantizations

File                                    Quant   Size   Description      Use Case
survival-expert-qwen-32b-f16.gguf       F16     ~66GB  Full precision   Best quality, slowest
survival-expert-qwen-32b-q8_0.gguf      Q8_0    ~35GB  8-bit            High quality
survival-expert-qwen-32b-q5_k_m.gguf    Q5_K_M  ~23GB  5-bit medium     Good quality, smaller
survival-expert-qwen-32b-q4_k_m.gguf    Q4_K_M  ~20GB  4-bit medium     Recommended - good balance

Usage

With llama.cpp

# Download model
huggingface-cli download sunkencity/survival-expert-qwen-32b-gguf survival-expert-qwen-32b-q4_k_m.gguf --local-dir .

# Run with llama.cpp
./llama-cli -m survival-expert-qwen-32b-q4_k_m.gguf -p "Your prompt here"
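llama.cpp can also serve the model behind an OpenAI-compatible HTTP API via llama-server. A minimal sketch follows; the context size (-c), GPU-layer count (-ngl), port, and prompt are illustrative values to adjust for your hardware:

# Serve the model over an OpenAI-compatible API
./llama-server -m survival-expert-qwen-32b-q4_k_m.gguf -c 4096 -ngl 99 --port 8080

# Query the running server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "How do I purify water in the wild?"}]}'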

With Ollama

  1. Create a Modelfile:
FROM ./survival-expert-qwen-32b-q4_k_m.gguf
  2. Create and run the model:
ollama create my-model -f Modelfile
ollama run my-model
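The Modelfile above is the bare minimum. Ollama also accepts a system prompt and sampling parameters in the Modelfile; the values below are illustrative, not part of the original model card:

# Optional richer Modelfile (system prompt and temperature are example values)
FROM ./survival-expert-qwen-32b-q4_k_m.gguf
SYSTEM "You are a survival expert. Give practical, safety-conscious advice."
PARAMETER temperature 0.7

Once created, the model is also reachable over Ollama's local REST API (port 11434 by default); the prompt here is just an example:

# Send a single non-streaming request to the Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "my-model",
  "prompt": "How do I build an emergency shelter?",
  "stream": false
}'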

With LM Studio

  1. Download the .gguf file
  2. Import into LM Studio
  3. Start chatting!
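LM Studio can also expose the model through its built-in local server, which speaks an OpenAI-compatible API (port 1234 by default). A sketch, assuming the server is enabled and the model identifier matches what LM Studio's server tab reports:

# Query LM Studio's local server
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "survival-expert-qwen-32b", "messages": [{"role": "user", "content": "What should go in a basic survival kit?"}]}'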

License

Inherits the license from the base model: Qwen/Qwen2.5-32B-Instruct

Citation

@misc{survival_expert_qwen_32b_gguf,
  author = {sunkencity},
  title = {survival-expert-qwen-32b-gguf},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/sunkencity/survival-expert-qwen-32b-gguf}
}

Converted to GGUF format using llama.cpp
