# survival-expert-qwen-32b-gguf
This is a GGUF conversion of sunkencity/survival-expert-qwen-32b, which is a LoRA fine-tuned version of Qwen/Qwen2.5-32B-Instruct.
## Model Details
- Base Model: Qwen/Qwen2.5-32B-Instruct
- Fine-tuned Model: sunkencity/survival-expert-qwen-32b
- Training: Supervised Fine-Tuning (SFT) with TRL
- Format: GGUF (for llama.cpp, Ollama, LM Studio, etc.)
## Available Quantizations
| File | Quant | Size | Description | Use Case |
|---|---|---|---|---|
| survival-expert-qwen-32b-f16.gguf | F16 | ~65 GB | Full precision | Best quality, slower |
| survival-expert-qwen-32b-q8_0.gguf | Q8_0 | ~35 GB | 8-bit | High quality |
| survival-expert-qwen-32b-q5_k_m.gguf | Q5_K_M | ~23 GB | 5-bit medium | Good quality, smaller |
| survival-expert-qwen-32b-q4_k_m.gguf | Q4_K_M | ~20 GB | 4-bit medium | Recommended - good balance |
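If you prefer to fetch a specific quant programmatically rather than with the CLI, a minimal sketch using the `huggingface_hub` Python package (repo and filenames as listed above; cache handling is left to the library):

```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant into the local Hugging Face cache
# and get back the path to the file on disk.
model_path = hf_hub_download(
    repo_id="sunkencity/survival-expert-qwen-32b-gguf",
    filename="survival-expert-qwen-32b-q4_k_m.gguf",
)
print(model_path)
```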
## Usage
### With llama.cpp
```bash
# Download model
huggingface-cli download sunkencity/survival-expert-qwen-32b-gguf survival-expert-qwen-32b-q4_k_m.gguf

# Run with llama.cpp
./llama-cli -m survival-expert-qwen-32b-q4_k_m.gguf -p "Your prompt here"
```
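The same GGUF file can also be called from Python via the `llama-cpp-python` bindings; a minimal sketch, where the model path and context size are illustrative assumptions:

```python
from llama_cpp import Llama

# Load the Q4_K_M quant; n_ctx is an assumed context window, adjust to your RAM/VRAM.
llm = Llama(model_path="survival-expert-qwen-32b-q4_k_m.gguf", n_ctx=4096)

# Chat-style completion using the chat template embedded in the GGUF.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I purify water in the wilderness?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```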
### With Ollama
- Create a `Modelfile`:

```
FROM ./survival-expert-qwen-32b-q4_k_m.gguf
```

- Create and run the model:

```bash
ollama create my-model -f Modelfile
ollama run my-model
```
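Once the model has been created, it can also be queried through Ollama's local HTTP API (default port 11434). A minimal sketch using `requests`, assuming the model was registered under the name `my-model` as above:

```python
import requests

# Non-streaming chat request against the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "my-model",
        "messages": [{"role": "user", "content": "What should go in a basic survival kit?"}],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```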
### With LM Studio
- Download the `.gguf` file
- Import it into LM Studio
- Start chatting!
## License
Inherits the license from the base model: Qwen/Qwen2.5-32B-Instruct
## Citation
```bibtex
@misc{survival_expert_qwen_32b_gguf,
  author    = {sunkencity},
  title     = {survival-expert-qwen-32b-gguf},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/sunkencity/survival-expert-qwen-32b-gguf}
}
```
Converted to GGUF format using llama.cpp.