Qwen3-4B-Instruct-System-Prompter

This model is a fine-tuned version of unsloth/Qwen3-4B-Instruct-2507, specialized in generating detailed and creative system prompts from short user descriptions.

It was fine-tuned using Unsloth and exported to GGUF format for efficient local inference.

Model Description

  • Model Type: Qwen3 (4B, instruction-tuned causal language model)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from model: unsloth/Qwen3-4B-Instruct-2507

Use Cases

This model is designed to act as a meta-prompter. You give it a high-level persona or task description, and it generates a comprehensive system prompt that you can use to configure another LLM.
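A minimal sketch of that flow, assuming nothing beyond this card: the generate callable and helper names below are placeholders for whatever inference path you use (llama.cpp, llama-cpp-python, an API), not code shipped with this repo.

```python
# Sketch of the meta-prompting flow. `generate(prompt)` stands in for any call
# to this model; it is a placeholder, not part of this repository.

def build_system_prompt(generate, description: str) -> str:
    """Expand a short persona/task description into a detailed system prompt."""
    request = f"Write a detailed system prompt for the following assistant: {description}"
    return generate(request).strip()

def configure_downstream_chat(generate, description: str, user_message: str) -> list[dict]:
    """Use the generated system prompt to configure another chat model."""
    system_prompt = build_system_prompt(generate, description)
    return [
        {"role": "system", "content": system_prompt},  # output of this model
        {"role": "user", "content": user_message},     # the end user's actual query
    ]
```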

How to Use

This repository contains the model in two GGUF quantizations:

  • *Q6_K.gguf
  • *Q8_0.gguf

Note: infinite generation loops have been observed with some llama.cpp versions. For GGUF inference, we recommend using the -no-cnv flag or ensuring your runner respects the <|im_end|> EOS token so generation terminates correctly.
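A minimal llama-cpp-python sketch, assuming the Q6_K file has already been downloaded locally (the filename below is illustrative, not the exact file name in this repo). Passing <|im_end|> as an explicit stop string is a guard for runners that do not pick up the EOS token from the GGUF metadata:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Illustrative path: point this at whichever quantization you downloaded.
llm = Llama(model_path="qwen3-4b-instruct-2507-sys-prompter-Q6_K.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "user",
         "content": "Write a system prompt for a patient Rust tutor aimed at beginners."},
    ],
    max_tokens=1024,
    stop=["<|im_end|>"],  # explicit stop string to prevent runaway generation
)
print(out["choices"][0]["message"]["content"])
```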

Dataset

The model was trained on finetune_dataset.json, which contains examples of user requests and corresponding detailed system prompts.
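For intuition, a record in such a dataset pairs a short request with the target system prompt. The field names below are hypothetical; the actual schema of finetune_dataset.json is not documented in this card.

```python
# Hypothetical record shape; real field names may differ.
example_record = {
    "user_request": "A system prompt for a friendly cooking assistant",
    "system_prompt": "You are a warm, encouraging cooking assistant. ...",
}
```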
