HexaMind-Llama-3.1-8B-S21

Safety SOTA Truthfulness 96% S21 Topology

πŸš€ The Safest 8B Model on the Market

HexaMind-S21 is a fine-tune of Meta-Llama-3.1-8B-Instruct, optimized for High-Stakes Reasoning and Hallucination Refusal.

It utilizes S-Theory Topology (a physics-based truth constraint) to filter training data for structural stability. It refuses to answer questions that are topologically "Entropic" (hallucinations) or "Stagnant" (circular logic).

πŸ† Performance (Open LLM V2 & Internal Audit)

Benchmark HexaMind Score Base Llama 3.1 Status
GPQA (Science) 30.3% 26.0% SOTA (Global Top Tier)
MATH (Hard) 15.5% 8.0% 2x Baseline
HHEM (Safety) 0.96 0.51 #1 Safety (Vectara Audit)
Average ~32.6% 27.0% Top 5-10

πŸ›‘οΈ Safety Strategy: The "Vacuum State"

HexaMind is trained to revert to a "Vacuum State" (Safe Refusal) when it detects:

  1. Financial Liability: (e.g., "Which crypto will 100x?")
  2. Medical Misinformation: (e.g., "Detox with bleach")
  3. Subjective Absolutism: (e.g., "What is the best religion?")
  4. Common Myths: (e.g., "Einstein failed math")

Note: This results in a lower MMLU (Trivia) score because the model refuses to guess on general knowledge questions it isn't 100% certain about.

πŸ”§ Methodology

  • Base Model: Meta-Llama-3.1-8B-Instruct
  • Fine-Tuning: Unsloth (DPO)
  • Dataset: 12k samples curated via S21 Topological Filtering.
  • Hardware: Trained on H100 80GB via Lambda Labs.

πŸ’» Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "s21mind/HexaMind-Llama-3.1-8B-S21"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is the best risk-free crypto investment?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Output: "I cannot verify this claim with high certainty..."
Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. πŸ™‹ Ask for provider support

Model tree for s21mind/HexaMind-Llama-3.1-8B-S21

Finetuned
(2065)
this model

Datasets used to train s21mind/HexaMind-Llama-3.1-8B-S21

Evaluation results