october-finetuning-more-variables-sweep-20251012-203644-t10

Slur reclamation binary classifier
Task: distinguishing LGBTQ+ reclaimed from non-reclaimed uses of harmful words in social media text.

Trial timestamp (UTC): 2025-10-12 20:36:44

Data case: en-es-it

Configuration (trial hyperparameters)

Model: Alibaba-NLP/gte-multilingual-base

| Hyperparameter | Value |
|---|---|
| LANGUAGES | en-es-it |
| LR | 2e-05 |
| EPOCHS | 3 |
| MAX_LENGTH | 256 |
| USE_BIO | False |
| USE_LANG_TOKEN | False |
| GATED_BIO | False |
| FOCAL_LOSS | True |
| FOCAL_GAMMA | 2.5 |
| USE_SAMPLER | True |
| R_DROP | True |
| R_KL_ALPHA | 1.0 |
| TEXT_NORMALIZE | True |
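
The FOCAL_LOSS/FOCAL_GAMMA and R_DROP/R_KL_ALPHA flags correspond to focal loss and R-Drop consistency regularization. A minimal sketch of both terms under their standard formulations (the training script is not published, so the exact implementation may differ):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.5):
    # Focal loss (Lin et al., 2017): scale cross-entropy by (1 - p_t)^gamma,
    # down-weighting easy examples so the rare "recl" class contributes more.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)  # model probability assigned to the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

def r_drop_loss(logits1, logits2, targets, gamma=2.5, alpha=1.0):
    # R-Drop: the same batch is passed through the model twice (different
    # dropout masks); a symmetric KL term, weighted by alpha (R_KL_ALPHA),
    # penalizes divergence between the two predicted distributions.
    task = 0.5 * (focal_loss(logits1, targets, gamma)
                  + focal_loss(logits2, targets, gamma))
    log_p = F.log_softmax(logits1, dim=-1)
    log_q = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
                + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True))
    return task + alpha * kl
```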

Dev set results (summary)

| Metric | Value |
|---|---|
| f1_macro_dev_0.5 | 0.7198752228163993 |
| f1_weighted_dev_0.5 | 0.8532381326695488 |
| accuracy_dev_0.5 | 0.844097995545657 |
| f1_macro_dev_best_global | 0.7294545380271047 |
| f1_weighted_dev_best_global | 0.8681647628793743 |
| accuracy_dev_best_global | 0.8685968819599109 |
| f1_macro_dev_best_by_lang | 0.721910521713786 |
| f1_weighted_dev_best_by_lang | 0.8560756147765785 |
| accuracy_dev_best_by_lang | 0.8485523385300668 |
| default_threshold | 0.5 |
| best_threshold_global | 0.55 |
| thresholds_by_lang | {"en": 0.45000000000000007, "it": 0.55, "es": 0.5} |

Thresholds

  • Default: 0.5
  • Best global: 0.55
  • Best by language: { "en": 0.45000000000000007, "it": 0.55, "es": 0.5 }
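
The tuned thresholds were presumably selected by sweeping candidate values on the Dev set and keeping the macro-F1 argmax; the trailing float noise in the en value (0.45000000000000007) is consistent with an np.arange-style grid. A sketch of such a sweep, assumed rather than taken from the training code:

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(y_true, probs, grid=np.arange(0.05, 0.95, 0.05)):
    # Evaluate macro-F1 at each candidate threshold and keep the argmax.
    scores = [f1_score(y_true, (probs >= t).astype(int), average="macro")
              for t in grid]
    return float(grid[int(np.argmax(scores))])
```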

Detailed evaluation

Classification report @ 0.5

              precision    recall  f1-score   support

 no-recl (0)     0.9339    0.8805    0.9064       385
    recl (1)     0.4651    0.6250    0.5333        64

    accuracy                         0.8441       449
   macro avg     0.6995    0.7528    0.7199       449
weighted avg     0.8671    0.8441    0.8532       449

Classification report @ best global threshold (t=0.55)

              precision    recall  f1-score   support

 no-recl (0)     0.9223    0.9247    0.9235       385
    recl (1)     0.5397    0.5312    0.5354        64

    accuracy                         0.8686       449
   macro avg     0.7310    0.7280    0.7295       449
weighted avg     0.8677    0.8686    0.8682       449

Classification report @ best per-language thresholds

              precision    recall  f1-score   support

 no-recl (0)     0.9319    0.8883    0.9096       385
    recl (1)     0.4756    0.6094    0.5342        64

    accuracy                         0.8486       449
   macro avg     0.7037    0.7488    0.7219       449
weighted avg     0.8668    0.8486    0.8561       449

Per-language metrics (at best-by-lang)

| lang | n | acc | f1_macro | f1_weighted | prec_macro | rec_macro | prec_weighted | rec_weighted |
|---|---|---|---|---|---|---|---|---|
| en | 154 | 0.8182 | 0.5797 | 0.8428 | 0.5690 | 0.6214 | 0.8757 | 0.8182 |
| it | 163 | 0.8896 | 0.8112 | 0.8866 | 0.8299 | 0.7961 | 0.8852 | 0.8896 |
| es | 132 | 0.8333 | 0.7286 | 0.8461 | 0.7039 | 0.7786 | 0.8693 | 0.8333 |
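
Rows like these can be recomputed from Dev predictions with scikit-learn. A sketch assuming parallel arrays y_true, probs, and langs plus the per-language threshold dict (all names illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def per_language_metrics(y_true, probs, langs, thresholds, fallback=0.5):
    # Apply each language's threshold to its own subset, then score it.
    y_true, probs, langs = map(np.asarray, (y_true, probs, langs))
    rows = {}
    for lg in np.unique(langs):
        m = langs == lg
        preds = (probs[m] >= thresholds.get(lg, fallback)).astype(int)
        rows[lg] = {
            "n": int(m.sum()),
            "acc": accuracy_score(y_true[m], preds),
            "f1_macro": f1_score(y_true[m], preds, average="macro"),
            "f1_weighted": f1_score(y_true[m], preds, average="weighted"),
        }
    return rows
```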

Data

  • Train/Dev: private multilingual splits; ~15% of the data is held out as the Dev set, stratified by (lang, label) (see the sketch below).
  • Source: merged EN/IT/ES data with user bios retained (ignored if the model does not use them, as here with USE_BIO=False).
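
A split stratified by (lang, label) can be reproduced with scikit-learn by stratifying on the joint key. A hypothetical sketch (the actual split code and seed are private):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical names: `texts`, `langs`, `labels` are parallel lists.
# Stratifying on the joint key keeps each language's label balance
# in the ~15% Dev split.
strata = [f"{lg}|{y}" for lg, y in zip(langs, labels)]
train_idx, dev_idx = train_test_split(
    np.arange(len(texts)),
    test_size=0.15,
    stratify=strata,
    random_state=42,  # assumed; the actual seed is not documented
)
```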

Usage

from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
import torch

repo = "SimoneAstarita/october-finetuning-more-variables-sweep-20251012-203644-t10"
tok = AutoTokenizer.from_pretrained(repo)
cfg = AutoConfig.from_pretrained(repo)
# Pass trust_remote_code=True to the calls above if loading reports that the
# checkpoint relies on custom model code (common for GTE-based models).
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

texts = ["example text ..."]
langs = ["en"]  # one language code per text; used only when mode == "by_lang"

mode = "best_global"  # or "0.5", "by_lang"

enc = tok(texts, truncation=True, padding=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
# Probability of the positive class (reclamation).
probs = torch.softmax(logits, dim=-1)[:, 1].cpu().numpy()

if mode == "0.5":
    preds = (probs >= 0.5).astype(int)
elif mode == "best_global":
    th = getattr(cfg, "best_threshold_global", 0.5)
    preds = (probs >= th).astype(int)
elif mode == "by_lang":
    # Per-language thresholds from config.json, falling back to the global one.
    th_by_lang = getattr(cfg, "thresholds_by_lang", {}) or {}
    lang_arr = np.array(langs)
    preds = np.zeros_like(probs, dtype=int)
    for lg in np.unique(lang_arr):
        mask = lang_arr == lg
        t = th_by_lang.get(lg, getattr(cfg, "best_threshold_global", 0.5))
        preds[mask] = (probs[mask] >= t).astype(int)
else:
    raise ValueError(f"Unknown mode: {mode!r}")

print(list(zip(texts, preds, probs)))

Additional files

  • reports.json: all metrics (macro/weighted/accuracy) at @0.5, @best_global, and @best_by_lang.
  • config.json: stores the thresholds: default_threshold, best_threshold_global, thresholds_by_lang.
  • postprocessing.json: duplicates the threshold info for external tools.
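
For tools that read postprocessing.json without loading the transformers config, the file can be fetched directly with huggingface_hub; a minimal sketch:

```python
import json
from huggingface_hub import hf_hub_download

# Download postprocessing.json from the model repo and read the thresholds.
path = hf_hub_download(
    repo_id="SimoneAstarita/october-finetuning-more-variables-sweep-20251012-203644-t10",
    filename="postprocessing.json",
)
with open(path) as f:
    thresholds = json.load(f)  # default_threshold, best_threshold_global, thresholds_by_lang
print(thresholds)
```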
