| discussion_title<br>stringlengths 15–149 | discussion_url<br>stringlengths 55–178 | discussion_topic_id<br>int64 11.3k–169k | discussion_category<br>int64 2–69 | discussion_created_at<br>stringdate 2021-11-01 15:54:32 – 2025-10-25 07:31:09 | thread<br>listlengths 3–20 | question<br>stringlengths 77–20.5k | solution<br>stringlengths 24–23.2k |
|---|---|---|---|---|---|---|---|
Cannot import name ‘Wav2Vec2Processor’
|
https://discuss.huggingface.co/t/cannot-import-name-wav2vec2processor/163992
| 163,992
| 9
|
2025-07-21T19:42:48.894000Z
|
[
{
"id": 234190,
"name": "Kausheya Roy",
"username": "rimoKR",
"avatar_template": "/user_avatar/discuss.huggingface.co/rimokr/{size}/51043_2.png",
"created_at": "2025-07-21T19:42:48.969Z",
"cooked": "<p>I am trying to use the <code>facebook/data2vec-audio-base-960h</code> model.<br>\nAs per their model card, this is how to load the model:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"> from transformers import Wav2Vec2Processor, Data2VecForCTC\n\n processor = Wav2Vec2Processor.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n model = Data2VecForCTC.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n</code></pre>\n<p>But I am getting this error:</p>\n<pre><code class=\"lang-auto\">ImportError Traceback (most recent call last)\n/tmp/ipython-input-11-2185350118.py in <cell line: 0>()\n----> 1 from transformers import Wav2Vec2Processor, Data2VecForCTC\n 2 \n 3 processor = Wav2Vec2Processor.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n 4 model = Data2VecForCTC.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n\nImportError: cannot import name 'Wav2Vec2Processor' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)\n</code></pre>\n<p>I looked up at stack-overflow: It suggested upgrading the Transformers version.<br>\nI did that :</p>\n<ol>\n<li>My current Transformers version is 4.53.2</li>\n<li>That did not fix. I even upgraded sentence-transformers to 5.0.0</li>\n<li>I restarted my session in google colab<br>\nNone of them worked.. even tried lowering the version of transformers, but It leads to further dependency conflicts.<br>\nPlz help.</li>\n</ol>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-21T19:42:48.969Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 146,
"reads": 6,
"readers_count": 5,
"score": 646.2,
"yours": false,
"topic_id": 163992,
"topic_slug": "cannot-import-name-wav2vec2processor",
"display_username": "Kausheya Roy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99310,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-import-name-wav2vec2processor/163992/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234223,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-22T02:08:39.792Z",
"cooked": "<p>It seems that <a href=\"https://github.com/huggingface/transformers/issues/16952\">the previous sample on the web was incorrect</a>, and now it works on my Colab.</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">!pip install -U transformers accelerate huggingface_hub[hf_xet]\n\n#from transformers import Wav2Vec2Processor, Data2VecForCTC\nfrom transformers import Wav2Vec2Processor, Data2VecAudioForCTC\n\nprocessor = Wav2Vec2Processor.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n#model = Data2VecForCTC.from_pretrained(\"facebook/data2vec-audio-base-960h\")\nmodel = Data2VecAudioForCTC.from_pretrained(\"facebook/data2vec-audio-base-960h\")\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-22T02:08:39.792Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 4,
"readers_count": 3,
"score": 35.8,
"yours": false,
"topic_id": 163992,
"topic_slug": "cannot-import-name-wav2vec2processor",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/issues/16952",
"internal": false,
"reflection": false,
"title": "cannot import name 'Data2VecForCTC' from 'transformers' · Issue #16952 · huggingface/transformers · GitHub",
"clicks": 14
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-import-name-wav2vec2processor/163992/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234388,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-22T14:08:56.176Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-22T14:08:56.176Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 5.4,
"yours": false,
"topic_id": 163992,
"topic_slug": "cannot-import-name-wav2vec2processor",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-import-name-wav2vec2processor/163992/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am trying to use the <code>facebook/data2vec-audio-base-960h</code> model.<br>
As per their model card, this is how to load the model:</p>
<pre data-code-wrap="python"><code class="lang-python"> from transformers import Wav2Vec2Processor, Data2VecForCTC
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
</code></pre>
<p>But I am getting this error:</p>
<pre><code class="lang-auto">ImportError Traceback (most recent call last)
/tmp/ipython-input-11-2185350118.py in <cell line: 0>()
----> 1 from transformers import Wav2Vec2Processor, Data2VecForCTC
2
3 processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
4 model = Data2VecForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
ImportError: cannot import name 'Wav2Vec2Processor' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
</code></pre>
<p>I looked this up on Stack Overflow, which suggested upgrading the Transformers version.<br>
I did that:</p>
<ol>
<li>My current Transformers version is 4.53.2.</li>
<li>That did not fix it. I even upgraded sentence-transformers to 5.0.0.</li>
<li>I restarted my session in Google Colab.<br>
None of these worked. I also tried downgrading transformers, but that leads to further dependency conflicts.<br>
Please help.</li>
</ol>
|
<p>It seems that <a href="https://github.com/huggingface/transformers/issues/16952">the previous sample on the web was incorrect</a>: the class is named <code>Data2VecAudioForCTC</code>, not <code>Data2VecForCTC</code>. With that change, the code works on my Colab.</p>
<pre data-code-wrap="py"><code class="lang-py">!pip install -U transformers accelerate huggingface_hub[hf_xet]
#from transformers import Wav2Vec2Processor, Data2VecForCTC
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
#model = Data2VecForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
</code></pre>
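<p>Once both objects load, transcription follows the standard CTC pattern. A minimal sketch (the audio file name here is hypothetical; the checkpoint expects 16 kHz mono input):</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
import librosa

# load audio as 16 kHz mono float32, the rate this checkpoint was trained on
speech, _ = librosa.load("sample.wav", sr=16000)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
</code></pre>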
|
How long does image generation with black-forest-labs/FLUX.1-dev take?
|
https://discuss.huggingface.co/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940
| 163,940
| 13
|
2025-07-21T10:56:50.269000Z
|
[
{
"id": 234126,
"name": "Dent Black",
"username": "RTQAQ",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/r/59ef9b/{size}.png",
"created_at": "2025-07-21T10:56:50.358Z",
"cooked": "<p>I run below code on a RTX 3090 with Ryzen 9 7900X and 128 GB RAM. So generating a single 512x512 image takes 20 minutes.<br>\nIs that normal? I read that it just should take seconds.</p>\n<pre><code class=\"lang-auto\">import torch\nfrom diffusers import FluxPipeline\nimport sys\nimport time\n\nstart = time.time()\nprint(\"CUDA available:\", torch.cuda.is_available())\nprint(\"Device:\", torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\")\n\npipe = FluxPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\nprompt = \"a wolf running\"\n\nimages_ = pipe(\n prompt,\n # width=1920,\n # height=1088,\n width=512,\n height=512,\n guidance_scale=3.5,\n num_inference_steps=50,\n max_sequence_length=512,\n generator=torch.Generator(device=\"cuda\").manual_seed(0)\n).images\n\nfor i, image in enumerate(images_):\n image.save(\"flux-dev\" + str(i) + \".png\")\n\nend = time.time()\nprint(f\"Generation took {time.time() - start:.2f} seconds\")\n</code></pre>\n<p>Cuda is 12.1, PYthon is 3.10<br>\nPackages (installed version | lastest version):</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th>GitPython</th>\n<th>3.1.44</th>\n<th>3.1.44</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>MarkupSafe</td>\n<td>2.1.5</td>\n<td>3.0.2</td>\n</tr>\n<tr>\n<td>PyYAML</td>\n<td>6.0.2</td>\n<td>6.0.2</td>\n</tr>\n<tr>\n<td>accelerate</td>\n<td>1.9.0</td>\n<td>1.9.0</td>\n</tr>\n<tr>\n<td>aiofiles</td>\n<td>23.2.1</td>\n<td>24.1.0</td>\n</tr>\n<tr>\n<td>altair</td>\n<td>5.5.0</td>\n<td>5.5.0</td>\n</tr>\n<tr>\n<td>annotated-types</td>\n<td>0.7.0</td>\n<td>0.7.0</td>\n</tr>\n<tr>\n<td>anyio</td>\n<td>4.9.0</td>\n<td>4.9.0</td>\n</tr>\n<tr>\n<td>attrs</td>\n<td>25.3.0</td>\n<td>25.3.0</td>\n</tr>\n<tr>\n<td>blinker</td>\n<td>1.9.0</td>\n<td>1.9.0</td>\n</tr>\n<tr>\n<td>cachetools</td>\n<td>6.1.0</td>\n<td>6.1.0</td>\n</tr>\n<tr>\n<td>certifi</td>\n<td>2025.7.14</td>\n<td>2025.7.14</td>\n</tr>\n<tr>\n<td>charset-normalizer</td>\n<td>3.4.2</td>\n<td>3.4.2</td>\n</tr>\n<tr>\n<td>click</td>\n<td>8.2.1</td>\n<td>8.2.1</td>\n</tr>\n<tr>\n<td>colorama</td>\n<td>0.4.6</td>\n<td>0.4.6</td>\n</tr>\n<tr>\n<td>diffusers</td>\n<td>0.34.0</td>\n<td>0.34.0</td>\n</tr>\n<tr>\n<td>einops</td>\n<td>0.8.1</td>\n<td>0.8.1</td>\n</tr>\n<tr>\n<td>exceptiongroup</td>\n<td>1.3.0</td>\n<td>1.3.0</td>\n</tr>\n<tr>\n<td>fastapi</td>\n<td>0.116.1</td>\n<td>0.116.1</td>\n</tr>\n<tr>\n<td>ffmpy</td>\n<td>0.6.0</td>\n<td>0.6.0</td>\n</tr>\n<tr>\n<td>filelock</td>\n<td>3.18.0</td>\n<td>3.18.0</td>\n</tr>\n<tr>\n<td>fire</td>\n<td>0.7.0</td>\n<td>0.7.0</td>\n</tr>\n<tr>\n<td>flux</td>\n<td>0.0.post58+g1371b2b</td>\n<td>1.3.5</td>\n</tr>\n<tr>\n<td>fsspec</td>\n<td>2025.7.0</td>\n<td>2025.7.0</td>\n</tr>\n<tr>\n<td>gitdb</td>\n<td>4.0.12</td>\n<td>4.0.12</td>\n</tr>\n<tr>\n<td>gradio</td>\n<td>5.13.2</td>\n<td>5.38.0</td>\n</tr>\n<tr>\n<td>gradio-client</td>\n<td>1.6.0</td>\n<td>1.11.0</td>\n</tr>\n<tr>\n<td>h11</td>\n<td>0.16.0</td>\n<td>0.16.0</td>\n</tr>\n<tr>\n<td>httpcore</td>\n<td>1.0.9</td>\n<td>1.0.9</td>\n</tr>\n<tr>\n<td>httpx</td>\n<td>0.28.1</td>\n<td>0.28.1</td>\n</tr>\n<tr>\n<td>huggingface-hub</td>\n<td>0.33.4</td>\n<td>0.33.4</td>\n</tr>\n<tr>\n<td>idna</td>\n<td>3.10</td>\n<td>3.10</td>\n</tr>\n<tr>\n<td>importlib-metadata</td>\n<td>8.7.0</td>\n<td>8.7.0</td>\n</tr>\n<tr>\n<td>invisible-watermark</td>\n<td>0.2.0</td>\n<td>0.2.0</td>\n</tr>\n<tr>\n<td>jinja2</td>\n<td>3.1.6</td>\n<td>3.1.6</td>\n</tr>\n<tr>\n<td>jsonschema</td>\n<td>4.25.0</td>\n<td>
4.25.0</td>\n</tr>\n<tr>\n<td>jsonschema-specifications</td>\n<td>2025.4.1</td>\n<td>2025.4.1</td>\n</tr>\n<tr>\n<td>markdown-it-py</td>\n<td>3.0.0</td>\n<td>3.0.0</td>\n</tr>\n<tr>\n<td>mdurl</td>\n<td>0.1.2</td>\n<td>0.1.2</td>\n</tr>\n<tr>\n<td>mpmath</td>\n<td>1.3.0</td>\n<td>1.3.0</td>\n</tr>\n<tr>\n<td>narwhals</td>\n<td>1.48.0</td>\n<td>1.48.0</td>\n</tr>\n<tr>\n<td>networkx</td>\n<td>3.4.2</td>\n<td>3.5</td>\n</tr>\n<tr>\n<td>numpy</td>\n<td>2.2.6</td>\n<td>2.3.1</td>\n</tr>\n<tr>\n<td>opencv-python</td>\n<td>4.12.0.88</td>\n<td>4.12.0.88</td>\n</tr>\n<tr>\n<td>orjson</td>\n<td>3.11.0</td>\n<td>3.11.0</td>\n</tr>\n<tr>\n<td>packaging</td>\n<td>25.0</td>\n<td>25.0</td>\n</tr>\n<tr>\n<td>pandas</td>\n<td>2.3.1</td>\n<td>2.3.1</td>\n</tr>\n<tr>\n<td>pillow</td>\n<td>11.3.0</td>\n<td>11.3.0</td>\n</tr>\n<tr>\n<td>pip</td>\n<td>25.1.1</td>\n<td>25.1.1</td>\n</tr>\n<tr>\n<td>protobuf</td>\n<td>6.31.1</td>\n<td>6.31.1</td>\n</tr>\n<tr>\n<td>psutil</td>\n<td>7.0.0</td>\n<td>7.0.0</td>\n</tr>\n<tr>\n<td>pyarrow</td>\n<td>21.0.0</td>\n<td>21.0.0</td>\n</tr>\n<tr>\n<td>pydantic</td>\n<td>2.11.7</td>\n<td>2.11.7</td>\n</tr>\n<tr>\n<td>pydantic-core</td>\n<td>2.33.2</td>\n<td></td>\n</tr>\n<tr>\n<td>pydeck</td>\n<td>0.9.1</td>\n<td>0.9.1</td>\n</tr>\n<tr>\n<td>pydub</td>\n<td>0.25.1</td>\n<td>0.25.1</td>\n</tr>\n<tr>\n<td>pygments</td>\n<td>2.19.2</td>\n<td>2.19.2</td>\n</tr>\n<tr>\n<td>python-dateutil</td>\n<td>2.9.0.post0</td>\n<td>2.9.0.post0</td>\n</tr>\n<tr>\n<td>python-multipart</td>\n<td>0.0.20</td>\n<td>0.0.20</td>\n</tr>\n<tr>\n<td>pytz</td>\n<td>2025.2</td>\n<td>2025.2</td>\n</tr>\n<tr>\n<td>pywavelets</td>\n<td>1.8.0</td>\n<td>1.8.0</td>\n</tr>\n<tr>\n<td>referencing</td>\n<td>0.36.2</td>\n<td>0.36.2</td>\n</tr>\n<tr>\n<td>regex</td>\n<td>2024.11.6</td>\n<td>2024.11.6</td>\n</tr>\n<tr>\n<td>requests</td>\n<td>2.32.4</td>\n<td>2.32.4</td>\n</tr>\n<tr>\n<td>rich</td>\n<td>14.0.0</td>\n<td>14.0.0</td>\n</tr>\n<tr>\n<td>rpds-py</td>\n<td>0.26.0</td>\n<td>0.26.0</td>\n</tr>\n<tr>\n<td>ruff</td>\n<td>0.6.8</td>\n<td>0.12.4</td>\n</tr>\n<tr>\n<td>safehttpx</td>\n<td>0.1.6</td>\n<td>0.1.6</td>\n</tr>\n<tr>\n<td>safetensors</td>\n<td>0.5.3</td>\n<td>0.5.3</td>\n</tr>\n<tr>\n<td>semantic-version</td>\n<td>2.10.0</td>\n<td>2.10.0</td>\n</tr>\n<tr>\n<td>sentencepiece</td>\n<td>0.2.0</td>\n<td>0.2.0</td>\n</tr>\n<tr>\n<td>setuptools</td>\n<td>57.4.0</td>\n<td>80.9.0</td>\n</tr>\n<tr>\n<td>shellingham</td>\n<td>1.5.4</td>\n<td>1.5.4</td>\n</tr>\n<tr>\n<td>six</td>\n<td>1.17.0</td>\n<td>1.17.0</td>\n</tr>\n<tr>\n<td>smmap</td>\n<td>5.0.2</td>\n<td>6.0.0</td>\n</tr>\n<tr>\n<td>sniffio</td>\n<td>1.3.1</td>\n<td>1.3.1</td>\n</tr>\n<tr>\n<td>starlette</td>\n<td>0.47.2</td>\n<td>0.47.2</td>\n</tr>\n<tr>\n<td>streamlit</td>\n<td>1.47.0</td>\n<td>1.47.0</td>\n</tr>\n<tr>\n<td>streamlit-drawable-canvas</td>\n<td>0.9.3</td>\n<td>0.9.3</td>\n</tr>\n<tr>\n<td>streamlit-keyup</td>\n<td>0.3.0</td>\n<td>0.3.0</td>\n</tr>\n<tr>\n<td>sympy</td>\n<td>1.13.1</td>\n<td>1.14.0</td>\n</tr>\n<tr>\n<td>tenacity</td>\n<td>9.1.2</td>\n<td>9.1.2</td>\n</tr>\n<tr>\n<td>termcolor</td>\n<td>3.1.0</td>\n<td>3.1.0</td>\n</tr>\n<tr>\n<td>tokenizers</td>\n<td>0.21.2</td>\n<td>0.21.2</td>\n</tr>\n<tr>\n<td>toml</td>\n<td>0.10.2</td>\n<td>0.10.2</td>\n</tr>\n<tr>\n<td>tomlkit</td>\n<td>0.13.3</td>\n<td>0.13.3</td>\n</tr>\n<tr>\n<td>torch</td>\n<td>2.5.1+cu121</td>\n<td>2.7.1</td>\n</tr>\n<tr>\n<td>torchaudio</td>\n<td>2.5.1+cu121</td>\n<td>2.7.1</td>\n</tr>\n<tr>\n<td>torchvision</td>\n<td>0.20.1+cu121</td>\n<td>0.22.1</td>\n</tr>\n<
tr>\n<td>tornado</td>\n<td>6.5.1</td>\n<td>6.5.1</td>\n</tr>\n<tr>\n<td>tqdm</td>\n<td>4.67.1</td>\n<td>4.67.1</td>\n</tr>\n<tr>\n<td>transformers</td>\n<td>4.53.2</td>\n<td>4.53.2</td>\n</tr>\n<tr>\n<td>typer</td>\n<td>0.16.0</td>\n<td>0.16.0</td>\n</tr>\n<tr>\n<td>typing-extensions</td>\n<td>4.14.1</td>\n<td>4.14.1</td>\n</tr>\n<tr>\n<td>typing-inspection</td>\n<td>0.4.1</td>\n<td>0.4.1</td>\n</tr>\n<tr>\n<td>tzdata</td>\n<td>2025.2</td>\n<td>2025.2</td>\n</tr>\n<tr>\n<td>urllib3</td>\n<td>2.5.0</td>\n<td>2.5.0</td>\n</tr>\n<tr>\n<td>uvicorn</td>\n<td>0.35.0</td>\n<td>0.35.0</td>\n</tr>\n<tr>\n<td>watchdog</td>\n<td>6.0.0</td>\n<td>6.0.0</td>\n</tr>\n<tr>\n<td>websockets</td>\n<td>14.2</td>\n<td>15.0.1</td>\n</tr>\n<tr>\n<td>zipp</td>\n<td>3.23.0</td>\n<td>3.23.0</td>\n</tr>\n</tbody>\n</table>\n</div>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-21T10:57:48.991Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 30,
"reads": 5,
"readers_count": 4,
"score": 161,
"yours": false,
"topic_id": 163940,
"topic_slug": "how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take",
"display_username": "Dent Black",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99930,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234132,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-21T11:50:18.479Z",
"cooked": "<blockquote>\n<p>on a RTX 3090 with Ryzen 9 7900X and 128 GB RAM. So generating a single 512x512 image takes 20 minutes.<br>\nIs that normal?</p>\n</blockquote>\n<p>Yeah. With that code, FLUX is loaded into VRAM or RAM in a 16-bit state without quantization, requiring approximately 36 GB or more. Since VRAM is insufficient, it cannot be utilized effectively, resulting in lengthy inference times. Therefore,</p>\n<ol>\n<li><a href=\"https://huggingface.co/docs/diffusers/main/en/optimization/memory\">Reduce VRAM consumption by quantizing</a> and store the entire model in VRAM to accelerate processing</li>\n<li>Then optimize performance using other methods</li>\n</ol>\n<p>Quantization is at least necessary. For 4-bit quantization methods, I recommend BitsAndBytes for ease of use or TorchAO for speed.<br>\n<a href=\"https://github.com/huggingface/diffusers/pull/9453\">While there were various limitations when using <code>LoRA</code> in the past, these should be largely resolved now</a>.</p>\n<p>Optimization methods for FLUX:</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/6/e/6e62b357f1a0d21f5fcdcabdbe701fdfddfa6a0d.webp\" class=\"site-icon\" data-dominant-color=\"EE4C2C\" width=\"32\" height=\"32\">\n\n <a href=\"https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/\" target=\"_blank\" rel=\"noopener\">pytorch.org</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/\" target=\"_blank\" rel=\"noopener\">torch.compile and Diffusers: A Hands-On Guide to Peak Performance – PyTorch</a></h3>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/diffusers/main/en/optimization/para_attn\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/diffusers/main/en/optimization/para_attn\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/2/725f3ba0d5cc1761eed1c544dd7101393d1e4909_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F7F5EF\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/diffusers/main/en/optimization/para_attn\" target=\"_blank\" rel=\"noopener\">ParaAttention</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/blog/diffusers-quantization\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/blog/diffusers-quantization\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/345;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/f/1fecbf363fdf0857bde88d724aa6c838038e64e7_2_690x345.png\" class=\"thumbnail\" 
data-dominant-color=\"2F1DD2\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://huggingface.co/blog/diffusers-quantization\" target=\"_blank\" rel=\"noopener\">Exploring Quantization Backends in Diffusers</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-21T11:50:18.479Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 163940,
"topic_slug": "how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/diffusers/main/en/optimization/memory",
"internal": false,
"reflection": false,
"title": "Reduce memory usage",
"clicks": 3
},
{
"url": "https://huggingface.co/blog/diffusers-quantization",
"internal": false,
"reflection": false,
"title": "Exploring Quantization Backends in Diffusers",
"clicks": 2
},
{
"url": "https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
},
{
"url": "https://huggingface.co/docs/diffusers/main/en/optimization/para_attn",
"internal": false,
"reflection": false,
"title": "ParaAttention",
"clicks": 0
},
{
"url": "https://github.com/huggingface/diffusers/pull/9453",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234174,
"name": "Dent Black",
"username": "RTQAQ",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/r/59ef9b/{size}.png",
"created_at": "2025-07-21T17:08:50.224Z",
"cooked": "<p>Thanks for the answer. I could reduce the runtime from 20 min to 2min.<br>\nDo you see any possible improvements with my code?<br>\nI adjusted the code to:</p>\n<pre><code class=\"lang-auto\">import torch\nfrom diffusers import FluxPipeline, DiffusionPipeline\nimport time, os\nfrom diffusers.quantizers import PipelineQuantizationConfig\nfrom datetime import datetime\n\nstart = time.time()\n\ntorch._dynamo.config.capture_dynamic_output_shape_ops = True\n\n# quantize\npipeline_quant_config = PipelineQuantizationConfig(\n quant_backend=\"bitsandbytes_4bit\",\n quant_kwargs={\"load_in_4bit\": True, \"bnb_4bit_quant_type\": \"nf4\", \"bnb_4bit_compute_dtype\": torch.bfloat16},\n components_to_quantize=[\"transformer\", \"text_encoder_2\"],\n)\npipeline = DiffusionPipeline.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n quantization_config=pipeline_quant_config,\n torch_dtype=torch.bfloat16,\n).to(\"cuda\")\n\n# compile\npipeline.transformer.to(memory_format=torch.channels_last)\n\nprompt = \"a wolf running\" \n\nimages_ = pipeline(\n prompt,\n width=1920,\n height=1088,\n # width=64,\n # height=64,\n guidance_scale=3.5,\n num_inference_steps=50,\n max_sequence_length=512,\n generator=torch.Generator(device=\"cuda\").manual_seed(0)).images\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-21T17:08:50.224Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 20.6,
"yours": false,
"topic_id": 163940,
"topic_slug": "how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take",
"display_username": "Dent Black",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99930,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 234207,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-21T23:40:01.842Z",
"cooked": "<p>There are no major issues, so I think you can proceed by adding optimization methods based on that.</p>\n<p>The specific optimization methods available will <em>vary depending on the OS and GPU</em>, so there’s no one-size-fits-all solution. For example, on Windows, there are a few methods that don’t work outside of WSL2…</p>\n<p>Since the model is FLUX for this project, I recommend the ParaAttention-based optimization mentioned earlier. That alone can significantly speed things up even with a single GPU.</p>\n<p>Additionally, combining TorchAO with torch.compile can also improve performance. TorchAO is PyTorch’s official quantization method, so it’s generally fast. However, it’s still a bit unstable in terms of behavior, and selecting the right quantization method requires some knowledge, so it may require some trial and error.<img src=\"https://emoji.discourse-cdn.com/apple/sweat_smile.png?v=14\" title=\":sweat_smile:\" class=\"emoji\" alt=\":sweat_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">import torch\nfrom diffusers import FluxPipeline, DiffusionPipeline\nimport time, os\nfrom diffusers.quantizers import PipelineQuantizationConfig\nfrom datetime import datetime\n\nstart = time.time()\n\ntorch._dynamo.config.capture_dynamic_output_shape_ops = True\n\n# quantize\npipeline_quant_config = PipelineQuantizationConfig(\n quant_backend=\"bitsandbytes_4bit\",\n quant_kwargs={\"load_in_4bit\": True, \"bnb_4bit_quant_type\": \"nf4\", \"bnb_4bit_compute_dtype\": torch.bfloat16},\n components_to_quantize=[\"transformer\", \"text_encoder_2\"],\n)\npipeline = DiffusionPipeline.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n quantization_config=pipeline_quant_config,\n torch_dtype=torch.bfloat16,\n).to(\"cuda\")\n\n# compile\npipeline.transformer.to(memory_format=torch.channels_last)\npipeline.enable_model_cpu_offload() # more memory efficient way\n#pipeline.transformer.compile_repeated_blocks(fullgraph=True, dynamic=True) # if you want to compile it\n\nprompt = \"a wolf running\" \n\nimages_ = pipeline(\n prompt,\n width=1920,\n height=1088,\n # width=64,\n # height=64,\n guidance_scale=3.5,\n num_inference_steps=50,\n max_sequence_length=512,\n generator=torch.Generator(device=\"cuda\").manual_seed(0)).images\n</code></pre>\n<h3><a name=\"p-234207-optimization-guides-other-than-those-listed-above-1\" class=\"anchor\" href=\"#p-234207-optimization-guides-other-than-those-listed-above-1\"></a>Optimization guides other than those listed above</h3>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/fp16\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/fp16\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/2/725f3ba0d5cc1761eed1c544dd7101393d1e4909_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F7F5EF\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/fp16\" target=\"_blank\" rel=\"noopener\">Accelerate inference</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div 
style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/speed-memory-optims?offloading=model%2BCPU%2Boffloading\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/speed-memory-optims?offloading=model%2BCPU%2Boffloading\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/2/725f3ba0d5cc1761eed1c544dd7101393d1e4909_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F7F5EF\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/speed-memory-optims?offloading=model%2BCPU%2Boffloading\" target=\"_blank\" rel=\"noopener\">Compile and offloading quantized models</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<p><a href=\"https://github.com/sayakpaul/diffusers-torchao\" class=\"inline-onebox\">GitHub - sayakpaul/diffusers-torchao: End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training).</a> (The method you are using for quantization is the new specification for Diffusers, but this document can be useful as a reference for benchmarking and other considerations)</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-21T23:40:55.036Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 163940,
"topic_slug": "how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/sayakpaul/diffusers-torchao",
"internal": false,
"reflection": false,
"title": "GitHub - sayakpaul/diffusers-torchao: End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8 training).",
"clicks": 0
},
{
"url": "https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/fp16",
"internal": false,
"reflection": false,
"title": "Accelerate inference",
"clicks": 0
},
{
"url": "https://huggingface.co/docs/diffusers/v0.34.0/en/optimization/speed-memory-optims?offloading=model%2BCPU%2Boffloading",
"internal": false,
"reflection": false,
"title": "Compile and offloading quantized models",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234359,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-22T11:40:53.070Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-07-22T11:40:53.070Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 163940,
"topic_slug": "how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-long-does-image-generation-with-black-forest-labs-flux-1-dev-take/163940/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I run the code below on an RTX 3090 with a Ryzen 9 7900X and 128 GB RAM, and generating a single 512x512 image takes 20 minutes.<br>
Is that normal? I read that it should only take seconds.</p>
<pre><code class="lang-auto">import torch
from diffusers import FluxPipeline
import sys
import time
start = time.time()
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU")
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "a wolf running"
images_ = pipe(
prompt,
# width=1920,
# height=1088,
width=512,
height=512,
guidance_scale=3.5,
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator(device="cuda").manual_seed(0)
).images
for i, image in enumerate(images_):
image.save("flux-dev" + str(i) + ".png")
end = time.time()
print(f"Generation took {time.time() - start:.2f} seconds")
</code></pre>
<p>CUDA is 12.1, Python is 3.10.<br>
Packages (installed version | latest version):</p>
<div class="md-table">
<table>
<thead>
<tr>
<th>Package</th>
<th>Installed</th>
<th>Latest</th>
</tr>
</thead>
<tbody>
<tr>
<td>GitPython</td>
<td>3.1.44</td>
<td>3.1.44</td>
</tr>
<tr>
<td>MarkupSafe</td>
<td>2.1.5</td>
<td>3.0.2</td>
</tr>
<tr>
<td>PyYAML</td>
<td>6.0.2</td>
<td>6.0.2</td>
</tr>
<tr>
<td>accelerate</td>
<td>1.9.0</td>
<td>1.9.0</td>
</tr>
<tr>
<td>aiofiles</td>
<td>23.2.1</td>
<td>24.1.0</td>
</tr>
<tr>
<td>altair</td>
<td>5.5.0</td>
<td>5.5.0</td>
</tr>
<tr>
<td>annotated-types</td>
<td>0.7.0</td>
<td>0.7.0</td>
</tr>
<tr>
<td>anyio</td>
<td>4.9.0</td>
<td>4.9.0</td>
</tr>
<tr>
<td>attrs</td>
<td>25.3.0</td>
<td>25.3.0</td>
</tr>
<tr>
<td>blinker</td>
<td>1.9.0</td>
<td>1.9.0</td>
</tr>
<tr>
<td>cachetools</td>
<td>6.1.0</td>
<td>6.1.0</td>
</tr>
<tr>
<td>certifi</td>
<td>2025.7.14</td>
<td>2025.7.14</td>
</tr>
<tr>
<td>charset-normalizer</td>
<td>3.4.2</td>
<td>3.4.2</td>
</tr>
<tr>
<td>click</td>
<td>8.2.1</td>
<td>8.2.1</td>
</tr>
<tr>
<td>colorama</td>
<td>0.4.6</td>
<td>0.4.6</td>
</tr>
<tr>
<td>diffusers</td>
<td>0.34.0</td>
<td>0.34.0</td>
</tr>
<tr>
<td>einops</td>
<td>0.8.1</td>
<td>0.8.1</td>
</tr>
<tr>
<td>exceptiongroup</td>
<td>1.3.0</td>
<td>1.3.0</td>
</tr>
<tr>
<td>fastapi</td>
<td>0.116.1</td>
<td>0.116.1</td>
</tr>
<tr>
<td>ffmpy</td>
<td>0.6.0</td>
<td>0.6.0</td>
</tr>
<tr>
<td>filelock</td>
<td>3.18.0</td>
<td>3.18.0</td>
</tr>
<tr>
<td>fire</td>
<td>0.7.0</td>
<td>0.7.0</td>
</tr>
<tr>
<td>flux</td>
<td>0.0.post58+g1371b2b</td>
<td>1.3.5</td>
</tr>
<tr>
<td>fsspec</td>
<td>2025.7.0</td>
<td>2025.7.0</td>
</tr>
<tr>
<td>gitdb</td>
<td>4.0.12</td>
<td>4.0.12</td>
</tr>
<tr>
<td>gradio</td>
<td>5.13.2</td>
<td>5.38.0</td>
</tr>
<tr>
<td>gradio-client</td>
<td>1.6.0</td>
<td>1.11.0</td>
</tr>
<tr>
<td>h11</td>
<td>0.16.0</td>
<td>0.16.0</td>
</tr>
<tr>
<td>httpcore</td>
<td>1.0.9</td>
<td>1.0.9</td>
</tr>
<tr>
<td>httpx</td>
<td>0.28.1</td>
<td>0.28.1</td>
</tr>
<tr>
<td>huggingface-hub</td>
<td>0.33.4</td>
<td>0.33.4</td>
</tr>
<tr>
<td>idna</td>
<td>3.10</td>
<td>3.10</td>
</tr>
<tr>
<td>importlib-metadata</td>
<td>8.7.0</td>
<td>8.7.0</td>
</tr>
<tr>
<td>invisible-watermark</td>
<td>0.2.0</td>
<td>0.2.0</td>
</tr>
<tr>
<td>jinja2</td>
<td>3.1.6</td>
<td>3.1.6</td>
</tr>
<tr>
<td>jsonschema</td>
<td>4.25.0</td>
<td>4.25.0</td>
</tr>
<tr>
<td>jsonschema-specifications</td>
<td>2025.4.1</td>
<td>2025.4.1</td>
</tr>
<tr>
<td>markdown-it-py</td>
<td>3.0.0</td>
<td>3.0.0</td>
</tr>
<tr>
<td>mdurl</td>
<td>0.1.2</td>
<td>0.1.2</td>
</tr>
<tr>
<td>mpmath</td>
<td>1.3.0</td>
<td>1.3.0</td>
</tr>
<tr>
<td>narwhals</td>
<td>1.48.0</td>
<td>1.48.0</td>
</tr>
<tr>
<td>networkx</td>
<td>3.4.2</td>
<td>3.5</td>
</tr>
<tr>
<td>numpy</td>
<td>2.2.6</td>
<td>2.3.1</td>
</tr>
<tr>
<td>opencv-python</td>
<td>4.12.0.88</td>
<td>4.12.0.88</td>
</tr>
<tr>
<td>orjson</td>
<td>3.11.0</td>
<td>3.11.0</td>
</tr>
<tr>
<td>packaging</td>
<td>25.0</td>
<td>25.0</td>
</tr>
<tr>
<td>pandas</td>
<td>2.3.1</td>
<td>2.3.1</td>
</tr>
<tr>
<td>pillow</td>
<td>11.3.0</td>
<td>11.3.0</td>
</tr>
<tr>
<td>pip</td>
<td>25.1.1</td>
<td>25.1.1</td>
</tr>
<tr>
<td>protobuf</td>
<td>6.31.1</td>
<td>6.31.1</td>
</tr>
<tr>
<td>psutil</td>
<td>7.0.0</td>
<td>7.0.0</td>
</tr>
<tr>
<td>pyarrow</td>
<td>21.0.0</td>
<td>21.0.0</td>
</tr>
<tr>
<td>pydantic</td>
<td>2.11.7</td>
<td>2.11.7</td>
</tr>
<tr>
<td>pydantic-core</td>
<td>2.33.2</td>
<td></td>
</tr>
<tr>
<td>pydeck</td>
<td>0.9.1</td>
<td>0.9.1</td>
</tr>
<tr>
<td>pydub</td>
<td>0.25.1</td>
<td>0.25.1</td>
</tr>
<tr>
<td>pygments</td>
<td>2.19.2</td>
<td>2.19.2</td>
</tr>
<tr>
<td>python-dateutil</td>
<td>2.9.0.post0</td>
<td>2.9.0.post0</td>
</tr>
<tr>
<td>python-multipart</td>
<td>0.0.20</td>
<td>0.0.20</td>
</tr>
<tr>
<td>pytz</td>
<td>2025.2</td>
<td>2025.2</td>
</tr>
<tr>
<td>pywavelets</td>
<td>1.8.0</td>
<td>1.8.0</td>
</tr>
<tr>
<td>referencing</td>
<td>0.36.2</td>
<td>0.36.2</td>
</tr>
<tr>
<td>regex</td>
<td>2024.11.6</td>
<td>2024.11.6</td>
</tr>
<tr>
<td>requests</td>
<td>2.32.4</td>
<td>2.32.4</td>
</tr>
<tr>
<td>rich</td>
<td>14.0.0</td>
<td>14.0.0</td>
</tr>
<tr>
<td>rpds-py</td>
<td>0.26.0</td>
<td>0.26.0</td>
</tr>
<tr>
<td>ruff</td>
<td>0.6.8</td>
<td>0.12.4</td>
</tr>
<tr>
<td>safehttpx</td>
<td>0.1.6</td>
<td>0.1.6</td>
</tr>
<tr>
<td>safetensors</td>
<td>0.5.3</td>
<td>0.5.3</td>
</tr>
<tr>
<td>semantic-version</td>
<td>2.10.0</td>
<td>2.10.0</td>
</tr>
<tr>
<td>sentencepiece</td>
<td>0.2.0</td>
<td>0.2.0</td>
</tr>
<tr>
<td>setuptools</td>
<td>57.4.0</td>
<td>80.9.0</td>
</tr>
<tr>
<td>shellingham</td>
<td>1.5.4</td>
<td>1.5.4</td>
</tr>
<tr>
<td>six</td>
<td>1.17.0</td>
<td>1.17.0</td>
</tr>
<tr>
<td>smmap</td>
<td>5.0.2</td>
<td>6.0.0</td>
</tr>
<tr>
<td>sniffio</td>
<td>1.3.1</td>
<td>1.3.1</td>
</tr>
<tr>
<td>starlette</td>
<td>0.47.2</td>
<td>0.47.2</td>
</tr>
<tr>
<td>streamlit</td>
<td>1.47.0</td>
<td>1.47.0</td>
</tr>
<tr>
<td>streamlit-drawable-canvas</td>
<td>0.9.3</td>
<td>0.9.3</td>
</tr>
<tr>
<td>streamlit-keyup</td>
<td>0.3.0</td>
<td>0.3.0</td>
</tr>
<tr>
<td>sympy</td>
<td>1.13.1</td>
<td>1.14.0</td>
</tr>
<tr>
<td>tenacity</td>
<td>9.1.2</td>
<td>9.1.2</td>
</tr>
<tr>
<td>termcolor</td>
<td>3.1.0</td>
<td>3.1.0</td>
</tr>
<tr>
<td>tokenizers</td>
<td>0.21.2</td>
<td>0.21.2</td>
</tr>
<tr>
<td>toml</td>
<td>0.10.2</td>
<td>0.10.2</td>
</tr>
<tr>
<td>tomlkit</td>
<td>0.13.3</td>
<td>0.13.3</td>
</tr>
<tr>
<td>torch</td>
<td>2.5.1+cu121</td>
<td>2.7.1</td>
</tr>
<tr>
<td>torchaudio</td>
<td>2.5.1+cu121</td>
<td>2.7.1</td>
</tr>
<tr>
<td>torchvision</td>
<td>0.20.1+cu121</td>
<td>0.22.1</td>
</tr>
<tr>
<td>tornado</td>
<td>6.5.1</td>
<td>6.5.1</td>
</tr>
<tr>
<td>tqdm</td>
<td>4.67.1</td>
<td>4.67.1</td>
</tr>
<tr>
<td>transformers</td>
<td>4.53.2</td>
<td>4.53.2</td>
</tr>
<tr>
<td>typer</td>
<td>0.16.0</td>
<td>0.16.0</td>
</tr>
<tr>
<td>typing-extensions</td>
<td>4.14.1</td>
<td>4.14.1</td>
</tr>
<tr>
<td>typing-inspection</td>
<td>0.4.1</td>
<td>0.4.1</td>
</tr>
<tr>
<td>tzdata</td>
<td>2025.2</td>
<td>2025.2</td>
</tr>
<tr>
<td>urllib3</td>
<td>2.5.0</td>
<td>2.5.0</td>
</tr>
<tr>
<td>uvicorn</td>
<td>0.35.0</td>
<td>0.35.0</td>
</tr>
<tr>
<td>watchdog</td>
<td>6.0.0</td>
<td>6.0.0</td>
</tr>
<tr>
<td>websockets</td>
<td>14.2</td>
<td>15.0.1</td>
</tr>
<tr>
<td>zipp</td>
<td>3.23.0</td>
<td>3.23.0</td>
</tr>
</tbody>
</table>
</div>
|
<blockquote>
<p>on an RTX 3090 with a Ryzen 9 7900X and 128 GB RAM, generating a single 512x512 image takes 20 minutes.<br>
Is that normal?</p>
</blockquote>
<p>Yeah. With that code, FLUX is loaded into VRAM or RAM in 16-bit precision without quantization, requiring approximately 36 GB or more. Since VRAM is insufficient, the GPU cannot be utilized effectively, resulting in lengthy inference times. Therefore:</p>
<ol>
<li><a href="https://huggingface.co/docs/diffusers/main/en/optimization/memory">Reduce VRAM consumption by quantizing</a> and store the entire model in VRAM to accelerate processing</li>
<li>Then optimize performance using other methods</li>
</ol>
<p>Quantization is necessary at a minimum. Among 4-bit quantization methods, I recommend BitsAndBytes for ease of use or TorchAO for speed.<br>
<a href="https://github.com/huggingface/diffusers/pull/9453">While there were various limitations when using <code>LoRA</code> with quantized models in the past, these should be largely resolved now</a>.</p>
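<p>For concreteness, here is a minimal sketch of that quantized setup, using the <code>PipelineQuantizationConfig</code> API from diffusers v0.34 (it assumes <code>bitsandbytes</code> is installed; the output file name is illustrative):</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quantize the flow transformer and the large T5 text encoder to 4-bit NF4;
# this shrinks the footprint enough to keep the whole pipeline in 24 GB of VRAM
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4",
                  "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a wolf running",
    width=512,
    height=512,
    guidance_scale=3.5,
    num_inference_steps=50,
    generator=torch.Generator(device="cuda").manual_seed(0),
).images[0]
image.save("flux-dev-nf4.png")
</code></pre>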
<p>Optimization methods for FLUX:</p>
<ul>
<li><a href="https://pytorch.org/blog/torch-compile-and-diffusers-a-hands-on-guide-to-peak-performance/" target="_blank" rel="noopener">torch.compile and Diffusers: A Hands-On Guide to Peak Performance – PyTorch</a></li>
<li><a href="https://huggingface.co/docs/diffusers/main/en/optimization/para_attn" target="_blank" rel="noopener">ParaAttention</a></li>
<li><a href="https://huggingface.co/blog/diffusers-quantization" target="_blank" rel="noopener">Exploring Quantization Backends in Diffusers</a></li>
</ul>
|
Open port for space to connect to PostgreSQL
|
https://discuss.huggingface.co/t/open-port-for-space-to-connect-to-postgresql/29938
| 29,938
| 24
|
2023-01-18T09:09:42.252000Z
|
[
{
"id": 55116,
"name": null,
"username": "anon86412018",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png",
"created_at": "2023-01-18T09:09:42.333Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/chris-rannou\">@chris-rannou</a>,</p>\n<p>Could you open the port <code>5432</code> for this space: <a href=\"https://huggingface.co/spaces/vnghia/defi-ai-2022\" class=\"inline-onebox\">Defi Ai 2022 - a Hugging Face Space by vnghia</a> as I need to connect to a PostgreSQL database ?</p>\n<p>Thank you very much !</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-18T09:09:42.333Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1239,
"reads": 67,
"readers_count": 66,
"score": 6193.4,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/vnghia/defi-ai-2022",
"internal": false,
"reflection": false,
"title": "Defi Ai 2022 - a Hugging Face Space by vnghia",
"clicks": 47
},
{
"url": "https://discuss.huggingface.co/t/open-port-9243-on-spaces-to-connect-to-elasticsearch/38699",
"internal": true,
"reflection": true,
"title": "Open Port 9243 on Spaces to Connect to ElasticSearch",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/gprc-on-spaces/152803/3",
"internal": true,
"reflection": true,
"title": "gPRC on Spaces 🥹",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/problem-summary-hugging-face-space-running-but-line-webhook-verification-fails-with-no-logs/158468/2",
"internal": true,
"reflection": true,
"title": "Problem Summary: Hugging Face Space Running, but Line Webhook Verification Fails with No Logs",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14210,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/1",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55140,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-01-18T15:56:29.757Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a> are you sure your DB service is running at <code>34.155.175.170:5432</code>? if you’re trying to access the DB from space, you don’t need that port to be open, however on your Space log it states timeout trying to reach your db server</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-18T15:56:29.757Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 66,
"readers_count": 65,
"score": 23.2,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55141,
"name": null,
"username": "anon86412018",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png",
"created_at": "2023-01-18T16:13:59.033Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/radames\">@radames</a>, I am quite sure my DB service is running at <code>34.155.175.170:5432</code> because the same code works on my machine. It is a Google Cloud SQL instance (I already opened the DB to every IP and port by <code>0.0.0.0/0</code> on GCP side), maybe that is the reason why I have this error ?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-18T16:13:59.033Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 62,
"readers_count": 61,
"score": 42.4,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14210,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55152,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-01-18T19:29:57.267Z",
"cooked": "<p>ok you’re right, you might need outgoing port access, currently only 80 and 443, we’ll get back to you soon.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-18T19:29:57.267Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 60,
"readers_count": 59,
"score": 32,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 14210,
"username": "anon86412018",
"name": null,
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 55227,
"name": "Christophe Rannou",
"username": "chris-rannou",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/7feea3/{size}.png",
"created_at": "2023-01-19T15:42:29.545Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a>,</p>\n<p>Port 5432 is now open.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-19T15:42:29.545Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 58,
"readers_count": 57,
"score": 61.6,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Christophe Rannou",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 6211,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/5",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55241,
"name": null,
"username": "anon86412018",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png",
"created_at": "2023-01-19T19:13:27.400Z",
"cooked": "<p>hmmm, unfortuntately, I still can not access to my DB instance. I also add a command to check if the DB is ready by <code>pg_isready</code>. And I found that when building the image, the connection is fine, but it failed while the space is running.</p>\n<p>You can see the log here: <a href=\"https://huggingface.co/spaces/vnghia/defi-ai-2022?logs=build\" class=\"inline-onebox\">Defi Ai 2022 - a Hugging Face Space by vnghia</a></p>\n<p>Do the port need to be opened twice for building and running or there is something else ?</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-19T19:13:27.400Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 56,
"readers_count": 55,
"score": 21.2,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/vnghia/defi-ai-2022?logs=build",
"internal": false,
"reflection": false,
"title": "Defi Ai 2022 - a Hugging Face Space by vnghia",
"clicks": 11
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14210,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55259,
"name": "Hyoung-Kyu Song",
"username": "deepkyu",
"avatar_template": "/user_avatar/discuss.huggingface.co/deepkyu/{size}/19615_2.png",
"created_at": "2023-01-20T04:56:13.139Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a> ,</p>\n<p>I had a similar issue when integrating my Hugging Face Space with my AWS instance.<br>\nI later found that Hugging Face Space only approves for the privileged port, which is below 1024.<br>\nI think this is for security reason, and I suggest that you change your SQL server port open with privileged port.</p>\n<p>For now, I switched the service port to 80, but I remembered that it is fine if the port number is below 1024.</p>\n<p>Ref for my previous issue:</p><aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"14468\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img loading=\"lazy\" alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/t/e495f1/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/is-there-a-way-to-call-external-grpc-service/14468\">Is there a way to call external gRPC service?</a> <a class=\"badge-category__wrapper \" href=\"/c/spaces/24\"><span data-category-id=\"24\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category to ask any questions about Spaces or to share your work.\"><span class=\"badge-category__name\">Spaces</span></span></a>\n </div>\n <blockquote>\n I was planning to deploy a demo on Huggingface Space, but ran into a bit of an issue. \nSo, my demo partially depends on a gRPC service that I have deployed on an AWS instance. When I tried test running it, it just timed out with “Failed to pick subchannel” so I am guessing that there is an issue when trying to call a remote gRPC service from Huggingface Space. When I tested my demo by having the same setup in my local computer as I did in Huggingface Space, I had no issues. I also checked to see…\n </blockquote>\n</aside>\n",
"post_number": 7,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T04:57:23.852Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 20,
"reads": 51,
"readers_count": 50,
"score": 110.2,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Hyoung-Kyu Song",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/is-there-a-way-to-call-external-grpc-service/14468",
"internal": true,
"reflection": false,
"title": "Is there a way to call external gRPC service?",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8000,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55283,
"name": null,
"username": "anon86412018",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png",
"created_at": "2023-01-20T10:49:14.149Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/deepkyu\">@deepkyu</a> I dont think so because <a class=\"mention\" href=\"/u/chris-rannou\">@chris-rannou</a> has already opened the port and my code can connect to the database while building the Docker image but not while running. I am suspecting there are some bugs with the Docker space <img src=\"https://emoji.discourse-cdn.com/apple/confused.png?v=12\" title=\":confused:\" class=\"emoji\" alt=\":confused:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 8,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T10:49:14.149Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 40,
"readers_count": 39,
"score": 8,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14210,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55297,
"name": "Hyoung-Kyu Song",
"username": "deepkyu",
"avatar_template": "/user_avatar/discuss.huggingface.co/deepkyu/{size}/19615_2.png",
"created_at": "2023-01-20T13:49:37.288Z",
"cooked": "<p><a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a><br>\nOh I see. that’s also one of weird situations…</p>\n<p>From my experience, I concluded that there were some outbound policies in Hugging Face Space server which blocks unprivileged ports. At that time, my docker container at my AWS instance communicates well from other servers’ request except the HF Space.</p>\n<p>I’m sorry for not being helpful tho.<br>\nHope it works out <img src=\"https://emoji.discourse-cdn.com/apple/+1.png?v=12\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 9,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T13:49:37.288Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 40,
"readers_count": 39,
"score": 38,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Hyoung-Kyu Song",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/404-error-with-flask-space/161020/2",
"internal": true,
"reflection": true,
"title": "404 Error with Flask Space",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8000,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/9",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 55302,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-01-20T14:24:12.742Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a> and <a class=\"mention\" href=\"/u/deepkyu\">@deepkyu</a> , we’ve changed the rules and we’ll enable 5432, 27017 in addition to 80, 443. Sorry <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a> I don’t think it’s in prod yet. I’ll ping you here. Thanks</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T14:24:12.742Z",
"reply_count": 1,
"reply_to_post_number": 9,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 40,
"readers_count": 39,
"score": 63,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/open-5432-port-to-connect-to-postgresql-for-langfuse-app/149230/2",
"internal": true,
"reflection": true,
"title": "Open 5432 port to connect to PostgreSQL for langfuse app",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/10",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 8000,
"username": "deepkyu",
"name": "Hyoung-Kyu Song",
"avatar_template": "/user_avatar/discuss.huggingface.co/deepkyu/{size}/19615_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 55313,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-01-20T18:10:02.058Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/anon86412018\">@anon86412018</a> it should be fixed now, thanks for the patience</p>",
"post_number": 11,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T18:10:02.058Z",
"reply_count": 0,
"reply_to_post_number": 10,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 35,
"readers_count": 34,
"score": 37,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/11",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 55315,
"name": null,
"username": "anon86412018",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/a698b9/{size}.png",
"created_at": "2023-01-20T18:25:31.779Z",
"cooked": "<p>Thank you very much !</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-20T18:25:31.779Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 34,
"readers_count": 33,
"score": 21.8,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14210,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/12",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 67686,
"name": "Karim Foda",
"username": "kmfoda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png",
"created_at": "2023-05-03T10:21:11.201Z",
"cooked": "<p>Hey <a class=\"mention\" href=\"/u/radames\">@radames</a> thanks for opening up 5432. I’m hoping to use ElasticSearch (<code>9243</code>) and Papertrail logging (<code>45454</code>) for my app. Would it be possible to open up those 2 ports as well in addition to <code>5432</code>?</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-04T16:00:03.164Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 30,
"readers_count": 29,
"score": 51,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Karim Foda",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 298,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/13",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 67928,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-05-04T16:54:20.585Z",
"cooked": "<p>the ports 5432, 9200 and 45454 are now open</p>",
"post_number": 14,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-04T16:54:20.585Z",
"reply_count": 1,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 28,
"readers_count": 27,
"score": 15.6,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/14",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 298,
"username": "kmfoda",
"name": "Karim Foda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 67929,
"name": "Karim Foda",
"username": "kmfoda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png",
"created_at": "2023-05-04T16:55:38.679Z",
"cooked": "<p>Sorry my apologies I mean 9243 not 9200. I believe that’s the port Elastic uses. Thanks so much!</p>",
"post_number": 15,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-04T16:55:38.679Z",
"reply_count": 1,
"reply_to_post_number": 14,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 29,
"readers_count": 28,
"score": 15.8,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Karim Foda",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 298,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/15",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 67930,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-05-04T16:57:24.180Z",
"cooked": "<p>I see, I guess the default ES port is 9200 and it’s been open already, could you change it on your app?</p>",
"post_number": 16,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-04T16:57:24.180Z",
"reply_count": 1,
"reply_to_post_number": 15,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 30,
"readers_count": 29,
"score": 21,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/16",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 298,
"username": "kmfoda",
"name": "Karim Foda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 67934,
"name": "Karim Foda",
"username": "kmfoda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png",
"created_at": "2023-05-04T17:34:34.265Z",
"cooked": "<p>Ah we’re running our app on <a href=\"http://elastic.co/\" rel=\"noopener nofollow ugc\">elastic.co</a> and that’s the port they gave us unfortunately. I think it might be quite tricky for us to change the port, it’ll also have a bit of downstream impact on all our other services which we’d have to factor in.</p>",
"post_number": 17,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-04T17:34:34.265Z",
"reply_count": 1,
"reply_to_post_number": 16,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 29,
"readers_count": 28,
"score": 30.8,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Karim Foda",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://elastic.co/",
"internal": false,
"reflection": false,
"title": "Elastic Observability and Security — built on Elasticsearch | Elastic",
"clicks": 11
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 298,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/17",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 68064,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-05-05T16:54:03.492Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/kmfoda\">@kmfoda</a> , the requested ports are open now, please try it again. Thanks</p>",
"post_number": 18,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-05T16:54:03.492Z",
"reply_count": 0,
"reply_to_post_number": 17,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 27,
"readers_count": 26,
"score": 10.4,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/18",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 298,
"username": "kmfoda",
"name": "Karim Foda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 68070,
"name": "Karim Foda",
"username": "kmfoda",
"avatar_template": "/user_avatar/discuss.huggingface.co/kmfoda/{size}/42122_2.png",
"created_at": "2023-05-05T18:01:45.239Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/radames\">@radames</a>, amazing that worked now! Thank you very much for your help!</p>",
"post_number": 19,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-05-05T18:01:45.239Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 27,
"readers_count": 26,
"score": 40.4,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Karim Foda",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 298,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/19",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 234263,
"name": "Notionhive AI",
"username": "notionhive-ai",
"avatar_template": "/user_avatar/discuss.huggingface.co/notionhive-ai/{size}/51497_2.png",
"created_at": "2025-07-22T06:51:20.965Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/radames\">@radames</a>, is there any way to open the port 587 for mail SMTP and 443 port to communicate through telegram?</p>",
"post_number": 20,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-07-22T06:51:20.965Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 25.8,
"yours": false,
"topic_id": 29938,
"topic_slug": "open-port-for-space-to-connect-to-postgresql",
"display_username": "Notionhive AI",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99997,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/open-port-for-space-to-connect-to-postgresql/29938/20",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Hi <a class="mention" href="/u/chris-rannou">@chris-rannou</a>,</p>
<p>Could you open the port <code>5432</code> for this space: <a href="https://huggingface.co/spaces/vnghia/defi-ai-2022" class="inline-onebox">Defi Ai 2022 - a Hugging Face Space by vnghia</a> as I need to connect to a PostgreSQL database ?</p>
<p>Thank you very much !</p>
|
<p>hi <a class="mention" href="/u/anon86412018">@anon86412018</a> it should be fixed now, thanks for the patience</p>
|
Recommendations for ML courses
|
https://discuss.huggingface.co/t/recommendations-for-ml-courses/163811
| 163,811
| 5
|
2025-07-20T11:40:24.641000Z
|
[
{
"id": 233967,
"name": "Anisimov",
"username": "kaguya3222",
"avatar_template": "/user_avatar/discuss.huggingface.co/kaguya3222/{size}/51401_2.png",
"created_at": "2025-07-20T11:40:24.705Z",
"cooked": "<p>Hey there ! I am Maksym, Frontend Engineer. I have 5 years of experience and working mostly with TypeScript/Frontend frameworks. I am familiar with other languages (C, C++) from the university program. I am interested in learning basic ML to complete Hugging Face LLM Course.</p>\n<p>Any recommendations here with what should I start?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-20T11:40:24.705Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 12,
"reads": 9,
"readers_count": 8,
"score": 101.8,
"yours": false,
"topic_id": 163811,
"topic_slug": "recommendations-for-ml-courses",
"display_username": "Anisimov",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99851,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/recommendations-for-ml-courses/163811/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233983,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-20T14:05:26.387Z",
"cooked": "<p>Hi.<br>\nPython is essential. However, you don’t necessarily need to study Python specifically; if you understand C, you should be able to use Python by looking up syntax and functions as needed. The course should not use many of the more complex Python syntaxes. (After all, Python’s strengths lie in its readability and abundance of libraries…)<br>\nYou can start right away without any issues.</p>\n<p>Additionally, for actual API usage or running WebGPU in a browser, there are JavaScript libraries available.</p>\n<p>If you want to learn the theoretical background, there are other resources available, but the LLM course alone covers a significant portion of the material.</p>\n<h3><a name=\"p-233983-some-resources-1\" class=\"anchor\" href=\"#p-233983-some-resources-1\"></a>Some resources</h3>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://triton-lang.org/main/getting-started/tutorials/index.html\">\n <header class=\"source\">\n\n <a href=\"https://triton-lang.org/main/getting-started/tutorials/index.html\" target=\"_blank\" rel=\"noopener\">triton-lang.org</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://triton-lang.org/main/getting-started/tutorials/index.html\" target=\"_blank\" rel=\"noopener\">Tutorials — Triton documentation</a></h3>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/NielsRogge/Transformers-Tutorials\">\n <header class=\"source\">\n\n <a href=\"https://github.com/NielsRogge/Transformers-Tutorials\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/f/bf2593beb3f81247ee557e674fd468b67ae69a03_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"EEEAE8\">\n\n <h3><a href=\"https://github.com/NielsRogge/Transformers-Tutorials\" target=\"_blank\" rel=\"noopener\">GitHub - NielsRogge/Transformers-Tutorials: This repository contains demos I made with the...</a></h3>\n\n <p><span class=\"github-repo-description\">This repository contains demos I made with the Transformers library by HuggingFace.</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/mlabonne/llm-course\">\n <header class=\"source\">\n\n <a href=\"https://github.com/mlabonne/llm-course\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/0/102e79dc760c40907715c7f491e3976ba9568d9e_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"F1F1F2\">\n\n <h3><a href=\"https://github.com/mlabonne/llm-course\" target=\"_blank\" rel=\"noopener\">GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs)...</a></h3>\n\n <p><span class=\"github-repo-description\">Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubrepo\" 
data-onebox-src=\"https://github.com/ArturoNereu/AI-Study-Group\">\n <header class=\"source\">\n\n <a href=\"https://github.com/ArturoNereu/AI-Study-Group\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/f/2/f2e4d458b22ac9c7c1979389668fe99b4c8a97a4_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"F8F6F7\">\n\n <h3><a href=\"https://github.com/ArturoNereu/AI-Study-Group\" target=\"_blank\" rel=\"noopener\">GitHub - ArturoNereu/AI-Study-Group: Resources to learn AI</a></h3>\n\n <p><span class=\"github-repo-description\">Resources to learn AI</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-20T14:05:26.387Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 21.8,
"yours": false,
"topic_id": 163811,
"topic_slug": "recommendations-for-ml-courses",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/mlabonne/llm-course",
"internal": false,
"reflection": false,
"title": "GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.",
"clicks": 5
},
{
"url": "https://github.com/ArturoNereu/AI-Study-Group",
"internal": false,
"reflection": false,
"title": "GitHub - ArturoNereu/AI-Study-Group: Resources to learn AI",
"clicks": 4
},
{
"url": "https://triton-lang.org/main/getting-started/tutorials/index.html",
"internal": false,
"reflection": false,
"title": "Tutorials — Triton documentation",
"clicks": 1
},
{
"url": "https://github.com/NielsRogge/Transformers-Tutorials",
"internal": false,
"reflection": false,
"title": "GitHub - NielsRogge/Transformers-Tutorials: This repository contains demos I made with the Transformers library by HuggingFace.",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/recommendations-for-ml-courses/163811/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233989,
"name": "Anisimov",
"username": "kaguya3222",
"avatar_template": "/user_avatar/discuss.huggingface.co/kaguya3222/{size}/51401_2.png",
"created_at": "2025-07-20T14:24:42.104Z",
"cooked": "<p>Thanks a lot!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-20T14:24:42.104Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 163811,
"topic_slug": "recommendations-for-ml-courses",
"display_username": "Anisimov",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99851,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/recommendations-for-ml-courses/163811/3",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 234048,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-21T02:25:23.946Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-21T02:25:23.946Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 163811,
"topic_slug": "recommendations-for-ml-courses",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/recommendations-for-ml-courses/163811/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hey there ! I am Maksym, Frontend Engineer. I have 5 years of experience and working mostly with TypeScript/Frontend frameworks. I am familiar with other languages (C, C++) from the university program. I am interested in learning basic ML to complete Hugging Face LLM Course.</p>
<p>Any recommendations here with what should I start?</p>
|
<p>Hi.<br>
Python is essential. However, you don’t necessarily need to study Python specifically; if you understand C, you should be able to use Python by looking up syntax and functions as needed. The course should not use many of the more complex Python syntaxes. (After all, Python’s strengths lie in its readability and abundance of libraries…)<br>
You can start right away without any issues.</p>
<p>Additionally, for actual API usage or running WebGPU in a browser, there are JavaScript libraries available.</p>
<p>If you want to learn the theoretical background, there are other resources available, but the LLM course alone covers a significant portion of the material.</p>
<h3><a name="p-233983-some-resources-1" class="anchor" href="#p-233983-some-resources-1"></a>Some resources</h3>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://triton-lang.org/main/getting-started/tutorials/index.html">
<header class="source">
<a href="https://triton-lang.org/main/getting-started/tutorials/index.html" target="_blank" rel="noopener">triton-lang.org</a>
</header>
<article class="onebox-body">
<h3><a href="https://triton-lang.org/main/getting-started/tutorials/index.html" target="_blank" rel="noopener">Tutorials — Triton documentation</a></h3>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox githubrepo" data-onebox-src="https://github.com/NielsRogge/Transformers-Tutorials">
<header class="source">
<a href="https://github.com/NielsRogge/Transformers-Tutorials" target="_blank" rel="noopener">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/f/bf2593beb3f81247ee557e674fd468b67ae69a03_2_690x344.png" class="thumbnail" data-dominant-color="EEEAE8">
<h3><a href="https://github.com/NielsRogge/Transformers-Tutorials" target="_blank" rel="noopener">GitHub - NielsRogge/Transformers-Tutorials: This repository contains demos I made with the...</a></h3>
<p><span class="github-repo-description">This repository contains demos I made with the Transformers library by HuggingFace.</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox githubrepo" data-onebox-src="https://github.com/mlabonne/llm-course">
<header class="source">
<a href="https://github.com/mlabonne/llm-course" target="_blank" rel="noopener">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/0/102e79dc760c40907715c7f491e3976ba9568d9e_2_690x344.png" class="thumbnail" data-dominant-color="F1F1F2">
<h3><a href="https://github.com/mlabonne/llm-course" target="_blank" rel="noopener">GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs)...</a></h3>
<p><span class="github-repo-description">Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox githubrepo" data-onebox-src="https://github.com/ArturoNereu/AI-Study-Group">
<header class="source">
<a href="https://github.com/ArturoNereu/AI-Study-Group" target="_blank" rel="noopener">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/f/2/f2e4d458b22ac9c7c1979389668fe99b4c8a97a4_2_690x344.png" class="thumbnail" data-dominant-color="F8F6F7">
<h3><a href="https://github.com/ArturoNereu/AI-Study-Group" target="_blank" rel="noopener">GitHub - ArturoNereu/AI-Study-Group: Resources to learn AI</a></h3>
<p><span class="github-repo-description">Resources to learn AI</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
|
Are there any recommendation tutorials on how to train a LLM via colab?
|
https://discuss.huggingface.co/t/are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab/163714
| 163,714
| 5
|
2025-07-19T13:14:57.472000Z
|
[
{
"id": 233836,
"name": "bun",
"username": "siusonedu",
"avatar_template": "/user_avatar/discuss.huggingface.co/siusonedu/{size}/51369_2.png",
"created_at": "2025-07-19T13:14:57.532Z",
"cooked": "<p>I have been asking a few AI on how to do it, seems like the code they provided would give execution errors.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-19T13:21:14.185Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15,
"reads": 7,
"readers_count": 6,
"score": 81.4,
"yours": false,
"topic_id": 163714,
"topic_slug": "are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab",
"display_username": "bun",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99788,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab/163714/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233850,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-19T13:53:53.109Z",
"cooked": "<p>I recommend trying the LLM course. It basically uses Colab. Of course, if you have a good GPU, you can do it locally…</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/learn/llm-course/en/chapter3/3\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/learn/llm-course/en/chapter3/3\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/a/7a25697b8573a1036fe8481acad6b2dcbbe7fb35_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F2F0EB\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/learn/llm-course/en/chapter3/3\" target=\"_blank\" rel=\"noopener\">Fine-tuning a model with the Trainer API - Hugging Face LLM Course</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/transformers/en/notebooks\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/transformers/en/notebooks\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/0/70d0e152f7d3fc4f2893b87211cdf6d62d6e763b_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F5F3ED\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/transformers/en/notebooks\" target=\"_blank\" rel=\"noopener\">🤗 Transformers Notebooks</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/313719dba549638a0ae69da63f3d588560e4fc97_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"EEEEED\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face\" target=\"_blank\" rel=\"noopener\">Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face</a></h3>\n\n <p>A Blog post by Daniel Voigt Godoy on Hugging Face</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-19T13:53:53.109Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 163714,
"topic_slug": "are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/learn/llm-course/en/chapter3/3",
"internal": false,
"reflection": false,
"title": "Fine-tuning a model with the Trainer API - Hugging Face LLM Course",
"clicks": 3
},
{
"url": "https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face",
"internal": false,
"reflection": false,
"title": "Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face",
"clicks": 1
},
{
"url": "https://huggingface.co/docs/transformers/en/notebooks",
"internal": false,
"reflection": false,
"title": "🤗 Transformers Notebooks",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab/163714/2",
"reactions": [
{
"id": "white_check_mark",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233923,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-20T04:01:51.141Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-20T04:01:51.141Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 163714,
"topic_slug": "are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/are-there-any-recommendation-tutorials-on-how-to-train-a-llm-via-colab/163714/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have been asking a few AIs how to do it, but the code they provided gives execution errors.</p>
|
<p>I recommend trying the LLM course. It basically uses Colab. Of course, if you have a good GPU, you can do it locally…</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/learn/llm-course/en/chapter3/3">
<header class="source">
<a href="https://huggingface.co/learn/llm-course/en/chapter3/3" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/a/7a25697b8573a1036fe8481acad6b2dcbbe7fb35_2_690x372.png" class="thumbnail" data-dominant-color="F2F0EB" width="690" height="372"></div>
<h3><a href="https://huggingface.co/learn/llm-course/en/chapter3/3" target="_blank" rel="noopener">Fine-tuning a model with the Trainer API - Hugging Face LLM Course</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/transformers/en/notebooks">
<header class="source">
<a href="https://huggingface.co/docs/transformers/en/notebooks" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/0/70d0e152f7d3fc4f2893b87211cdf6d62d6e763b_2_690x372.png" class="thumbnail" data-dominant-color="F5F3ED" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/transformers/en/notebooks" target="_blank" rel="noopener">🤗 Transformers Notebooks</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face">
<header class="source">
<a href="https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/313719dba549638a0ae69da63f3d588560e4fc97_2_690x372.png" class="thumbnail" data-dominant-color="EEEEED" width="690" height="372"></div>
<h3><a href="https://huggingface.co/blog/dvgodoy/fine-tuning-llm-hugging-face" target="_blank" rel="noopener">Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face</a></h3>
<p>A Blog post by Daniel Voigt Godoy on Hugging Face</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
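<p>As a starting point, here is a minimal sketch of the Trainer API workflow covered in the linked course chapter (the model, dataset, and output directory follow the course and are illustrative, not the only option):</p>
<pre><code class="lang-auto"># Minimal Trainer sketch, runnable in Colab (pip install transformers datasets)
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, TrainingArguments, Trainer)

raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_fn(batch):
    # MRPC is a sentence-pair task, so both sentences are passed to the tokenizer
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized = raw_datasets.map(tokenize_fn, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments("test-trainer"),  # defaults are enough for a first run
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
</code></pre>
<p>On Colab, switch the runtime to a GPU before running <code>trainer.train()</code>.</p>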
|
Inconsistent GPT2Model results between transformers versions
|
https://discuss.huggingface.co/t/inconsistent-gpt2model-results-between-transformers-versions/163484
| 163,484
| 6
|
2025-07-17T16:01:05.497000Z
|
[
{
"id": 233493,
"name": "Wenzhong Zhao",
"username": "Wenzhong2005",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/b3f665/{size}.png",
"created_at": "2025-07-17T16:01:05.596Z",
"cooked": "<p>We fine-tuned the GPT2Model (distilgpt2) some time ago. The exact same GPT2 model produces different outputs for the exact same input after the upgrading. Therefore, after applying a classification head (linear layer) on top of GPT-2 output, we got different scores for the same input. It seems to me that the masked portion of the model output changed, while the unmasked portion stays the same. In the past upgrade, we have seen the default value for the attn_implementation changed from “eager” to “sdpa”. See <a href=\"https://discuss.huggingface.co/t/gpt2model-model-output-inconsistency-between-different-transformers-versions/146833\">my previous topic</a>. Due to tool vulnerability issues, we have to upgrade transformers 4.52.3 or above. This time, I already specified attn_implementation=“eager”, I still got different results after the upgrade. Can anyone help to point to what’s changed?</p>\n<p>The code to reproduce the results:<br>\nimport torch<br>\nimport tokenizers<br>\nimport transformers<br>\nfrom transformers import GPT2Model, GPT2Tokenizer</p>\n<p><span class=\"hashtag-raw\">#Sample</span> input<br>\ntokenizer = GPT2Tokenizer.from_pretrained(‘distilgpt2’)<br>\ntokenizer.pad_token = tokenizer.eos_token<br>\ntokenizer.padding_side = ‘left’</p>\n<p>text = ‘DAVID DAVIS’<br>\nmodel_inputs = tokenizer(text, padding=‘max_length’, max_length=12, truncation=True, return_tensors=‘pt’)<br>\ninput_ids, attention_mask = model_inputs[‘input_ids’],model_inputs[‘attention_mask’]<br>\nprint(‘input_ids:’, input_ids)<br>\nprint(‘mask:’, attention_mask)</p>\n<p><span class=\"hashtag-raw\">#Load</span> GPT-2 Model<br>\nmodel = GPT2Model.from_pretrained(‘distilgpt2’, attn_implementation=“eager”)</p>\n<p><span class=\"hashtag-raw\">#Run</span> model<br>\nmodel.eval()<br>\nwith torch.no_grad():<br>\noutputs = model(input_ids=input_ids, attention_mask=attention_mask)</p>\n<p>last_hidden_state = outputs.last_hidden_state<br>\nprint(last_hidden_state)</p>\n<p>Here are the 2 requirements.txt files and model outputs:<br>\nBefore:<br>\ntorch==2.6.0<br>\ntransformers==4.50.0<br>\nhuggingface_hub==0.33.4</p>\n<p>input_ids: tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 5631, 11008, 42274, 1797]])<br>\nmask: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])<br>\nModel output: tensor([[[-3.1153e-01, 1.1569e-01, 2.4667e-02, …, -1.6813e-01, -1.9119e-01, -4.2739e-02],<br>\n[-8.7119e-01, 2.1186e-04, 5.6834e-01, …, -1.1233e-01, -4.8243e-01, 4.7066e-02],<br>\n[-7.1241e-01, -4.7743e-02, 5.6767e-01, …, 1.0435e-02, -4.7335e-01, 2.1707e-04],<br>\n…,<br>\n[-1.3753e+00, 2.9666e-01, 5.7950e-01, …, -6.4851e-01, -1.2977e+00, -8.4751e-02],<br>\n[-1.2291e+00, 1.6299e-01, 4.4637e-01, …, -5.1411e-01, -6.0615e-01, 4.3908e-01],<br>\n[-1.3633e+00, 8.3929e-02, 5.4821e-01, …, -5.7178e-01, -6.4784e-01, 4.6220e-01]]])</p>\n<p>After:<br>\ntorch==2.6.0<br>\ntransformers==4.52.3<br>\nhuggingface_hub==0.33.4</p>\n<p>input_ids: tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 5631, 11008, 42274, 1797]])<br>\nmask: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])<br>\nModel output: tensor([[[-0.0724, 0.4212, 0.0130, …, -0.1462, 0.1229, -0.0698],<br>\n[-0.0360, 0.4466, -0.0973, …, -0.0136, 0.1273, -0.0545],<br>\n[ 0.0104, 0.3948, -0.0099, …, 0.0273, 0.1091, -0.0364],<br>\n…,<br>\n[-1.3753, 0.2967, 0.5795, …, -0.6485, -1.2978, -0.0848],<br>\n[-1.2291, 0.1630, 0.4464, …, -0.5141, -0.6062, 0.4391],<br>\n[-1.3633, 0.0839, 0.5482, …, -0.5718, -0.6479, 0.4622]]])</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-17T16:21:41.101Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 9,
"readers_count": 8,
"score": 56.8,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "Wenzhong Zhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/gpt2model-model-output-inconsistency-between-different-transformers-versions/146833",
"internal": true,
"reflection": false,
"title": "GPT2Model model output inconsistency between different transformers versions",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22921,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233561,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-18T00:03:07.980Z",
"cooked": "<p>Although not mentioned in the release notes, <a href=\"https://github.com/huggingface/transformers/commits/main/src/transformers/models/gpt2/modeling_gpt2.py\">it appears that the implementation of masks and attention has been significantly changed</a>…</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-18T00:03:07.980Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/commits/main/src/transformers/models/gpt2/modeling_gpt2.py",
"internal": false,
"reflection": false,
"title": "History for src/transformers/models/gpt2/modeling_gpt2.py - huggingface/transformers · GitHub",
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233563,
"name": "Wenzhong Zhao",
"username": "Wenzhong2005",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/b3f665/{size}.png",
"created_at": "2025-07-18T00:30:57.149Z",
"cooked": "<p><a class=\"mention\" href=\"/u/john6666\">@John6666</a> thanks for the response. I figured that the latest version has the correct implementation for masks and attention: both from padded to non-padded tokens and other way around. I think we better to use the latest version to rebuild the fine-tuned model in the long term. However, for security reasons we need to upgrade it now, and the performance impact is too big to be ignored. Are there any workaround on this issue?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-18T00:43:10.026Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "Wenzhong Zhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22921,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 233574,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-18T03:03:36.358Z",
"cooked": "<p>Since we can get the same output by using the same code, there are two options: simply download the old version of the source code and replace it, or fork Transformers and revert only the specific changes.</p>\n<p>Another option is a monkey patch like the one below. I haven’t confirmed whether it works or not…</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"># full_monkey_patch_gpt2_mask.py\n\nimport torch\nfrom transformers import GPT2Model, GPT2Tokenizer\nfrom transformers.modeling_attn_mask_utils import AttentionMaskConverter\n\n# ─── 1. Legacy v4.50.0 mask helpers ───────────────────────────────────────────\n# Copied from https://raw.githubusercontent.com/huggingface/transformers/v4.50.0/.../modeling_attn_mask_utils.py\n\ndef old_expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: int = None):\n bsz, src_len = mask.size()\n tgt_len = tgt_len if tgt_len is not None else src_len\n expanded = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)\n inv = 1.0 - expanded\n return inv.masked_fill(inv.to(torch.bool), torch.finfo(dtype).min)\n\ndef old_to_causal_4d(\n attention_mask: torch.Tensor,\n input_shape: tuple[int, int],\n inputs_embeds: torch.Tensor,\n past_key_values_length: int,\n sliding_window: int | None = None,\n):\n # Reconstruct converter usage from v4.50.0\n converter = AttentionMaskConverter(is_causal=True, sliding_window=sliding_window)\n key_value_length = input_shape[-1] + past_key_values_length\n if attention_mask is not None and attention_mask.dim() == 2:\n return converter.to_4d(\n attention_mask,\n input_shape[-1],\n key_value_length=key_value_length,\n dtype=inputs_embeds.dtype,\n )\n return converter.to_causal_4d(\n input_shape[0],\n input_shape[-1],\n key_value_length,\n dtype=inputs_embeds.dtype,\n device=inputs_embeds.device,\n )\n\n# ─── 2. Monkey-patch the new converter ────────────────────────────────────────\n# This forces Transformers ≥ 4.51 to use our old logic instead of the refactored one\n\nAttentionMaskConverter._expand_mask = staticmethod(old_expand_mask)\nAttentionMaskConverter.to_causal_4d = staticmethod(old_to_causal_4d)\nAttentionMaskConverter.to_4d = staticmethod(lambda mask, qlen, key_value_length=None, dtype=None: \n old_expand_mask(mask, dtype, tgt_len=qlen))\n\n# Prevent SDPA from dropping masks on trivial sequences:\nAttentionMaskConverter._ignore_causal_mask_sdpa = staticmethod(lambda *args, **kwargs: False)\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-18T03:03:36.358Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233717,
"name": "Wenzhong Zhao",
"username": "Wenzhong2005",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/b3f665/{size}.png",
"created_at": "2025-07-18T17:37:08.676Z",
"cooked": "<p>Thanks <a class=\"mention\" href=\"/u/john6666\">@John6666</a>. Tried the above monkey patch you provided, but it does not change the model output.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-18T17:37:08.676Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "Wenzhong Zhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22921,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 233758,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-18T23:47:31.304Z",
"cooked": "<p>As a last resort, <a href=\"https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/models/gpt2/modeling_gpt2.py\">downloading this file and saving it locally should allow you to import the old version of <code>GPT2Model</code></a>. Compared to forking and reversing committing, this method is slightly less consistent, but it has the advantage of not being affected by version updates.<br>\nThe <code>import</code> statements at the beginning can be rewritten to suit your environment.</p>\n<p>Additionally, you could simply copy and paste the code from the old version, define the <code>GPT2Model</code> class, and use it. Since the modules are designed to have minimal dependencies on each other, the implementation should not be too difficult.<br>\nIf we decide to use <code>AutoModel</code>, there will be an extra step, but if we only use <code>GPT2Model</code>, defining the class is all that’s needed.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-19T00:14:51.296Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 6,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/models/gpt2/modeling_gpt2.py",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/models/gpt2/modeling_gpt2.py at v4.50.0 · huggingface/transformers · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233790,
"name": "Wenzhong Zhao",
"username": "Wenzhong2005",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/b3f665/{size}.png",
"created_at": "2025-07-19T03:25:05.274Z",
"cooked": "<p>Thanks <a class=\"mention\" href=\"/u/john6666\">@John6666</a> This is a good recommendation. We had a workaround with a slightly lower version v4.51.3 which still satisfies our security requirements. So it is fine for now.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-07-19T03:25:05.274Z",
"reply_count": 0,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "Wenzhong Zhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22921,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 233861,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-19T15:26:01.130Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 8,
"post_type": 3,
"posts_count": 8,
"updated_at": "2025-07-19T15:26:01.130Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 163484,
"topic_slug": "inconsistent-gpt2model-results-between-transformers-versions",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inconsistent-gpt2model-results-between-transformers-versions/163484/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>We fine-tuned the GPT2Model (distilgpt2) some time ago. The exact same GPT-2 model produces different outputs for the exact same input after the upgrade. Therefore, after applying a classification head (linear layer) on top of the GPT-2 output, we get different scores for the same input. It seems to me that the masked portion of the model output changed, while the unmasked portion stays the same. In a past upgrade, we saw the default value for <code>attn_implementation</code> change from “eager” to “sdpa”. See <a href="https://discuss.huggingface.co/t/gpt2model-model-output-inconsistency-between-different-transformers-versions/146833">my previous topic</a>. Due to tool vulnerability issues, we have to upgrade to transformers 4.52.3 or above. This time I already specified <code>attn_implementation="eager"</code>, but I still got different results after the upgrade. Can anyone point out what changed?</p>
<p>The code to reproduce the results:</p>
<pre><code class="lang-auto">import torch
import tokenizers
import transformers
from transformers import GPT2Model, GPT2Tokenizer

# Sample input
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'

text = 'DAVID DAVIS'
model_inputs = tokenizer(text, padding='max_length', max_length=12, truncation=True, return_tensors='pt')
input_ids, attention_mask = model_inputs['input_ids'], model_inputs['attention_mask']
print('input_ids:', input_ids)
print('mask:', attention_mask)

# Load GPT-2 model
model = GPT2Model.from_pretrained('distilgpt2', attn_implementation="eager")

# Run model
model.eval()
with torch.no_grad():
    outputs = model(input_ids=input_ids, attention_mask=attention_mask)

last_hidden_state = outputs.last_hidden_state
print(last_hidden_state)
</code></pre>
<p>Here are the two requirements.txt files and the corresponding model outputs.</p>
<p>Before:</p>
<pre><code class="lang-auto">torch==2.6.0
transformers==4.50.0
huggingface_hub==0.33.4
</code></pre>
<pre><code class="lang-auto">input_ids: tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 5631, 11008, 42274, 1797]])
mask: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])
Model output: tensor([[[-3.1153e-01, 1.1569e-01, 2.4667e-02, …, -1.6813e-01, -1.9119e-01, -4.2739e-02],
 [-8.7119e-01, 2.1186e-04, 5.6834e-01, …, -1.1233e-01, -4.8243e-01, 4.7066e-02],
 [-7.1241e-01, -4.7743e-02, 5.6767e-01, …, 1.0435e-02, -4.7335e-01, 2.1707e-04],
 …,
 [-1.3753e+00, 2.9666e-01, 5.7950e-01, …, -6.4851e-01, -1.2977e+00, -8.4751e-02],
 [-1.2291e+00, 1.6299e-01, 4.4637e-01, …, -5.1411e-01, -6.0615e-01, 4.3908e-01],
 [-1.3633e+00, 8.3929e-02, 5.4821e-01, …, -5.7178e-01, -6.4784e-01, 4.6220e-01]]])
</code></pre>
<p>After:</p>
<pre><code class="lang-auto">torch==2.6.0
transformers==4.52.3
huggingface_hub==0.33.4
</code></pre>
<pre><code class="lang-auto">input_ids: tensor([[50256, 50256, 50256, 50256, 50256, 50256, 50256, 50256, 5631, 11008, 42274, 1797]])
mask: tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])
Model output: tensor([[[-0.0724, 0.4212, 0.0130, …, -0.1462, 0.1229, -0.0698],
 [-0.0360, 0.4466, -0.0973, …, -0.0136, 0.1273, -0.0545],
 [ 0.0104, 0.3948, -0.0099, …, 0.0273, 0.1091, -0.0364],
 …,
 [-1.3753, 0.2967, 0.5795, …, -0.6485, -1.2978, -0.0848],
 [-1.2291, 0.1630, 0.4464, …, -0.5141, -0.6062, 0.4391],
 [-1.3633, 0.0839, 0.5482, …, -0.5718, -0.6479, 0.4622]]])
</code></pre>
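<p>For reference, a minimal sanity check of the claim above (a sketch only: <code>out_old</code> and <code>out_new</code> are assumed to be the two <code>last_hidden_state</code> tensors saved from each environment):</p>
<pre><code class="lang-auto"># Positions 0-7 are left padding (mask == 0); positions 8-11 are real tokens
mask = attention_mask[0].bool()
print("real tokens match:", torch.allclose(out_old[0, mask], out_new[0, mask], atol=1e-4))
print("pad tokens match: ", torch.allclose(out_old[0, ~mask], out_new[0, ~mask], atol=1e-4))
</code></pre>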
|
<p>As a last resort, <a href="https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/models/gpt2/modeling_gpt2.py">downloading this file and saving it locally should allow you to import the old version of <code>GPT2Model</code></a>. Compared to forking and reverting the commits, this method is slightly less consistent, but it has the advantage of not being affected by version updates.<br>
The <code>import</code> statements at the beginning can be rewritten to suit your environment.</p>
<p>Additionally, you could simply copy and paste the code from the old version, define the <code>GPT2Model</code> class, and use it. Since the modules are designed to have minimal dependencies on each other, the implementation should not be too difficult.<br>
If you decide to use <code>AutoModel</code>, there will be an extra step, but if you only use <code>GPT2Model</code>, defining the class is all that's needed.</p>
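<p>For illustration, a minimal sketch of the local-file approach (the module name <code>modeling_gpt2_v450</code> is a hypothetical choice, and the file's relative imports must first be rewritten as described above):</p>
<pre><code class="lang-auto"># Sketch: use a locally saved copy of the v4.50.0 modeling_gpt2.py,
# renamed to modeling_gpt2_v450.py (hypothetical name) and with its
# relative imports rewritten to absolute "from transformers. ..." imports
from modeling_gpt2_v450 import GPT2Model

model = GPT2Model.from_pretrained("distilgpt2", attn_implementation="eager")
model.eval()
</code></pre>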
|
I made a thing and have no idea what to do now
|
https://discuss.huggingface.co/t/i-made-a-thing-and-have-no-idea-what-to-do-now/163372
| 163,372
| 5
|
2025-07-17T04:37:54.825000Z
|
[
{
"id": 233329,
"name": "Glen Bradley",
"username": "glenbradley",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/g/c2a13f/{size}.png",
"created_at": "2025-07-17T04:37:54.887Z",
"cooked": "<p>I have developed a method for AI to parse ethics algorithmically.</p>\n<p>Ethics should be open source. I have been developing this in a silo for 12 months, this is my first-ever software project, in the 12 months since I started this journey at “Hello world,” I have not managed to have a meaningful conversation with anyone about this, either from lack of interest, lack of understanding, or hostility because I’m not actually a software developer, and I would genuinely appreciate human feedback on this project, good bad and ugly. Is there an appropriate subforum to post this? Thank you so much!</p>\n<aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/GlenABradley/EthicalAITestbed\">\n <header class=\"source\">\n\n <a href=\"https://github.com/GlenABradley/EthicalAITestbed\" target=\"_blank\" rel=\"noopener nofollow ugc\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8198602812d674395b5cfa7b9f4c9bd11a1e826_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"F1F1F4\">\n\n <h3><a href=\"https://github.com/GlenABradley/EthicalAITestbed\" target=\"_blank\" rel=\"noopener nofollow ugc\">GitHub - GlenABradley/EthicalAITestbed: This is Ethics for AI. Not guardrails, actual...</a></h3>\n\n <p><span class=\"github-repo-description\">This is Ethics for AI. Not guardrails, actual ethics.</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-17T04:37:54.887Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 10,
"reads": 15,
"readers_count": 14,
"score": 68,
"yours": false,
"topic_id": 163372,
"topic_slug": "i-made-a-thing-and-have-no-idea-what-to-do-now",
"display_username": "Glen Bradley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/GlenABradley/EthicalAITestbed",
"internal": false,
"reflection": false,
"title": "GitHub - GlenABradley/EthicalAITestbed: This is Ethics for AI. Not guardrails, actual ethics.",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99577,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/i-made-a-thing-and-have-no-idea-what-to-do-now/163372/1",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233429,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-17T13:36:47.294Z",
"cooked": "<p>Hugging Face Discord has a dedicated channel for AI ethics.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-17T13:36:47.294Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 22.2,
"yours": false,
"topic_id": 163372,
"topic_slug": "i-made-a-thing-and-have-no-idea-what-to-do-now",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/i-made-a-thing-and-have-no-idea-what-to-do-now/163372/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233542,
"name": "Glen Bradley",
"username": "glenbradley",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/g/c2a13f/{size}.png",
"created_at": "2025-07-17T21:28:21.212Z",
"cooked": "<p>Thank you. I am brand new and don’t know my way around yet. I appreciate your help.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-17T21:28:21.212Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 16.8,
"yours": false,
"topic_id": 163372,
"topic_slug": "i-made-a-thing-and-have-no-idea-what-to-do-now",
"display_username": "Glen Bradley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99577,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/i-made-a-thing-and-have-no-idea-what-to-do-now/163372/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 233644,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-18T09:29:16.259Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-18T09:29:16.259Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 6.2,
"yours": false,
"topic_id": 163372,
"topic_slug": "i-made-a-thing-and-have-no-idea-what-to-do-now",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/i-made-a-thing-and-have-no-idea-what-to-do-now/163372/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have developed a method for AI to parse ethics algorithmically.</p>
<p>Ethics should be open source. I have been developing this in a silo for 12 months, and it is my first-ever software project. In the 12 months since I started this journey at “Hello world,” I have not managed to have a meaningful conversation with anyone about it, whether from lack of interest, lack of understanding, or hostility because I’m not actually a software developer. I would genuinely appreciate human feedback on this project, good, bad, and ugly. Is there an appropriate subforum to post this? Thank you so much!</p>
<aside class="onebox githubrepo" data-onebox-src="https://github.com/GlenABradley/EthicalAITestbed">
<header class="source">
<a href="https://github.com/GlenABradley/EthicalAITestbed" target="_blank" rel="noopener nofollow ugc">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8198602812d674395b5cfa7b9f4c9bd11a1e826_2_690x344.png" class="thumbnail" data-dominant-color="F1F1F4">
<h3><a href="https://github.com/GlenABradley/EthicalAITestbed" target="_blank" rel="noopener nofollow ugc">GitHub - GlenABradley/EthicalAITestbed: This is Ethics for AI. Not guardrails, actual...</a></h3>
<p><span class="github-repo-description">This is Ethics for AI. Not guardrails, actual ethics.</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
|
<p>Hugging Face Discord has a dedicated channel for AI ethics.</p>
|
Pipeline vs model.generate()
|
https://discuss.huggingface.co/t/pipeline-vs-model-generate/26203
| 26,203
| 5
|
2022-11-16T22:12:08.333000Z
|
[
{
"id": 49588,
"name": "Zeke John",
"username": "Z3K3",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/z/a3d4f5/{size}.png",
"created_at": "2022-11-16T22:12:08.404Z",
"cooked": "<p>I want to know whats the difference between using the Pipeline() function to generate a result Vs using the model.generate() function to generate a result, which one is faster? Which one is more accurate? Which one is more consistently giving out good responses? And what is the main difference between them. I am sorry if this sounds like a dumb question i am just wondering which method i should use to generate ML predictions for Summarization, and want to know the Pros/Cons of each of them.</p>\n<p>Thanks in advance</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 12,
"updated_at": "2022-11-16T22:12:08.404Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 14510,
"reads": 448,
"readers_count": 447,
"score": 72499.6,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Zeke John",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 7
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8150,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/1",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 6
},
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 7,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 49611,
"name": "Niels Rogge",
"username": "nielsr",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png",
"created_at": "2022-11-17T08:01:47.700Z",
"cooked": "<p>Hi,</p>\n<p>The <a href=\"https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines\">pipeline() API</a> is created mostly for people who don’t care too much about the details of the underlying process, for people who just want to use a machine learning model without having to implement several details like pre- and postprocessing themselves. The pipeline API is created such that you get an easy-to-use abstraction over any ML model, which is great for inference. The <a href=\"https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines#transformers.SummarizationPipeline\">SummarizationPipeline</a> for instance uses generate() behind the scenes.</p>\n<p>On the other hand, if you do care about the details, then it’s recommended to generate text yourself by calling <a href=\"https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate\">generate()</a> yourself and implement pre-and postprocessing yourself.</p>\n<p>Also note that any text generation pipeline does provide a <a href=\"https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/pipelines/text2text_generation.py#L138\" rel=\"noopener nofollow ugc\">generate_kwargs</a> argument, which means that technically you can forward any of the keyword arguments that generate() supports to the pipeline as well.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 12,
"updated_at": "2022-11-17T08:01:47.700Z",
"reply_count": 3,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 272,
"reads": 441,
"readers_count": 440,
"score": 1688.2,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Niels Rogge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate",
"internal": false,
"reflection": false,
"title": "Generation",
"clicks": 594
},
{
"url": "https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/pipelines/text2text_generation.py#L138",
"internal": false,
"reflection": false,
"title": "transformers/text2text_generation.py at 94b3f544a1f5e04b78d87a2ae32a7ac252e22e31 · huggingface/transformers · GitHub",
"clicks": 275
},
{
"url": "https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines",
"internal": false,
"reflection": false,
"title": "Pipelines",
"clicks": 275
},
{
"url": "https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines#transformers.SummarizationPipeline",
"internal": false,
"reflection": false,
"title": "Pipelines",
"clicks": 130
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 15
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 205,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 12
},
{
"id": "+1",
"type": "emoji",
"count": 2
},
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 15,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 49670,
"name": "Zeke John",
"username": "Z3K3",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/z/a3d4f5/{size}.png",
"created_at": "2022-11-17T17:40:09.038Z",
"cooked": "<p>Thank you for this response <a href=\"https://discuss.huggingface.co/u/nielsr\">nielsr</a>. This was what I wanted to know.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 12,
"updated_at": "2022-11-17T17:40:09.038Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 122,
"reads": 419,
"readers_count": 418,
"score": 683.8,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Zeke John",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8150,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 205,
"username": "nielsr",
"name": "Niels Rogge",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 84585,
"name": "Saptarshi Sengupta",
"username": "Saptarshi7",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/9e8a1a/{size}.png",
"created_at": "2023-08-16T21:45:20.578Z",
"cooked": "<p>Hello,</p>\n<p>So I tested both recently and found a very peculiar behavior under similar parameter values. This was using Galactica’s 1.3B variant</p>\n<pre><code class=\"lang-auto\">from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed\nimport torch\n\ncheckpoint = \"facebook/galactica-1.3b\"\n\ntokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side=\"left\") \nmodel = AutoModelForCausalLM.from_pretrained(checkpoint)\nmodel.to('cuda')\ngenerator = pipeline('text-generation', model=model, tokenizer=tokenizer, device=0)\n\n#With pipeline\nset_seed(42)\ngenerator(['Is this', 'What is the matter'], renormalize_logits=True, do_sample=True, use_cache=True, max_new_tokens=10)\n\n#With model.generate()\ndevice=torch.device('cuda',0)\nmodel.to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(checkpoint, padding_side=\"left\")\ntokenizer.pad_token = tokenizer.eos_token = '<pad>'\n\ntokenized_prompts = tokenizer(['Is this', 'What is the matter'], padding=True, return_tensors='pt')\nset_seed(42)\nmodel_op = model.generate(input_ids=tokenized_prompts['input_ids'].to(device),\n attention_mask=tokenized_prompts['attention_mask'].to(device),\n renormalize_logits=False, do_sample=True,\n use_cache=True, max_new_tokens=10)\ntokenizer.batch_decode(model_op, skip_special_tokens=True)\n</code></pre>\n<p>Here is the result with each,</p>\n<pre><code class=\"lang-auto\">[{'generated_text': 'Is this method for dealing with multiple objects?\\n\\n\\n'}],\n [{'generated_text': 'What is the matter density of a star whose radius is equal to '}]\n................\n['Is this method for dealing with multiple objects?\\n\\n\\n',\n 'What is the matter of this, I know that it isn’t']\n</code></pre>\n<p>As we can see, both methods are producing different outputs, even under the same settings. However, the first generation for each method seems to be the same & I tried it for a bunch of other prompts. That being said if we turn off do_sample i.e.</p>\n<blockquote>\n<p>do_sample = False (greedy decoding)</p>\n</blockquote>\n<p>then, we get the same results. Thus, I believe this is related to the sampling method being employed which is producing different results. Does anyone have any thoughts on this?</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 12,
"updated_at": "2023-08-16T21:45:20.578Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 534,
"reads": 351,
"readers_count": 350,
"score": 2775.2,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Saptarshi Sengupta",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 26605,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "clap",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 105523,
"name": "Niels Rogge",
"username": "nielsr",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png",
"created_at": "2023-12-25T20:59:13.271Z",
"cooked": "<p>Hi,</p>\n<p>Well, sampling is exactly causing randomness <img src=\"https://emoji.discourse-cdn.com/apple/smiley.png?v=12\" title=\":smiley:\" class=\"emoji\" alt=\":smiley:\" loading=\"lazy\" width=\"20\" height=\"20\"> you can set a seed to get reproducabile results even when using sampling:</p>\n<pre><code class=\"lang-auto\">from transformers import set_seed\nset_seed(42)\n</code></pre>\n<p>Refer to the <a href=\"https://huggingface.co/blog/how-to-generate\">generate blog post</a> for more details.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 12,
"updated_at": "2023-12-25T20:59:13.271Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 94,
"reads": 207,
"readers_count": 206,
"score": 511.4,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Niels Rogge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/blog/how-to-generate",
"internal": false,
"reflection": false,
"title": "How to generate text: using different decoding methods for language generation with Transformers",
"clicks": 132
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 205,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 26605,
"username": "Saptarshi7",
"name": "Saptarshi Sengupta",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/9e8a1a/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 186805,
"name": "Brando Miranda",
"username": "brando",
"avatar_template": "/user_avatar/discuss.huggingface.co/brando/{size}/30114_2.png",
"created_at": "2024-12-05T19:26:49.723Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"nielsr\" data-post=\"2\" data-topic=\"26203\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img loading=\"lazy\" alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/nielsr/48/39617_2.png\" class=\"avatar\"> nielsr:</div>\n<blockquote>\n<p>The <a href=\"https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines\">pipeline() API</a> is created mostly for people who don’t care too much about the details of the underlying process, for people who just want to use a machine learning model without having to implement several details like pre- and postprocessing themselves.</p>\n</blockquote>\n</aside>\n<p>Do you mind sharing a concrete example of what you mean by pre and postprocessing in this context? <a class=\"mention\" href=\"/u/nielsr\">@nielsr</a></p>\n<p>Thank you in advance.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 12,
"updated_at": "2024-12-05T19:26:49.723Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 1,
"incoming_link_count": 15,
"reads": 57,
"readers_count": 56,
"score": 121.4,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Brando Miranda",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 3664,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/6",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 192327,
"name": "Niels Rogge",
"username": "nielsr",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png",
"created_at": "2024-12-29T11:07:37.068Z",
"cooked": "<p>By pre-processing, I mean turning a sentence into tokens, then turning those tokens into numbers (indices in the vocabulary of a Transformer model). The tokenizer can be used for this purpose, which automatically turns text into so-called <code>input_ids</code>. The pipeline uses a tokenizer behind the scenes.</p>\n<p>As for post-processing, one needs to decode the generate id’s back into text. The tokenizer can also be used for this, using the <code>decode</code> or <code>batch_decode</code> methods. The pipeline also makes use of these methods to present the result as text.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 12,
"updated_at": "2024-12-29T11:07:37.068Z",
"reply_count": 1,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 45,
"readers_count": 44,
"score": 114,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "Niels Rogge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 205,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 3664,
"username": "brando",
"name": "Brando Miranda",
"avatar_template": "/user_avatar/discuss.huggingface.co/brando/{size}/30114_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 196576,
"name": "hongyeliu",
"username": "hongyeliu",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/ee59a6/{size}.png",
"created_at": "2025-01-20T02:24:33.522Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"nielsr\" data-post=\"7\" data-topic=\"26203\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img loading=\"lazy\" alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/nielsr/48/39617_2.png\" class=\"avatar\"> nielsr:</div>\n<blockquote>\n<p>By pre-processing, I mean turning a sentence into tokens, then turning those tokens into numbers (indices in the vocabulary of a Transformer model). The tokenizer can be used for this purpose, which automatically turns text into so-called <code>input_ids</code>. The pipeline uses a tokenizer behind the scenes.</p>\n<p>As for post-processing, one needs to decode the generate id’s back into text. The tokenizer can also be used for this, using the <code>decode</code> or <code>batch_decode</code> methods. The pipeline also makes use of these methods to present the result as text</p>\n</blockquote>\n</aside>\n<p>Thank you for your response earlier. I have a question regarding the <a href=\"https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/pipelines/text2text_generation.py#L138\" rel=\"noopener nofollow ugc\">generate_kwargs</a> argument needed to make .generate perform equivalently to .pipeline.</p>\n<p>Currently, I am using the model from <a href=\"https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit\">Meta-Llama-3.1-8B-Instruct-bnb-4bit</a>. When I use .generate, the output begins by repeating the input prompt before generating the desired output. Since my prompt is quite lengthy, I can only see a truncated version of it in the output.</p>\n<p>However, when I use .pipeline, it outputs the desired response directly without repeating the prompt. I suspect the difference might be due to .generate using greedy search for decoding, while .pipeline applies additional configurations like penalty terms to avoid regenerating the prompt.</p>\n<p>I understand from your response that this might be the case, but I am unsure how to inspect the configuration used by .pipeline and apply similar settings to the model.generation_config. Could you provide an example code snippet illustrating how to achieve this?</p>\n<p>Thank you for your help!</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 12,
"updated_at": "2025-01-20T02:24:33.522Z",
"reply_count": 2,
"reply_to_post_number": 2,
"quote_count": 1,
"incoming_link_count": 15,
"reads": 35,
"readers_count": 34,
"score": 122,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "hongyeliu",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"internal": false,
"reflection": false,
"title": "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit · Hugging Face",
"clicks": 1
},
{
"url": "https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/pipelines/text2text_generation.py#L138",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/pipelines/text2text_generation.py at 94b3f544a1f5e04b78d87a2ae32a7ac252e22e31 · huggingface/transformers · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 67971,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/8",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 205,
"username": "nielsr",
"name": "Niels Rogge",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 203160,
"name": "hongyeliu",
"username": "hongyeliu",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/ee59a6/{size}.png",
"created_at": "2025-02-17T15:11:48.247Z",
"cooked": "<p><a class=\"mention\" href=\"/u/nielsr\">@nielsr</a> sry, forgot to @</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 12,
"updated_at": "2025-02-17T15:11:48.247Z",
"reply_count": 0,
"reply_to_post_number": 8,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 20,
"readers_count": 19,
"score": 34,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "hongyeliu",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 67971,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/9",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 67971,
"username": "hongyeliu",
"name": "hongyeliu",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/ee59a6/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 231146,
"name": "bendangnuksung",
"username": "Bendang",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/a4c791/{size}.png",
"created_at": "2025-07-05T13:50:23.607Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"hongyeliu\" data-post=\"8\" data-topic=\"26203\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/h/ee59a6/48.png\" class=\"avatar\"> hongyeliu:</div>\n<blockquote>\n<p>suspect the difference might be due to .generat</p>\n</blockquote>\n</aside>\n<p>I am having the same problem. Have you figured out how to do this?</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 12,
"updated_at": "2025-07-05T13:50:23.607Z",
"reply_count": 0,
"reply_to_post_number": 8,
"quote_count": 1,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 26,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "bendangnuksung",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98237,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/10",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231215,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-06T03:55:29.738Z",
"cooked": "<p>For now, I think the default value in Pipeline is prioritized by <a href=\"https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit/blob/main/generation_config.json\"><code>generation_config.json</code></a>, followed by <a href=\"https://huggingface.co/docs/transformers/en/main_classes/text_generation\">the default value in <code>GenerationConfig</code></a>. If you reproduce this, you should get almost the same result. Probably like this:</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">outputs = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.9, temperature=0.6, repetition_penalty=1.0, max_length=131072, bos_token_id=128000, pad_token_id=128004, eos_token_id=[128001, 128008, 128009])\n</code></pre>",
"post_number": 11,
"post_type": 1,
"posts_count": 12,
"updated_at": "2025-07-06T03:56:05.276Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit/blob/main/generation_config.json",
"internal": false,
"reflection": false,
"title": "generation_config.json · unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit at main",
"clicks": 2
},
{
"url": "https://huggingface.co/docs/transformers/en/main_classes/text_generation",
"internal": false,
"reflection": false,
"title": "Generation",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/11",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233250,
"name": "bendangnuksung",
"username": "Bendang",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/a4c791/{size}.png",
"created_at": "2025-07-16T16:28:57.128Z",
"cooked": "<p>I found a workaround to make <code>model.generate</code> produce the same output as the <code>pipeline</code>. I ran the pipeline in debug mode and set a breakpoint <a href=\"https://github.com/huggingface/transformers/blob/e68ebb695f9d1d990462397e284e79d8729aafea/src/transformers/pipelines/text2text_generation.py#L220C1-L221C1\" rel=\"noopener nofollow ugc\">here</a>. At that point, I pickled the <code>generate_kwargs</code> used internally by the pipeline and reused them directly in my own call to <code>model.generate</code>. This way, I was able to replicate the exact same output as the pipeline.<br>\nHope this helps anyone facing a similar issue.</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 12,
"updated_at": "2025-07-16T16:28:57.128Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 20.6,
"yours": false,
"topic_id": 26203,
"topic_slug": "pipeline-vs-model-generate",
"display_username": "bendangnuksung",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/blob/e68ebb695f9d1d990462397e284e79d8729aafea/src/transformers/pipelines/text2text_generation.py#L220C1-L221C1",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/pipelines/text2text_generation.py at e68ebb695f9d1d990462397e284e79d8729aafea · huggingface/transformers · GitHub",
"clicks": 4
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98237,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pipeline-vs-model-generate/26203/12",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>I want to know what the difference is between using the pipeline() function and using the model.generate() function to generate a result. Which one is faster? Which one is more accurate? Which one more consistently gives good responses? I am sorry if this sounds like a dumb question; I am just wondering which method I should use to generate ML predictions for summarization, and want to know the pros/cons of each of them.</p>
<p>Thanks in advance</p>
|
<p>Hi,</p>
<p>The <a href="https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines">pipeline() API</a> is created mostly for people who don’t care too much about the details of the underlying process, for people who just want to use a machine learning model without having to implement several details like pre- and postprocessing themselves. The pipeline API is created such that you get an easy-to-use abstraction over any ML model, which is great for inference. The <a href="https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines#transformers.SummarizationPipeline">SummarizationPipeline</a> for instance uses generate() behind the scenes.</p>
<p>On the other hand, if you do care about the details, then it’s recommended to call <a href="https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate">generate()</a> directly and implement the pre- and post-processing yourself.</p>
<p>Also note that any text generation pipeline does provide a <a href="https://github.com/huggingface/transformers/blob/94b3f544a1f5e04b78d87a2ae32a7ac252e22e31/src/transformers/pipelines/text2text_generation.py#L138" rel="noopener nofollow ugc">generate_kwargs</a> argument, which means that technically you can forward any of the keyword arguments that generate() supports to the pipeline as well.</p>
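<p>To make that concrete, here is a minimal sketch of the same summarization prediction done both ways. It assumes a seq2seq checkpoint such as facebook/bart-large-cnn (swap in your own model); the pipeline hides exactly the tokenize/generate/decode steps that the manual version spells out:</p>
<pre><code class="lang-py">from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

checkpoint = "facebook/bart-large-cnn"  # assumption: any summarization checkpoint works here
text = "The tower is 324 metres tall, about the same height as an 81-storey building."

# 1) High-level: pipeline() handles pre- and post-processing internally
summarizer = pipeline("summarization", model=checkpoint)
print(summarizer(text, max_new_tokens=40)[0]["summary_text"])

# 2) Low-level: the same steps spelled out around generate()
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer(text, return_tensors="pt", truncation=True)   # pre-processing: text -> input_ids
output_ids = model.generate(**inputs, max_new_tokens=40)         # what the pipeline calls internally
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])  # post-processing: ids -> text
</code></pre>
<p>With identical generation settings (and a fixed seed when sampling), both paths should produce the same text; neither is more accurate than the other, since the pipeline is just a convenience wrapper around the second path.</p>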
|
Too many task requests resulting in a ban?
|
https://discuss.huggingface.co/t/too-many-task-requests-resulting-in-a-ban/163189
| 163,189
| 5
|
2025-07-15T22:59:00.404000Z
|
[
{
"id": 233066,
"name": "hertt",
"username": "etaqaz",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/ba9def/{size}.png",
"created_at": "2025-07-15T22:59:00.483Z",
"cooked": "<p>Hi, I ran several requests at once on a workspace on HF, and, instead of being able to input more after the requests were done, it instead seems to have me blocked/banned. The service is still online (a friend with a different IP was able to use it), and changing to another browser on my end did not allow me to use said workspace.</p>\n<p>Does HF ban/block people for excessive request use? It’s not unreasonable, mind you, but I’m wondering if it is only a temporary thing or the IP’s been perma-nuked by HF?</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/4/2/424a62d64f8dc42f5f60c339ffdbbb567240b8fa.png\" data-download-href=\"/uploads/short-url/9sqSQ0jAGd3g1rYYh6O6LzWteb0.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/4/2/424a62d64f8dc42f5f60c339ffdbbb567240b8fa.png\" alt=\"image\" data-base62-sha1=\"9sqSQ0jAGd3g1rYYh6O6LzWteb0\" width=\"581\" height=\"259\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">581×259 6.37 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/ilcve21/Sparc3D\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/ilcve21/Sparc3D\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e8019db08214408326d946c1c63e1d7468e1569_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"AC669E\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/ilcve21/Sparc3D\" target=\"_blank\" rel=\"noopener\">Sparc3D - a Hugging Face Space by ilcve21</a></h3>\n\n <p>This application allows you to generate high-resolution 3D models by providing input data. You will receive detailed 3D models that you can use for various applications.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-15T22:59:00.483Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 40,
"reads": 7,
"readers_count": 6,
"score": 216.4,
"yours": false,
"topic_id": 163189,
"topic_slug": "too-many-task-requests-resulting-in-a-ban",
"display_username": "hertt",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/ilcve21/Sparc3D",
"internal": false,
"reflection": false,
"title": "Sparc3D - a Hugging Face Space by ilcve21",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99480,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/too-many-task-requests-resulting-in-a-ban/163189/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233070,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-15T23:56:09.418Z",
"cooked": "<p>Seems it’s not Hugging Face matter but their endpoint matter.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/2/82bbb5a5d2eff2b6d3a866823aaf88f4558fdec1_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"EDEFF1\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330\" target=\"_blank\" rel=\"noopener\">ilcve21/Sparc3D · 🚩 Report: Illegal or restricted content</a></h3>\n\n <p>Sparc3D is a great technology, but there was no intention to make it open source from the start. The proof of this is that they have not released the source and models even after two weeks, and the...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-15T23:56:09.418Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 163189,
"topic_slug": "too-many-task-requests-resulting-in-a-ban",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330",
"internal": false,
"reflection": false,
"title": "ilcve21/Sparc3D · 🚩 Report: Illegal or restricted content",
"clicks": 8
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/too-many-task-requests-resulting-in-a-ban/163189/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233072,
"name": "hertt",
"username": "etaqaz",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/ba9def/{size}.png",
"created_at": "2025-07-16T00:13:02.648Z",
"cooked": "<p>ohhhhhhh, I see</p>\n<p>I tried other HF spaces and it was working, I should have put 2 and 2 together!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-16T00:13:02.648Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 163189,
"topic_slug": "too-many-task-requests-resulting-in-a-ban",
"display_username": "hertt",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99480,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/too-many-task-requests-resulting-in-a-ban/163189/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 233198,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-16T12:13:50.845Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-16T12:13:50.845Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 163189,
"topic_slug": "too-many-task-requests-resulting-in-a-ban",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/too-many-task-requests-resulting-in-a-ban/163189/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi, I ran several requests at once on a workspace on HF, and, instead of being able to input more after the requests were done, it now seems to have me blocked/banned. The service is still online (a friend with a different IP was able to use it), and changing to another browser on my end did not allow me to use said workspace.</p>
<p>Does HF ban/block people for excessive request use? It’s not unreasonable, mind you, but I’m wondering if it is only a temporary thing or the IP’s been perma-nuked by HF?</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/4/2/424a62d64f8dc42f5f60c339ffdbbb567240b8fa.png" data-download-href="/uploads/short-url/9sqSQ0jAGd3g1rYYh6O6LzWteb0.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/4/2/424a62d64f8dc42f5f60c339ffdbbb567240b8fa.png" alt="image" data-base62-sha1="9sqSQ0jAGd3g1rYYh6O6LzWteb0" width="581" height="259"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">581×259 6.37 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces/ilcve21/Sparc3D">
<header class="source">
<a href="https://huggingface.co/spaces/ilcve21/Sparc3D" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e8019db08214408326d946c1c63e1d7468e1569_2_690x372.png" class="thumbnail" data-dominant-color="AC669E" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces/ilcve21/Sparc3D" target="_blank" rel="noopener">Sparc3D - a Hugging Face Space by ilcve21</a></h3>
<p>This application allows you to generate high-resolution 3D models by providing input data. You will receive detailed 3D models that you can use for various applications.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
|
<p>It seems it’s not a Hugging Face matter but a matter of their endpoint.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330">
<header class="source">
<a href="https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/2/82bbb5a5d2eff2b6d3a866823aaf88f4558fdec1_2_690x372.png" class="thumbnail" data-dominant-color="EDEFF1" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces/ilcve21/Sparc3D/discussions/13#68722aac2c4695ccdaaf9330" target="_blank" rel="noopener">ilcve21/Sparc3D · 🚩 Report: Illegal or restricted content</a></h3>
<p>Sparc3D is a great technology, but there was no intention to make it open source from the start. The proof of this is that they have not released the source and models even after two weeks, and the...</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
|
Fine-tune for function call on Meta-Llama-3.1-8B-Instruct
|
https://discuss.huggingface.co/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680
| 162,680
| 9
|
2025-07-11T18:58:10.235000Z
|
[
{
"id": 232322,
"name": "Orkun Gedik",
"username": "orkungedik",
"avatar_template": "/user_avatar/discuss.huggingface.co/orkungedik/{size}/47802_2.png",
"created_at": "2025-07-11T18:58:10.299Z",
"cooked": "<p>Hi,</p>\n<p>I am trying to fine-tune to make function call predictions better on Meta-Llama-3.1-8B-Instruct. To do that I created a dataset and applied steps regarding to <a href=\"https://gautam75.medium.com/fine-tuning-llama-3-1-8b-for-function-calling-using-lora-159b9ee66060\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">Fine-Tuning Llama-3.1-8B for Function Calling using LoRA | by Gautam Chutani | Medium</a> blog. As a result I can see function name and parameters are predicting perfectly, but now the model is generating weird answers [get_weather(city=“IL”)] regarding to prompt like “how are you?”.</p>\n<p>Please find the code snippets below belong training;</p>\n<pre><code class=\"lang-auto\">import torch\nfrom unsloth import FastLanguageModel\n\nmax_seq_length = 2048 # Unsloth auto supports RoPE Scaling internally!\ndtype = None # None for auto detection\nload_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.\n\nmodel, tokenizer = FastLanguageModel.from_pretrained(\n model_name = \"meta-llama/Llama-3.1-8B-Instruct\",\n max_seq_length = max_seq_length,\n dtype = dtype,\n load_in_4bit = load_in_4bit,\n)\n</code></pre>\n<pre><code class=\"lang-auto\">model = FastLanguageModel.get_peft_model(\n model,\n r=16, # LoRA rank - suggested values: 8, 16, 32, 64, 128\n target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n \"gate_proj\", \"up_proj\", \"down_proj\"],\n lora_alpha=16,\n lora_dropout=0, # Supports any, but = 0 is optimized\n bias=\"none\", # Supports any, but = \"none\" is optimized\n use_gradient_checkpointing=\"unsloth\", # Ideal for long context tuning\n random_state=3407,\n use_rslora=False, # Disable rank-sensitive LoRA for simpler tasks\n loftq_config=None # No LoftQ, for standard fine-tuning\n)\n</code></pre>\n<pre><code class=\"lang-auto\">from unsloth.chat_templates import get_chat_template\n\n# Initialize the tokenizer with the chat template and mapping\ntokenizer = get_chat_template(\n tokenizer,\n chat_template = \"llama-3\",\n mapping = {\"role\" : \"from\", \"content\" : \"value\", \"user\" : \"human\", \"assistant\" : \"gpt\"}, # ShareGPT style\n map_eos_token = True, # Maps <|im_end|> to <|eot_id|> instead\n)\n\ndef formatting_prompts_func(examples):\n convos = []\n\n # Iterate through each item in the batch (examples are structured as lists of values)\n for query, tools, answers in zip(examples['query'], examples['tool'], examples['answer']):\n tool_user = {\n \"content\": f\"You are a helpful assistant with access to the following tools or function calls. Your task is to produce a sequence of tools or function calls necessary to generate response to the user utterance. 
Use the following tools or function calls as required:\\n{tools}\",\n \"role\": \"system\"\n }\n ques_user = {\n \"content\": f\"{query}\",\n \"role\": \"user\"\n }\n assistant = {\n \"content\": f\"{answers}\",\n \"role\": \"assistant\"\n }\n convos.append([tool_user, ques_user, assistant])\n\n texts = [tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False) for convo in convos]\n return {\"text\": texts}\n\n# Apply the formatting on dataset\ndataset = dataset.map(formatting_prompts_func, batched = True,)\n</code></pre>\n<pre><code class=\"lang-auto\">from transformers import TrainingArguments\n\nargs = TrainingArguments(\n per_device_train_batch_size = 8, # Controls the batch size per device\n gradient_accumulation_steps = 2, # Accumulates gradients to simulate a larger batch\n warmup_steps = 5,\n learning_rate = 2e-4, # Sets the learning rate for optimization\n num_train_epochs = 2,\n fp16 = not torch.cuda.is_bf16_supported(),\n bf16 = torch.cuda.is_bf16_supported(),\n optim = \"adamw_8bit\",\n weight_decay = 0.01, # Regularization term for preventing overfitting\n lr_scheduler_type = \"linear\", # Chooses a linear learning rate decay\n seed = 3407,\n output_dir = \"outputs\",\n logging_steps = 1, # Sets frequency of logging to W&B\n logging_strategy = \"steps\", # Logs metrics at each specified step\n save_strategy = \"no\",\n load_best_model_at_end = True, # Loads the best model at the end\n report_to = \"none\",\n save_only_model = False # Saves entire model, not only weights\n )\n</code></pre>\n<pre><code class=\"lang-auto\">from trl import SFTTrainer\n\ntrainer = SFTTrainer(\n model = model,\n processing_class = tokenizer,\n train_dataset = dataset,\n dataset_text_field = \"text\",\n max_seq_length = max_seq_length,\n dataset_num_proc = 2,\n packing = False, # Can make training 5x faster for short sequences.\n args = args\n)\n</code></pre>\n<pre><code class=\"lang-auto\">from unsloth import unsloth_train\n\ntrainer_stats = unsloth_train(trainer)\nprint(trainer_stats)\n</code></pre>\n<p>What I am missing?</p>\n<p>Thank you for your helps <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-11T18:58:48.094Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 124,
"reads": 12,
"readers_count": 11,
"score": 602.4,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "Orkun Gedik",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://gautam75.medium.com/fine-tuning-llama-3-1-8b-for-function-calling-using-lora-159b9ee66060",
"internal": false,
"reflection": false,
"title": "Fine-Tuning Llama-3.1-8B for Function Calling using LoRA | by Gautam Chutani | Medium",
"clicks": 11
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61259,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232353,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-12T00:37:49.457Z",
"cooked": "<p>Assuming that the model was trained using that prompt structure, I think it may have forgotten other conversation patterns. It has become overly specialized. How about mixing in negative examples such as the following?</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">{\"query\": \"how are you?\", \n \"tools\": [], \n \"answer\": \"I’m doing well—thank you for asking!\"}\n</code></pre>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://medium.com/%40saisha892001/optimizing-llms-fine-tuning-with-function-calling-7164365c5f35\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/0/f/0f95de5840ff0771b84ea77cfa42a1e98b4f1614.png\" class=\"site-icon\" data-dominant-color=\"3B3B3B\" width=\"32\" height=\"32\">\n\n <a href=\"https://medium.com/%40saisha892001/optimizing-llms-fine-tuning-with-function-calling-7164365c5f35\" target=\"_blank\" rel=\"noopener\" title=\"05:48AM - 18 February 2025\">Medium – 18 Feb 25</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/328;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/f/2/f26b7e9648d6422d5835a30eb45973f86d2f2abf_2_690x328.png\" class=\"thumbnail\" data-dominant-color=\"EBEBEB\" width=\"690\" height=\"328\"></div>\n\n<h3><a href=\"https://medium.com/%40saisha892001/optimizing-llms-fine-tuning-with-function-calling-7164365c5f35\" target=\"_blank\" rel=\"noopener\">Optimizing LLMs: Fine-Tuning with Function Calling</a></h3>\n\n <p>Function calling is highly useful when working with Large Language Models (LLMs) that need to execute specific tasks within a structured…</p>\n\n <p>\n <span class=\"label1\">Reading time: 6 min read</span>\n </p>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-12T00:37:49.457Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://medium.com/%40saisha892001/optimizing-llms-fine-tuning-with-function-calling-7164365c5f35",
"internal": false,
"reflection": false,
"title": "Optimizing LLMs: Fine-Tuning with Function Calling | by Saisha | Medium",
"clicks": 5
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232618,
"name": "Orkun Gedik",
"username": "orkungedik",
"avatar_template": "/user_avatar/discuss.huggingface.co/orkungedik/{size}/47802_2.png",
"created_at": "2025-07-13T18:40:37.715Z",
"cooked": "<p>Hi,</p>\n<p>I tried to fine-tune dataset with only two rows. Same thing happened.</p>\n<p>The thing I found out that the fine-tuned model is able generate answers to simple questions. But problem occured with large RAG prompts.</p>\n<p>Do you have any further idea about it?</p>\n<p>Thank you for your helps.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-13T18:40:37.715Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "Orkun Gedik",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61259,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 232636,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-13T23:28:51.440Z",
"cooked": "<p>I think this phenomenon is what is known as “catastrophic forgetting,” but I don’t think there is anything particularly wrong with your method…</p>\n<p>Perhaps the learning rate is too high, or something like that?</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/8/d8c4ffb86585c4f4591be71d9c6e11b57353c350_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"EEEBE4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning\" target=\"_blank\" rel=\"noopener\">Let’s Fine-Tune Your Model for Function-Calling - Hugging Face Agents Course</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 4,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-13T23:28:51.440Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 7,
"readers_count": 6,
"score": 31.4,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning",
"internal": false,
"reflection": false,
"title": null,
"clicks": 10
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232688,
"name": "Orkun Gedik",
"username": "orkungedik",
"avatar_template": "/user_avatar/discuss.huggingface.co/orkungedik/{size}/47802_2.png",
"created_at": "2025-07-14T08:59:03.912Z",
"cooked": "<p>Thank you my friend! I decreased learning rate = 1e-6 and it is better now. I learned a lot by your suggestions. Thank you again <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nCheers</p>\n<p>Orkun</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-14T08:59:03.912Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "Orkun Gedik",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61259,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/5",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 232782,
"name": "c",
"username": "chartar",
"avatar_template": "/user_avatar/discuss.huggingface.co/chartar/{size}/50975_2.png",
"created_at": "2025-07-14T14:10:14.898Z",
"cooked": "<p>The primary issue you’re encountering stems from your training dataset and system prompt setup, which are biasing the model toward always generating function calls, even when they’re unnecessary.</p>\n<p>During fine-tuning, the model never learned scenarios where no function call is needed. It overfits to the pattern of always outputting a tool call, leading to hallucinations like inventing irrelevant calls for casual prompts such as “how are you?”</p>\n<ul>\n<li>Reload your dataset, add 1,000+ non-tool examples, and retrain.</li>\n<li>If you’re still seeing weird outputs, share a sample of your dataset rows or inference code for more specific debugging.</li>\n</ul>",
"post_number": 6,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-14T14:10:14.898Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 31,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "c",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99208,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232892,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-15T02:11:01.983Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 7,
"post_type": 3,
"posts_count": 7,
"updated_at": "2025-07-15T02:11:01.983Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 162680,
"topic_slug": "fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/fine-tune-for-function-call-on-meta-llama-3-1-8b-instruct/162680/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,</p>
<p>I am trying to fine-tune Meta-Llama-3.1-8B-Instruct to make its function-call predictions better. To do that I created a dataset and followed the steps from the <a href="https://gautam75.medium.com/fine-tuning-llama-3-1-8b-for-function-calling-using-lora-159b9ee66060" class="inline-onebox" rel="noopener nofollow ugc">Fine-Tuning Llama-3.1-8B for Function Calling using LoRA | by Gautam Chutani | Medium</a> blog post. As a result, the function name and parameters are predicted perfectly, but the model is now generating spurious answers such as <code>[get_weather(city="IL")]</code> in response to prompts like “how are you?”.</p>
<p>Please find the training code snippets below:</p>
<pre><code class="lang-auto">import torch
from unsloth import FastLanguageModel
max_seq_length = 2048 # Unsloth auto supports RoPE Scaling internally!
dtype = None # None for auto detection
load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "meta-llama/Llama-3.1-8B-Instruct",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
</code></pre>
<pre><code class="lang-auto">model = FastLanguageModel.get_peft_model(
model,
r=16, # LoRA rank - suggested values: 8, 16, 32, 64, 128
target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj"],
lora_alpha=16,
lora_dropout=0, # Supports any, but = 0 is optimized
bias="none", # Supports any, but = "none" is optimized
use_gradient_checkpointing="unsloth", # Ideal for long context tuning
random_state=3407,
use_rslora=False, # Disable rank-sensitive LoRA for simpler tasks
loftq_config=None # No LoftQ, for standard fine-tuning
)
</code></pre>
<pre><code class="lang-auto">from unsloth.chat_templates import get_chat_template
# Initialize the tokenizer with the chat template and mapping
tokenizer = get_chat_template(
tokenizer,
chat_template = "llama-3",
mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
map_eos_token = True, # Maps <|im_end|> to <|eot_id|> instead
)
def formatting_prompts_func(examples):
convos = []
# Iterate through each item in the batch (examples are structured as lists of values)
for query, tools, answers in zip(examples['query'], examples['tool'], examples['answer']):
tool_user = {
"content": f"You are a helpful assistant with access to the following tools or function calls. Your task is to produce a sequence of tools or function calls necessary to generate response to the user utterance. Use the following tools or function calls as required:\n{tools}",
"role": "system"
}
ques_user = {
"content": f"{query}",
"role": "user"
}
assistant = {
"content": f"{answers}",
"role": "assistant"
}
convos.append([tool_user, ques_user, assistant])
texts = [tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False) for convo in convos]
return {"text": texts}
# Apply the formatting on dataset
dataset = dataset.map(formatting_prompts_func, batched = True,)
</code></pre>
<pre><code class="lang-auto">from transformers import TrainingArguments
args = TrainingArguments(
per_device_train_batch_size = 8, # Controls the batch size per device
gradient_accumulation_steps = 2, # Accumulates gradients to simulate a larger batch
warmup_steps = 5,
learning_rate = 2e-4, # Sets the learning rate for optimization
num_train_epochs = 2,
fp16 = not torch.cuda.is_bf16_supported(),
bf16 = torch.cuda.is_bf16_supported(),
optim = "adamw_8bit",
weight_decay = 0.01, # Regularization term for preventing overfitting
lr_scheduler_type = "linear", # Chooses a linear learning rate decay
seed = 3407,
output_dir = "outputs",
logging_steps = 1, # Sets frequency of logging to W&B
logging_strategy = "steps", # Logs metrics at each specified step
save_strategy = "no",
load_best_model_at_end = True, # Loads the best model at the end
report_to = "none",
save_only_model = False # Saves entire model, not only weights
)
</code></pre>
<pre><code class="lang-auto">from trl import SFTTrainer
trainer = SFTTrainer(
model = model,
processing_class = tokenizer,
train_dataset = dataset,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 2,
packing = False, # Can make training 5x faster for short sequences.
args = args
)
</code></pre>
<pre><code class="lang-auto">from unsloth import unsloth_train
trainer_stats = unsloth_train(trainer)
print(trainer_stats)
</code></pre>
<p>What am I missing?</p>
<p>Thank you for your help <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
|
<p>I think this phenomenon is what is known as “catastrophic forgetting,” but I don’t think there is anything particularly wrong with your method…</p>
<p>Perhaps the learning rate is too high, or something like that?</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning">
<header class="source">
<a href="https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/8/d8c4ffb86585c4f4591be71d9c6e11b57353c350_2_690x372.png" class="thumbnail" data-dominant-color="EEEBE4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/learn/agents-course/en/bonus-unit1/fine-tuning" target="_blank" rel="noopener">Let’s Fine-Tune Your Model for Function-Calling - Hugging Face Agents Course</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
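<p>For reference, here is a minimal sketch of the adjusted <code>TrainingArguments</code>, assuming the rest of the training setup stays unchanged; the lowered <code>learning_rate</code> is the only edit, and the exact value (1e-6 here, which reportedly helped in this thread) is something to tune rather than a rule:</p>
<pre data-code-wrap="python"><code class="lang-python">from transformers import TrainingArguments

# Same arguments as before; only the learning rate is lowered to reduce
# catastrophic forgetting. 1e-6 is an assumed starting point to tune.
args = TrainingArguments(
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=5,
    learning_rate=1e-6,  # was 2e-4
    num_train_epochs=2,
    optim="adamw_8bit",
    weight_decay=0.01,
    lr_scheduler_type="linear",
    seed=3407,
    output_dir="outputs",
    report_to="none",
)
</code></pre>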
|
No application file problem Docker
|
https://discuss.huggingface.co/t/no-application-file-problem-docker/162794
| 162,794
| 24
|
2025-07-12T23:26:02.708000Z
|
[
{
"id": 232473,
"name": "Eduardo Antonio",
"username": "ChuwyBanana",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/85e7bf/{size}.png",
"created_at": "2025-07-12T23:26:02.796Z",
"cooked": "<p>Hello, I am building a space with Duckling to pair it with a Rasa bot(this works).<br>\nBut for some reason, I can’t make it run because Hugging face tells me an application file lacks, while I already have a dockerfile, readme and a gitatributes(I tried adding a main.py, app.py, requirements.txt, runtime.txt), but it just doesnt work. These are some of the dockerfiles I’ve tried:</p>\n<blockquote>\n<p>Blockquote<br>\nFROM rasa/duckling:latest<br>\nEXPOSE 8000<br>\nCMD [“duckling”]</p>\n</blockquote>\n<blockquote>\n<p>Blockquote<br>\nFROM rasa/duckling:latest<br>\nEXPOSE 8000<br>\nCMD [“duckling”, “–port”, “8000”]</p>\n</blockquote>\n<blockquote>\n<p>Blockquote<br>\nFROM haskell:8<br>\nRUN apt-get update && apt-get install -y libpcre3 libpcre3-dev curl && <br>\napt-get clean && rm -rf /var/lib/apt/lists/*<br>\nRUN git clone <a href=\"https://github.com/facebook/duckling.git\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">GitHub - facebook/duckling: Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.</a> /duckling<br>\nWORKDIR /duckling<br>\nRUN stack build<br>\nEXPOSE 8000<br>\nCMD stack exec duckling-example-exe</p>\n</blockquote>\n<p>Yeah Ai might be involved here, but Idk why it doesnt work, I have already run this locally and works <img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nany help is appreciated, thx</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-12T23:26:21.678Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 67,
"reads": 10,
"readers_count": 9,
"score": 327,
"yours": false,
"topic_id": 162794,
"topic_slug": "no-application-file-problem-docker",
"display_username": "Eduardo Antonio",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/facebook/duckling.git",
"internal": false,
"reflection": false,
"title": "GitHub - facebook/duckling: Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99267,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-application-file-problem-docker/162794/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232475,
"name": "Eduardo Antonio",
"username": "ChuwyBanana",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/85e7bf/{size}.png",
"created_at": "2025-07-12T23:32:53.623Z",
"cooked": "<p>Solved, the problem was that my dockerfile was “DockerFile”. Watch out folks <img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nLoved struggling for a day</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-12T23:33:20.358Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 17,
"yours": false,
"topic_id": 162794,
"topic_slug": "no-application-file-problem-docker",
"display_username": "Eduardo Antonio",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 99267,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-application-file-problem-docker/162794/2",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232476,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-12T23:35:35.504Z",
"cooked": "<p>I think <code>Dockerfile</code> is mostly correct. In the case of Docker Space, I think the only things required in the repository are <code>README.md</code> and <code>Dockerfile</code>. So there may be an error in the <code>README.md</code> settings. <a href=\"https://huggingface.co/spaces/ChuwyBanana/whats/blob/main/README.md\">Your space, which has the correct settings, is currently working</a>.</p>\n<p>Maybe like this:</p>\n<pre data-code-wrap=\"yaml\"><code class=\"lang-yaml\">---\nsdk: docker\napp_port: 8000\n---\n</code></pre>\n<pre data-code-wrap=\"dockerfile\"><code class=\"lang-dockerfile\">FROM rasa/duckling:latest\nEXPOSE 8000\nCMD [\"duckling\", \"--port\", \"8000\"]\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-12T23:35:35.504Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 16.8,
"yours": false,
"topic_id": 162794,
"topic_slug": "no-application-file-problem-docker",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/ChuwyBanana/whats/blob/main/README.md",
"internal": false,
"reflection": false,
"title": "README.md · ChuwyBanana/whats at main",
"clicks": 3
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-application-file-problem-docker/162794/3",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232477,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-12T23:36:05.730Z",
"cooked": "<blockquote>\n<p>dockerfile was “DockerFile”.</p>\n</blockquote>\n<p>LoL😆</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-12T23:36:05.730Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 162794,
"topic_slug": "no-application-file-problem-docker",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-application-file-problem-docker/162794/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 232548,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-13T11:36:57.416Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-07-13T11:36:57.416Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 11,
"yours": false,
"topic_id": 162794,
"topic_slug": "no-application-file-problem-docker",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-application-file-problem-docker/162794/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello, I am building a Space with Duckling to pair it with a Rasa bot (this part works).<br>
But for some reason, I can’t make it run: Hugging Face tells me an application file is missing, even though I already have a Dockerfile, a README, and a .gitattributes (I also tried adding a main.py, app.py, requirements.txt, and runtime.txt), but it just doesn’t work. These are some of the Dockerfiles I’ve tried:</p>
<blockquote>
<p>FROM rasa/duckling:latest<br>
EXPOSE 8000<br>
CMD ["duckling"]</p>
</blockquote>
<blockquote>
<p>FROM rasa/duckling:latest<br>
EXPOSE 8000<br>
CMD ["duckling", "--port", "8000"]</p>
</blockquote>
<blockquote>
<p>FROM haskell:8<br>
RUN apt-get update && apt-get install -y libpcre3 libpcre3-dev curl && \<br>
apt-get clean && rm -rf /var/lib/apt/lists/*<br>
RUN git clone https://github.com/facebook/duckling.git /duckling<br>
WORKDIR /duckling<br>
RUN stack build<br>
EXPOSE 8000<br>
CMD stack exec duckling-example-exe</p>
</blockquote>
<p>Yeah, AI might be involved here, but I don’t know why it doesn’t work; I have already run this locally and it works <img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><br>
Any help is appreciated, thanks!</p>
|
<p>Solved: the problem was that my Dockerfile was named “DockerFile” instead of “Dockerfile”; the filename is case-sensitive. Watch out, folks <img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"><br>
Loved struggling with that for a day</p>
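<p>If you want to check the casing programmatically, here is a small sketch using <code>huggingface_hub</code> (the Space id is a placeholder):</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import HfApi

# List the files in a Space and confirm a file named exactly "Dockerfile"
# exists; "DockerFile" or "dockerfile" will not be picked up by the builder.
api = HfApi()
files = api.list_repo_files("your-username/your-space", repo_type="space")  # placeholder id
print("Dockerfile found" if "Dockerfile" in files else f"No Dockerfile; files: {files}")
</code></pre>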
|
What is the formal NLP term for matching text spans with variations, and what’re the recommended approaches?
|
https://discuss.huggingface.co/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347
| 157,347
| 12
|
2025-05-30T06:53:46.499000Z
|
[
{
"id": 224769,
"name": "edenyin",
"username": "edenyin",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/5e9695/{size}.png",
"created_at": "2025-05-30T06:53:46.557Z",
"cooked": "<p>I’m implementing a document analysis system that needs to locate specific text segments within larger documents. Given a reference text snippet, I need to find where this content appears in the original document(span), even when there might be slight differences in formatting, punctuation, or wording.</p>\n<p>I’d like to know:</p>\n<ol>\n<li>\n<p><strong>The formal NLP/IR terminology</strong> for this type of task. Is this considered “approximate string matching,” “span detection” or something else? Having the correct terminology will help me research existing literature and solutions. I’ve done some research on “span detection”/“span extraction”, but they might not suit my scenario that much? Because I found they’re more focused on biology or different NLP tasks like emotion extraction or Named Entity Recognition.</p>\n</li>\n<li>\n<p><strong>Recommended approaches</strong> for solving this specific problem:</p>\n</li>\n</ol>",
"post_number": 1,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-05-30T06:53:46.557Z",
"reply_count": 2,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 43,
"reads": 9,
"readers_count": 8,
"score": 211.8,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "edenyin",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95525,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224812,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-05-30T12:28:11.914Z",
"cooked": "<p>I think you are referring to possibly Approximate String Matching, Span Passage Alignment, passage/passage-level retrieval. Those should get you started.</p>\n<p>You will probably see things like TF-IDF, BM25, Dense Embeddings, etc.</p>\n<p>Hope this helps <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-05-30T12:28:12.140Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 36.6,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": "Automatically removed quote of whole previous post.",
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224895,
"name": "Brendan O'Carroll",
"username": "Needabiggermachine",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/n/c2a13f/{size}.png",
"created_at": "2025-05-31T05:37:37.547Z",
"cooked": "<p>Grep? Or other regular expressions?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-05-31T05:37:37.547Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "Brendan O'Carroll",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 88485,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225374,
"name": "edenyin",
"username": "edenyin",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/5e9695/{size}.png",
"created_at": "2025-06-03T03:29:39.992Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"Mdrnfox\" data-post=\"2\" data-topic=\"157347\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/mdrnfox/48/47695_2.png\" class=\"avatar\"> Mdrnfox:</div>\n<blockquote>\n<p>Approximate String Matching, Span Passage Alignment</p>\n</blockquote>\n</aside>\n<p>Thanks for answering!<br>\nI’ve tried those terms but I found:</p>\n<ol>\n<li><strong>Approximate String Matching / passage/passage-level retrieval</strong> focus more on the similarity between two text and less on the “span” of the original text that match the query text</li>\n<li><strong>Span Passage Alignment</strong> might be closer one but the results from search engine are most about HTML or similar techniques</li>\n</ol>\n<p>Would you mind providing me of more clue/key words? Thanks!</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-06-03T03:29:39.992Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 1,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "edenyin",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95525,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/4",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225440,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-03T09:58:53.550Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"edenyin\" data-post=\"1\" data-topic=\"157347\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/e/5e9695/48.png\" class=\"avatar\"> edenyin:</div>\n<blockquote>\n<p>I’m implementing a document analysis system that needs to locate specific text segments within larger documents. Given a reference text snippet, I need to find where this content appears in the original document(span), even when there might be slight differences in formatting, punctuation, or wording.</p>\n<p>I’d like to know:</p>\n<ol>\n<li><strong>The formal NLP/IR terminology</strong> for this type of task. Is this considered “approximate string matching,” “span detection” or something else? Having the correct terminology will help me research existing literature and solutions. I’ve done some research on “span detection”/“span extraction”, but they might not suit my scenario that much? Because I found they’re more focused on biology or different NLP tasks like emotion extraction or Named Entity Recognition.</li>\n</ol>\n</blockquote>\n</aside>\n<p>Embedding based semantic span matching, a custom span prediction model, fuzzy token based matching? That’s all I can think of</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-06-03T09:58:53.550Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 1,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231891,
"name": "edenyin",
"username": "edenyin",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/5e9695/{size}.png",
"created_at": "2025-07-09T15:26:28.014Z",
"cooked": "<p>I’ve found the most relevant terminology which is <strong>NLI alignment</strong>(Natural Language Inference alignment)</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-07-09T15:26:28.014Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "edenyin",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95525,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/6",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231975,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-10T03:27:26.108Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 7,
"post_type": 3,
"posts_count": 7,
"updated_at": "2025-07-10T03:27:26.108Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 157347,
"topic_slug": "what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-formal-nlp-term-for-matching-text-spans-with-variations-and-whatre-the-recommended-approaches/157347/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m implementing a document analysis system that needs to locate specific text segments within larger documents. Given a reference text snippet, I need to find where this content appears in the original document (the span), even when there might be slight differences in formatting, punctuation, or wording.</p>
<p>I’d like to know:</p>
<ol>
<li>
<p><strong>The formal NLP/IR terminology</strong> for this type of task. Is this considered “approximate string matching,” “span detection”, or something else? Having the correct terminology will help me research existing literature and solutions. I’ve done some research on “span detection”/“span extraction”, but they might not suit my scenario that well, because I found they’re more focused on biology or on different NLP tasks such as emotion extraction and Named Entity Recognition.</p>
</li>
<li>
<p><strong>Recommended approaches</strong> for solving this specific problem:</p>
</li>
</ol>
|
<p>I’ve found the most relevant terminology, which is <strong>NLI alignment</strong> (Natural Language Inference alignment).</p>
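<p>As a practical baseline alongside that terminology, here is a minimal sketch of approximate span matching using Python’s standard-library <code>difflib</code>; the snippet-sized sliding window and the 0.8 threshold are assumptions to tune:</p>
<pre data-code-wrap="python"><code class="lang-python">from difflib import SequenceMatcher

def find_approx_span(document: str, snippet: str, threshold: float = 0.8):
    """Slide a snippet-sized window over the document and return
    (start, end, score) for the best approximately matching span.
    Brute-force baseline; fine for short documents."""
    n = len(snippet)
    best = (0, 0, 0.0)
    for start in range(max(1, len(document) - n + 1)):
        score = SequenceMatcher(None, document[start:start + n], snippet).ratio()
        if score > best[2]:
            best = (start, start + n, score)
    return best if best[2] >= threshold else None

doc = "The quick brown fox jumps over the lazy dog."
print(find_approx_span(doc, "quick brown Fox jumps"))  # tolerates the casing change
</code></pre>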
|
An hour of silent building
|
https://discuss.huggingface.co/t/an-hour-of-silent-building/161670
| 161,670
| 5
|
2025-07-03T11:03:45.077000Z
|
[
{
"id": 230883,
"name": "Mukund",
"username": "mukundsubramanian",
"avatar_template": "/user_avatar/discuss.huggingface.co/mukundsubramanian/{size}/50568_2.png",
"created_at": "2025-07-03T11:03:45.141Z",
"cooked": "<p>Im trying to build a chatbot for a website , although all the changes made to the files has been saved, the building log shows nothing , its just a blank screen , this has been happening for the past 2 hours<br>\nI tried factory restarting , but I still face the same issue<br>\nThis was not case yesterday, every single change made to the files, triggered a new building phase<br>\nkindly help me out y’all</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-03T11:05:10.018Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 12,
"readers_count": 11,
"score": 52.4,
"yours": false,
"topic_id": 161670,
"topic_slug": "an-hour-of-silent-building",
"display_username": "Mukund",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98566,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/an-hour-of-silent-building/161670/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230888,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-03T11:25:57.971Z",
"cooked": "<p>When the stack freezes in the Building or Preparing state with no log, it is often quicker to download (clone) the source code and upload it to a new repository.</p>\n<p>That said, I don’t think there is anything suspicious about your Spaces code or setup…<br>\nWell, it seems that sometimes that flag can be set unexpectedly due to some error.</p><aside class=\"quote\" data-post=\"2\" data-topic=\"161197\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/stuck-on-preparing-space-multi-tech-stack-docker-deployment-issue-python-java-angular/161197/2\">Stuck on 'Preparing Space' - Multi-Tech Stack Docker Deployment Issue (Python, Java, Angular)</a> <a class=\"badge-category__wrapper \" href=\"/c/spaces/24\"><span data-category-id=\"24\" style=\"--category-badge-color: #25AAE2; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category to ask any questions about Spaces or to share your work.\"><span class=\"badge-category__name\">Spaces</span></span></a>\n </div>\n <blockquote>\n If the Space is too complex, there is a possibility that it contains programs that are subject to shadow banning. However, if that is not the case, the easiest workaround is to create a new Space and upload the same source code.\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-03T11:25:57.971Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 12,
"readers_count": 11,
"score": 17.4,
"yours": false,
"topic_id": 161670,
"topic_slug": "an-hour-of-silent-building",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/stuck-on-preparing-space-multi-tech-stack-docker-deployment-issue-python-java-angular/161197/2",
"internal": true,
"reflection": false,
"title": "Stuck on 'Preparing Space' - Multi-Tech Stack Docker Deployment Issue (Python, Java, Angular)",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/an-hour-of-silent-building/161670/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231820,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-09T08:53:03.626Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-09T08:53:03.626Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 161670,
"topic_slug": "an-hour-of-silent-building",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/an-hour-of-silent-building/161670/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m trying to build a chatbot for a website. Although all the changes made to the files have been saved, the build log shows nothing, just a blank screen, and this has been happening for the past 2 hours.<br>
I tried a factory restart, but I still face the same issue.<br>
This was not the case yesterday, when every single change made to the files triggered a new building phase.<br>
Kindly help me out, y’all.</p>
|
<p>When a Space freezes in the Building or Preparing state with no log output, it is often quicker to download (clone) the source code and upload it to a new repository.</p>
<p>That said, I don’t think there is anything suspicious about your Space’s code or setup…<br>
Well, it seems that flag can sometimes be set unexpectedly due to some error.</p><aside class="quote" data-post="2" data-topic="161197">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/stuck-on-preparing-space-multi-tech-stack-docker-deployment-issue-python-java-angular/161197/2">Stuck on 'Preparing Space' - Multi-Tech Stack Docker Deployment Issue (Python, Java, Angular)</a> <a class="badge-category__wrapper " href="/c/spaces/24"><span data-category-id="24" style="--category-badge-color: #25AAE2; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="Use this category to ask any questions about Spaces or to share your work."><span class="badge-category__name">Spaces</span></span></a>
</div>
<blockquote>
If the Space is too complex, there is a possibility that it contains programs that are subject to shadow banning. However, if that is not the case, the easiest workaround is to create a new Space and upload the same source code.
</blockquote>
</aside>
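<p>For reference, a minimal <code>huggingface_hub</code> sketch of that workaround (the Space ids and the <code>space_sdk</code> value are placeholders) could look like this:</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import HfApi, snapshot_download

# Download the stuck Space's files, then re-upload them to a fresh Space.
api = HfApi()
local_dir = snapshot_download("your-username/stuck-space", repo_type="space")  # placeholder id
api.create_repo("your-username/fresh-space", repo_type="space", space_sdk="gradio")  # placeholder
api.upload_folder(folder_path=local_dir, repo_id="your-username/fresh-space", repo_type="space")
</code></pre>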
|
[License Agreement Error] runwayml/stable-diffusion-v1-5 returns 404
|
https://discuss.huggingface.co/t/license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404/161673
| 161,673
| 13
|
2025-07-03T11:20:47.407000Z
|
[
{
"id": 230886,
"name": "aki",
"username": "aki0327",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4bbf92/{size}.png",
"created_at": "2025-07-03T11:20:47.461Z",
"cooked": "<p>Hello, I am trying to download the <code>runwayml/stable-diffusion-v1-5</code> checkpoint to use with Automatic1111 for DreamBooth training. However, the page shows a 404 error, and I cannot see or accept the license agreement. Because of this, I cannot proceed with the model download.</p>\n<p>Could you please reset my license status or grant me access to this model?<br>\nMy Hugging Face username is: <strong>aki0327</strong><br>\nThank you for your help.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-03T11:20:47.461Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 58,
"reads": 11,
"readers_count": 10,
"score": 307.2,
"yours": false,
"topic_id": 161673,
"topic_slug": "license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404",
"display_username": "aki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98326,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404/161673/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230889,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-03T11:27:34.007Z",
"cooked": "<blockquote>\n<p><code>runwayml/stable-diffusion-v1-5</code></p>\n</blockquote>\n<p>Since <em>this repository itself has been deleted</em>, I think it will work if you use the following repository with the same content. <code>stable-diffusion-v1-5/stable-diffusion-v1-5</code></p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/0/00c2ce823dd938754b1a84551475f005e29fa20e_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5C71A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5\" target=\"_blank\" rel=\"noopener\">stable-diffusion-v1-5/stable-diffusion-v1-5 · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-03T23:52:38.249Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 11,
"reads": 11,
"readers_count": 10,
"score": 67.2,
"yours": false,
"topic_id": 161673,
"topic_slug": "license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5",
"internal": false,
"reflection": false,
"title": "stable-diffusion-v1-5/stable-diffusion-v1-5 · Hugging Face",
"clicks": 39
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404/161673/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230919,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-07-03T15:35:13.440Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/aki0327\">@aki0327</a> If you’re seeing a 404 message when you try to access a model, it can be due to the model not existing (either due to being deleted or because there’s a typo in the URL), or because the owners of the model have set the visibility of the model to ‘private’.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-03T15:35:13.440Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 9,
"readers_count": 8,
"score": 26.8,
"yours": false,
"topic_id": 161673,
"topic_slug": "license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404/161673/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231760,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-09T03:33:00.923Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-09T03:33:00.923Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 161673,
"topic_slug": "license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/license-agreement-error-runwayml-stable-diffusion-v1-5-returns-404/161673/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello, I am trying to download the <code>runwayml/stable-diffusion-v1-5</code> checkpoint to use with Automatic1111 for DreamBooth training. However, the page shows a 404 error, and I cannot see or accept the license agreement. Because of this, I cannot proceed with the model download.</p>
<p>Could you please reset my license status or grant me access to this model?<br>
My Hugging Face username is: <strong>aki0327</strong><br>
Thank you for your help.</p>
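<p>For reference, a minimal sketch for checking the repo status programmatically (assuming the <code>huggingface_hub</code> package; the exception ordering matters because <code>GatedRepoError</code> subclasses <code>RepositoryNotFoundError</code>):</p>
<pre data-code-wrap="py"><code class="lang-py"># Minimal sketch: distinguish "deleted/private" from "gated" for a repo id.
from huggingface_hub import model_info
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    info = model_info("runwayml/stable-diffusion-v1-5")
    print("Repo exists:", info.id)
except GatedRepoError:
    print("Repo is gated; accept the license on the model page first.")
except RepositoryNotFoundError:
    print("Repo was deleted, made private, or the id has a typo.")
</code></pre>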
|
<blockquote>
<p><code>runwayml/stable-diffusion-v1-5</code></p>
</blockquote>
<p>Since <em>this repository itself has been deleted</em>, it should work if you use the following repository instead, which hosts the same content: <code>stable-diffusion-v1-5/stable-diffusion-v1-5</code></p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5">
<header class="source">
<a href="https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/0/00c2ce823dd938754b1a84551475f005e29fa20e_2_690x372.png" class="thumbnail" data-dominant-color="5C71A4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5" target="_blank" rel="noopener">stable-diffusion-v1-5/stable-diffusion-v1-5 · Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
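<p>A minimal sketch of loading the replacement repo with <code>diffusers</code> (standard usage; only the repo id above is specific to this fix):</p>
<pre data-code-wrap="py"><code class="lang-py"># Minimal sketch: load the re-uploaded SD 1.5 weights with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
</code></pre>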
|
Difference between model.onnx and model.onnx.data
|
https://discuss.huggingface.co/t/difference-between-model-onnx-and-model-onnx-data/162032
| 162,032
| 59
|
2025-07-07T11:02:27.677000Z
|
[
{
"id": 231432,
"name": "Ravi kiran",
"username": "Rkoy",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/r/35a633/{size}.png",
"created_at": "2025-07-07T11:02:27.742Z",
"cooked": "<p>Hi team, i am new to optimum and have used the onnxruntime library a bit previously.<br>\nWhen i try to convert a model using onnxruntime, i get only one output file say <code>model.onnx</code><br>\nbut when i tried the below command of the optimum,<br>\n!optimum-cli export onnx --model BAAI/bge-m3 bge-m3-onnx-model<br>\nthere were 2 file 1) model.onnx. 2) model.onnx.data</p>\n<p>I though that i will only be getting one file named model.onnx.<br>\nCan anyone please explain me this.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-07T11:02:27.742Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 135,
"reads": 5,
"readers_count": 4,
"score": 551,
"yours": false,
"topic_id": 162032,
"topic_slug": "difference-between-model-onnx-and-model-onnx-data",
"display_username": "Ravi kiran",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8477,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/difference-between-model-onnx-and-model-onnx-data/162032/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231544,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-07T23:59:17.626Z",
"cooked": "<p>When converting large models to ONNX, External Data (<code>.data</code>) seems to be output at the same time.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://onnxruntime.ai/docs/tutorials/web/large-models.html\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/0/e/0e8779352699020356b05cf742a02aa8bc4d2d99.png\" class=\"site-icon\" data-dominant-color=\"999999\" width=\"17\" height=\"16\">\n\n <a href=\"https://onnxruntime.ai/docs/tutorials/web/large-models.html\" target=\"_blank\" rel=\"noopener\">onnxruntime</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://onnxruntime.ai/docs/tutorials/web/large-models.html\" target=\"_blank\" rel=\"noopener\">Working with Large Models</a></h3>\n\n <p>Working with Large Models in ONNX Runtime Web</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-07T23:59:17.626Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 10.8,
"yours": false,
"topic_id": 162032,
"topic_slug": "difference-between-model-onnx-and-model-onnx-data",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://onnxruntime.ai/docs/tutorials/web/large-models.html",
"internal": false,
"reflection": false,
"title": "Working with Large Models | onnxruntime",
"clicks": 44
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/difference-between-model-onnx-and-model-onnx-data/162032/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231633,
"name": "Ravi kiran",
"username": "Rkoy",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/r/35a633/{size}.png",
"created_at": "2025-07-08T09:17:18.333Z",
"cooked": "<p>Thanks for the response <a class=\"mention\" href=\"/u/john6666\">@John6666</a> . The article cleared many doubts.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-08T09:17:18.333Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 162032,
"topic_slug": "difference-between-model-onnx-and-model-onnx-data",
"display_username": "Ravi kiran",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 8477,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/difference-between-model-onnx-and-model-onnx-data/162032/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 231731,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-08T21:17:55.468Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-08T21:17:55.468Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 1,
"readers_count": 0,
"score": 10.2,
"yours": false,
"topic_id": 162032,
"topic_slug": "difference-between-model-onnx-and-model-onnx-data",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/difference-between-model-onnx-and-model-onnx-data/162032/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi team, I am new to optimum, though I have used the onnxruntime library a bit before.<br>
When I convert a model with onnxruntime, I get a single output file, say <code>model.onnx</code>,<br>
but when I tried the optimum command below,<br>
!optimum-cli export onnx --model BAAI/bge-m3 bge-m3-onnx-model<br>
I got two files: 1) model.onnx and 2) model.onnx.data.</p>
<p>I thought I would only get one file named model.onnx.<br>
Can anyone please explain this?</p>
|
<p>When converting a large model to ONNX, the weights are written to an external-data file (<code>.data</code>) alongside the graph. A single ONNX protobuf file is limited to 2 GB, so larger models are split this way:</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://onnxruntime.ai/docs/tutorials/web/large-models.html">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/0/e/0e8779352699020356b05cf742a02aa8bc4d2d99.png" class="site-icon" data-dominant-color="999999" width="17" height="16">
<a href="https://onnxruntime.ai/docs/tutorials/web/large-models.html" target="_blank" rel="noopener">onnxruntime</a>
</header>
<article class="onebox-body">
<h3><a href="https://onnxruntime.ai/docs/tutorials/web/large-models.html" target="_blank" rel="noopener">Working with Large Models</a></h3>
<p>Working with Large Models in ONNX Runtime Web</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
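<p>Nothing special is needed at load time; a minimal sketch, assuming the output directory from the command above:</p>
<pre data-code-wrap="py"><code class="lang-py"># Minimal sketch: onnxruntime resolves model.onnx.data automatically,
# as long as it sits in the same directory as model.onnx.
import onnxruntime as ort

session = ort.InferenceSession("bge-m3-onnx-model/model.onnx")
print([inp.name for inp in session.get_inputs()])
</code></pre>
<p>The two files must be kept (and shipped) together; moving <code>model.onnx</code> on its own will make the session fail to find its weights.</p>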
|
Accuracy decreasing after saving/reloading my model
|
https://discuss.huggingface.co/t/accuracy-decreasing-after-saving-reloading-my-model/162034
| 162,034
| 9
|
2025-07-07T11:19:18.982000Z
|
[
{
"id": 231435,
"name": "Cristian Pérez",
"username": "cperezln",
"avatar_template": "/user_avatar/discuss.huggingface.co/cperezln/{size}/50723_2.png",
"created_at": "2025-07-07T11:19:19.043Z",
"cooked": "<p>Hi there,<br>\nI am pretty newbie to the transformers (DL in general), and I am having some problems figuring out the following:<br>\nI have trained ‘tiny-bert’ following a knowledge distillation process from a finetuned ‘bert-base-cased’, where the goal was to do sentiment anlysis. Here is the code that shows this process:</p>\n<pre><code class=\"lang-auto\">from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, get_scheduler\nfrom datasets import load_dataset\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.optim import AdamW\nimport copy\nimport numpy as np\n\n# ========== 1. Configuración ==========\ncheckpoint = \"bert-base-cased\"\nbatch_size = 8\nnum_epochs = 10\nlearning_rate = 5e-5\ndistill_temp = 3.0\nsoft_target_loss_w = 0.5\nnll_loss_weight = 0.5\nreduced_hidden_dim = 1028\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n# ========== 2. Tokenización ==========\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\n\ndef tokenize_input(examples):\n return tokenizer(examples['text'], truncation=True, padding=True, max_length=512)\n\n# ========== 3. Dataset ==========\nds = load_dataset(\"stanfordnlp/imdb\")\nds = ds.map(tokenize_input, batched=True)\nds = ds.remove_columns(['text'])\nds = ds.rename_column('label', 'labels')\n\n# Creamos validación (10% del train)\nds = ds['train'].train_test_split(test_size=0.1)\ntrain_dataset = ds['train']\neval_dataset = ds['test']\ntest_dataset = load_dataset(\"stanfordnlp/imdb\", split=\"test\")\ntest_dataset = test_dataset.map(tokenize_input, batched=True)\ntest_dataset = test_dataset.remove_columns(['text'])\ntest_dataset = test_dataset.rename_column('label', 'labels')\n\n# ========== 4. Dataloaders ==========\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\ntrain_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=data_collator)\neval_dataloader = DataLoader(eval_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)\ntest_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)\n\n# ========== 5. Modelos ==========\nmodel_teacher = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)\nmodel_teacher.load_state_dict(torch.load(\"models/bert_imbd_classifier.bin\", map_location=\"cpu\"))\nmodel_teacher.to(device)\nmodel_teacher.eval()\n\n# ========== 6. Modelo Estudiante ==========\nmodel_student = AutoModelForSequenceClassification.from_pretrained(\"prajjwal1/bert-tiny\", num_labels=2)\n\nmodel_student.to(device)\n\n# ========== 7. Optimizer y scheduler ==========\noptimizer = AdamW(model_student.parameters(), lr=learning_rate)\nnum_training_steps = num_epochs * len(train_dataloader)\nlr_scheduler = get_scheduler(\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)\n\n# ========== 8. Función de pérdida ==========\nkd_loss_fn = nn.KLDivLoss(reduction=\"batchmean\")\nce_loss_fn = nn.CrossEntropyLoss()\n\n# ========== 9. 
Entrenamiento con distilación ==========\nmodel_student.train()\nfor epoch in range(num_epochs):\n total_loss = 0\n model_student.train()\n\n for batch in train_dataloader:\n batch = {k: v.to(device) for k, v in batch.items()}\n optimizer.zero_grad()\n\n with torch.no_grad():\n teacher_outputs = model_teacher(**batch)\n soft_targets = nn.functional.softmax(teacher_outputs.logits / distill_temp, dim=-1)\n\n student_outputs = model_student(**batch)\n student_logits = student_outputs.logits\n soft_preds = nn.functional.log_softmax(student_logits / distill_temp, dim=-1)\n\n # Distillation loss\n loss_kd = kd_loss_fn(soft_preds, soft_targets) * (distill_temp ** 2)\n\n # CrossEntropy loss\n loss_ce = ce_loss_fn(student_logits, batch['labels'])\n\n loss = soft_target_loss_w * loss_kd + nll_loss_weight * loss_ce\n loss.backward()\n optimizer.step()\n lr_scheduler.step()\n total_loss += loss.item()\n\n avg_loss = total_loss / len(train_dataloader)\n print(f\"[Epoch {epoch+1}/{num_epochs}] Loss: {avg_loss:.4f}\")\n\n# ========== 10. Evaluación final ==========\nmodel_student.eval()\ncorrect = 0\ntotal = 0\nwith torch.no_grad():\n for batch in test_dataloader:\n batch = {k: v.to(device) for k, v in batch.items()}\n outputs = model_student(**batch)\n preds = torch.argmax(outputs.logits, dim=-1)\n correct += (preds == batch[\"labels\"]).sum().item()\n total += batch[\"labels\"].size(0)\n\naccuracy = correct / total\nprint(f\"Accuracy final del modelo estudiante: {accuracy:.4f}\")\n\n# ========== 11. Guardar modelo ==========\ntorch.save(model_student.state_dict(), \"models/student_model.bin\")\n\nmodel_student.save_pretrained(\"student_model/\")\n\n</code></pre>\n<p>I end up with good enough Acc (around 89%, which, for my use case, it is okay).</p>\n<p>The problem is that, when I reload the model, the Acc over the same test dataset decreases significally, up to 50% (i.e, behave as it was never trained in the first place).</p>\n<pre><code class=\"lang-auto\">from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, get_scheduler\nfrom datasets import load_dataset\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.optim import AdamW\nimport copy\nimport numpy as np\n \n# ======= 1. Configuración =======\ncheckpoint = \"prajjwal1/bert-tiny\"\nbatch_size = 8\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n# ======= 2. Tokenización =======\ndef tokenize_input(examples):\n return tokenizer(examples[\"text\"], padding = True, truncation = True, max_length = 512)\n\nif __name__ == \"__main__\":\n tokenizer = AutoTokenizer.from_pretrained(checkpoint)\n # ======= 3. Carga del dataset =======\n ds = load_dataset(\"stanfordnlp/imdb\", split = \"test\")\n ds = ds.map(tokenize_input, batched=True)\n ds = ds.remove_columns([\"text\"])\n ds = ds.rename_column(\"label\", \"labels\")\n test_dataset = ds\n\n # ======= 4. Creamos el dataloader =======\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\n test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)\n\n # ======= 5. Cargamos el modelo =======\n model_pretrained = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels = 2)\n model_pretrained.load_state_dict(torch.load(\"models/student_model.bin\"))\n model_pretrained.to(device)\n model_pretrained.eval()\n\n # ======= 6. Evaluamos el modelo preentrenado. 
En principio, 86% =======\n correct = 0\n total = 0\n with torch.no_grad():\n for batch in test_dataloader:\n batch = {k: v.to(device) for k, v in batch.items()}\n outputs = model_pretrained(**batch)\n preds = torch.argmax(outputs.logits, dim = -1)\n correct += (preds == batch[\"labels\"]).sum().item()\n total += batch[\"labels\"].size(0)\n\n acc = correct / total\n print(f\"Modelo preentrenado con acc final {acc:.4f}\")\n\n\n</code></pre>\n<p>As I said, I am pretty newbie to DL, so if you find any other problem in the code not related to the question, I’d appreciate if you communicate it to me.</p>\n<p>Thanks in advance! <img src=\"https://emoji.discourse-cdn.com/apple/blush.png?v=14\" title=\":blush:\" class=\"emoji\" alt=\":blush:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-07T11:19:19.043Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 12,
"reads": 3,
"readers_count": 2,
"score": 75.6,
"yours": false,
"topic_id": 162034,
"topic_slug": "accuracy-decreasing-after-saving-reloading-my-model",
"display_username": "Cristian Pérez",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98810,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/accuracy-decreasing-after-saving-reloading-my-model/162034/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231546,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-08T00:20:40.223Z",
"cooked": "<p>I think you forgot to save and load the tokenizer.</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"># after finishing training…\nmodel_student.eval() \nmodel_student.save_pretrained(\"student_model/\") # saves config.json + pytorch_model.bin\ntokenizer.save_pretrained(\"student_model/\") # saves tokenizer.json + vocab files\n\n# when reloading...\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\nmodel = AutoModelForSequenceClassification.from_pretrained(\"student_model/\")\ntokenizer = AutoTokenizer.from_pretrained(\"student_model/\")\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-08T00:20:40.223Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 5.4,
"yours": false,
"topic_id": 162034,
"topic_slug": "accuracy-decreasing-after-saving-reloading-my-model",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/accuracy-decreasing-after-saving-reloading-my-model/162034/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231584,
"name": "Cristian Pérez",
"username": "cperezln",
"avatar_template": "/user_avatar/discuss.huggingface.co/cperezln/{size}/50723_2.png",
"created_at": "2025-07-08T06:57:38.313Z",
"cooked": "<p>Yeah, pretty much that was it.<br>\nThx!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-08T06:57:38.313Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 15.4,
"yours": false,
"topic_id": 162034,
"topic_slug": "accuracy-decreasing-after-saving-reloading-my-model",
"display_username": "Cristian Pérez",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98810,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/accuracy-decreasing-after-saving-reloading-my-model/162034/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231718,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-08T18:57:54.441Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-08T18:57:54.441Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 162034,
"topic_slug": "accuracy-decreasing-after-saving-reloading-my-model",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/accuracy-decreasing-after-saving-reloading-my-model/162034/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi there,<br>
I am pretty new to transformers (and to DL in general), and I am having trouble figuring out the following:<br>
I have trained ‘bert-tiny’ via knowledge distillation from a finetuned ‘bert-base-cased’, with sentiment analysis as the goal. Here is the code for this process:</p>
<pre><code class="lang-auto">from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, get_scheduler
from datasets import load_dataset
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.optim import AdamW
import copy
import numpy as np
# ========== 1. Configuración ==========
checkpoint = "bert-base-cased"
batch_size = 8
num_epochs = 10
learning_rate = 5e-5
distill_temp = 3.0
soft_target_loss_w = 0.5
nll_loss_weight = 0.5
reduced_hidden_dim = 1028
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ========== 2. Tokenización ==========
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
def tokenize_input(examples):
return tokenizer(examples['text'], truncation=True, padding=True, max_length=512)
# ========== 3. Dataset ==========
ds = load_dataset("stanfordnlp/imdb")
ds = ds.map(tokenize_input, batched=True)
ds = ds.remove_columns(['text'])
ds = ds.rename_column('label', 'labels')
# Creamos validación (10% del train)
ds = ds['train'].train_test_split(test_size=0.1)
train_dataset = ds['train']
eval_dataset = ds['test']
test_dataset = load_dataset("stanfordnlp/imdb", split="test")
test_dataset = test_dataset.map(tokenize_input, batched=True)
test_dataset = test_dataset.remove_columns(['text'])
test_dataset = test_dataset.rename_column('label', 'labels')
# ========== 4. Dataloaders ==========
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=data_collator)
eval_dataloader = DataLoader(eval_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)
# ========== 5. Modelos ==========
model_teacher = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model_teacher.load_state_dict(torch.load("models/bert_imbd_classifier.bin", map_location="cpu"))
model_teacher.to(device)
model_teacher.eval()
# ========== 6. Modelo Estudiante ==========
model_student = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny", num_labels=2)
model_student.to(device)
# ========== 7. Optimizer y scheduler ==========
optimizer = AdamW(model_student.parameters(), lr=learning_rate)
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler("linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)
# ========== 8. Función de pérdida ==========
kd_loss_fn = nn.KLDivLoss(reduction="batchmean")
ce_loss_fn = nn.CrossEntropyLoss()
# ========== 9. Entrenamiento con distilación ==========
model_student.train()
for epoch in range(num_epochs):
total_loss = 0
model_student.train()
for batch in train_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
optimizer.zero_grad()
with torch.no_grad():
teacher_outputs = model_teacher(**batch)
soft_targets = nn.functional.softmax(teacher_outputs.logits / distill_temp, dim=-1)
student_outputs = model_student(**batch)
student_logits = student_outputs.logits
soft_preds = nn.functional.log_softmax(student_logits / distill_temp, dim=-1)
# Distillation loss
loss_kd = kd_loss_fn(soft_preds, soft_targets) * (distill_temp ** 2)
# CrossEntropy loss
loss_ce = ce_loss_fn(student_logits, batch['labels'])
loss = soft_target_loss_w * loss_kd + nll_loss_weight * loss_ce
loss.backward()
optimizer.step()
lr_scheduler.step()
total_loss += loss.item()
avg_loss = total_loss / len(train_dataloader)
print(f"[Epoch {epoch+1}/{num_epochs}] Loss: {avg_loss:.4f}")
# ========== 10. Evaluación final ==========
model_student.eval()
correct = 0
total = 0
with torch.no_grad():
for batch in test_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model_student(**batch)
preds = torch.argmax(outputs.logits, dim=-1)
correct += (preds == batch["labels"]).sum().item()
total += batch["labels"].size(0)
accuracy = correct / total
print(f"Accuracy final del modelo estudiante: {accuracy:.4f}")
# ========== 11. Guardar modelo ==========
torch.save(model_student.state_dict(), "models/student_model.bin")
model_student.save_pretrained("student_model/")
</code></pre>
<p>I end up with good enough accuracy (around 89%, which is okay for my use case).</p>
<p>The problem is that, when I reload the model, the accuracy on the same test dataset drops significantly, down to about 50% (i.e., it behaves as if it had never been trained in the first place).</p>
<pre><code class="lang-auto">from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, get_scheduler
from datasets import load_dataset
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.optim import AdamW
import copy
import numpy as np
# ======= 1. Configuración =======
checkpoint = "prajjwal1/bert-tiny"
batch_size = 8
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ======= 2. Tokenización =======
def tokenize_input(examples):
return tokenizer(examples["text"], padding = True, truncation = True, max_length = 512)
if __name__ == "__main__":
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# ======= 3. Carga del dataset =======
ds = load_dataset("stanfordnlp/imdb", split = "test")
ds = ds.map(tokenize_input, batched=True)
ds = ds.remove_columns(["text"])
ds = ds.rename_column("label", "labels")
test_dataset = ds
# ======= 4. Creamos el dataloader =======
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=data_collator)
# ======= 5. Cargamos el modelo =======
model_pretrained = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels = 2)
model_pretrained.load_state_dict(torch.load("models/student_model.bin"))
model_pretrained.to(device)
model_pretrained.eval()
# ======= 6. Evaluamos el modelo preentrenado. En principio, 86% =======
correct = 0
total = 0
with torch.no_grad():
for batch in test_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model_pretrained(**batch)
preds = torch.argmax(outputs.logits, dim = -1)
correct += (preds == batch["labels"]).sum().item()
total += batch["labels"].size(0)
acc = correct / total
print(f"Modelo preentrenado con acc final {acc:.4f}")
</code></pre>
<p>As I said, I am pretty new to DL, so if you spot any other problem in the code, even one unrelated to the question, I’d appreciate you pointing it out.</p>
<p>Thanks in advance! <img src="https://emoji.discourse-cdn.com/apple/blush.png?v=14" title=":blush:" class="emoji" alt=":blush:" loading="lazy" width="20" height="20"></p>
|
<p>I think you forgot to save and load the tokenizer. Your training script tokenizes with the <code>bert-base-cased</code> tokenizer, but the reload script builds its tokenizer from <code>prajjwal1/bert-tiny</code>, so the model receives different input ids than it was trained on.</p>
<pre data-code-wrap="py"><code class="lang-py"># after finishing training…
model_student.eval()
model_student.save_pretrained("student_model/") # saves config.json + pytorch_model.bin
tokenizer.save_pretrained("student_model/") # saves tokenizer.json + vocab files
# when reloading...
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("student_model/")
tokenizer = AutoTokenizer.from_pretrained("student_model/")
</code></pre>
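<p>A quick way to see the mismatch (a minimal sketch; the sample sentence is arbitrary):</p>
<pre data-code-wrap="py"><code class="lang-py"># Minimal sketch: the two tokenizers in the scripts above produce different
# input ids, so the reloaded model is fed inputs it never saw in training.
from transformers import AutoTokenizer

tok_train = AutoTokenizer.from_pretrained("bert-base-cased")       # training script
tok_reload = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")  # reload script

text = "This movie was AMAZING!"
print(tok_train(text)["input_ids"])
print(tok_reload(text)["input_ids"])  # different ids for the same text
</code></pre>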
|
Retraining Individual Words
|
https://discuss.huggingface.co/t/retraining-individual-words/161229
| 161,229
| 5
|
2025-06-30T18:47:55.452000Z
|
[
{
"id": 230203,
"name": "John Dattilo",
"username": "dattilojohn",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/9dc877/{size}.png",
"created_at": "2025-06-30T18:47:55.512Z",
"cooked": "<p>What is a good sample size for retraining individual words? I retrained using 50 good and 50 bad examples for a word but was hoping that a smaller sample size would also still be efficient?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-30T18:47:55.512Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 5,
"readers_count": 4,
"score": 31,
"yours": false,
"topic_id": 161229,
"topic_slug": "retraining-individual-words",
"display_username": "John Dattilo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98306,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/retraining-individual-words/161229/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230233,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-01T00:23:58.944Z",
"cooked": "<p>I think it depends greatly on the size of the model, but with a small model, it seems possible to teach one word with a dataset of around 200. If all goes well, it seems that less than 500 sentences may be enough to train one word.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://arxiv.org/html/2411.03350v1\">\n <header class=\"source\">\n\n <a href=\"https://arxiv.org/html/2411.03350v1\" target=\"_blank\" rel=\"noopener\">arxiv.org</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://arxiv.org/html/2411.03350v1\" target=\"_blank\" rel=\"noopener\">A Comprehensive Survey of Small Language Models in the Era of Large Language...</a></h3>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/6/366309b72090843accd886395b8c67de88c17a0c.png\" class=\"site-icon\" data-dominant-color=\"4D5F6F\" width=\"48\" height=\"48\">\n\n <a href=\"https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/\" target=\"_blank\" rel=\"noopener\">PubMed Central (PMC)</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/360;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/c/1c2b2fab27273d4bb02dc0a9b2efa3389fa20ffe_2_690x360.jpeg\" class=\"thumbnail\" data-dominant-color=\"385B82\" width=\"690\" height=\"360\"></div>\n\n<h3><a href=\"https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/\" target=\"_blank\" rel=\"noopener\">Sample Size Considerations for Fine-Tuning Large Language Models for Named...</a></h3>\n\n <p>Large language models (LLMs) have the potential to support promising new applications in health informatics. However, practical data on sample size considerations for fine-tuning LLMs to perform specific tasks in biomedical and health policy ...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-01T00:23:58.944Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 161229,
"topic_slug": "retraining-individual-words",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/",
"internal": false,
"reflection": false,
"title": "Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study - PMC",
"clicks": 2
},
{
"url": "https://arxiv.org/html/2411.03350v1",
"internal": false,
"reflection": false,
"title": "A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Applications, Collaboration with LLMs, and Trustworthiness",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/retraining-individual-words/161229/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231339,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-06T21:43:28.623Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-06T21:43:28.623Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 161229,
"topic_slug": "retraining-individual-words",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/retraining-individual-words/161229/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>What is a good sample size for retraining individual words? I retrained with 50 good and 50 bad examples per word, but I was hoping that a smaller sample size would still be effective.</p>
|
<p>I think it depends greatly on the size of the model, but with a small model it seems possible to teach one word with a dataset of around 200 examples; even in less favorable cases, fewer than 500 sentences may be enough for one word.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://arxiv.org/html/2411.03350v1">
<header class="source">
<a href="https://arxiv.org/html/2411.03350v1" target="_blank" rel="noopener">arxiv.org</a>
</header>
<article class="onebox-body">
<h3><a href="https://arxiv.org/html/2411.03350v1" target="_blank" rel="noopener">A Comprehensive Survey of Small Language Models in the Era of Large Language...</a></h3>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/3/6/366309b72090843accd886395b8c67de88c17a0c.png" class="site-icon" data-dominant-color="4D5F6F" width="48" height="48">
<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/" target="_blank" rel="noopener">PubMed Central (PMC)</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/360;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/c/1c2b2fab27273d4bb02dc0a9b2efa3389fa20ffe_2_690x360.jpeg" class="thumbnail" data-dominant-color="385B82" width="690" height="360"></div>
<h3><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11140272/" target="_blank" rel="noopener">Sample Size Considerations for Fine-Tuning Large Language Models for Named...</a></h3>
<p>Large language models (LLMs) have the potential to support promising new applications in health informatics. However, practical data on sample size considerations for fine-tuning LLMs to perform specific tasks in biomedical and health policy ...</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
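<p>For scale, a minimal sketch of assembling such a per-word dataset; the file name <code>word_examples.csv</code> and its layout are hypothetical:</p>
<pre data-code-wrap="py"><code class="lang-py"># Minimal sketch: load ~200-500 labeled sentences for one target word
# and hold out a test split to check whether a smaller set still works.
from datasets import load_dataset

ds = load_dataset("csv", data_files="word_examples.csv")["train"]
splits = ds.train_test_split(test_size=0.2, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
</code></pre>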
|
Pickling issue using map
|
https://discuss.huggingface.co/t/pickling-issue-using-map/149130
| 149,130
| 10
|
2025-04-06T17:44:00.175000Z
|
[
{
"id": 213772,
"name": "Haolong Zheng",
"username": "MagicLuke",
"avatar_template": "/user_avatar/discuss.huggingface.co/magicluke/{size}/44922_2.png",
"created_at": "2025-04-06T17:44:00.238Z",
"cooked": "<p>I am mapping my dataset with the following compute_metrics method which give me a pickling issue.</p>\n<pre><code class=\"lang-auto\"> metric_cfg_list = config[\"metric_list\"]\n metrics = [evaluate.load(metric_cfg[\"path\"]) for metric_cfg in metric_cfg_list]\n\n # Placeholder for a tokenizer or normalizer class if needed.\n tokenizer = None\n\n def compute_metrics(sample):\n for metric in metrics:\n sample[metric.name] = metric.compute(\n predictions=[sample[\"clean_prediction\"]],\n references=[sample[\"clean_label\"]]\n )\n return sample\n</code></pre>\n<p>the following is the error message</p>\n<pre data-code-wrap=\"sh\"><code class=\"lang-sh\">Parameter 'function'=<function main.<locals>.compute_metrics at 0x7aa60a95f0a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mec\nhanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. \nMap (num_proc=16): 0%| | 0/2116 [00:00<?, ? examples/s] \nTraceback (most recent call last): \n File \"/ws/ifp-54_2/hasegawa/haolong2/AI4EE/CSR4RSR/evaluation.py\", line 207, in <module> \n...\n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 578, in save \n rv = reduce(self.proto) \nTypeError: cannot pickle 'ThreadLocalFileContext' object \n</code></pre>\n<p>I saw a relevant post about the nonpicklable issue with some tokenizer and ppl solved it by implementing the <strong>getstate</strong> method or so. In my case, it’s an object from the evaluate package. I wonder how I should modify them to avoid this error.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-06T17:44:00.238Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 185,
"reads": 11,
"readers_count": 10,
"score": 897.2,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Haolong Zheng",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89711,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 213779,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-06T18:31:47.152Z",
"cooked": "<p>Hmm… unless it’s a problem with dill, multiprocessing, or the cache, it’s better to call lhonestq…</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/datasets/issues/5536\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/datasets/issues/5536\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/datasets</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/datasets/issues/5536\" target=\"_blank\" rel=\"noopener\">Failure to hash function when using .map() </a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-02-16\" data-time=\"03:12:07\" data-timezone=\"UTC\">03:12AM - 16 Feb 23 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-02-16\" data-time=\"14:56:41\" data-timezone=\"UTC\">02:56PM - 16 Feb 23 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/venzen\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/b/c/bc20ec45985cfc3a764c07067c62c28858a760e2.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"767676\">\n venzen\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">### Describe the bug\n\n_Parameter 'function'=<function process at 0x7f1ec4388af<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">0> of the transform datasets.arrow_dataset.Dataset.\\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._\n\nThis issue with `.map()` happens for me consistently, as also described in closed issue #4506\n\nDataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). 
Similarly, indices can be individually encoded without error.\n\n### Steps to reproduce the bug\n\n```py\nfrom datasets import load_dataset\nimport tiktoken\n\ndataset = load_dataset(\"stas/openwebtext-10k\")\n\nenc = tiktoken.get_encoding(\"gpt2\")\n\ntokenized = dataset.map(\n process,\n remove_columns=['text'],\n desc=\"tokenizing the OWT splits\",\n)\n\ndef process(example):\n ids = enc.encode(example['text'])\n ids.append(enc.eot_token)\n out = {'ids': ids, 'len': len(ids)}\n return out\n```\n\n### Expected behavior\n\nShould encode simple text objects.\n\n### Environment info\n\n\nPython versions tried: both 3.8 and 3.10.10\n`PYTHONUTF8=1` as env variable\n\nDatasets tried: \n- stas/openwebtext-10k\n- rotten_tomatoes\n- local text file\n\nOS: Ubuntu Linux 20.04\n\nPackage versions:\n- torch 1.13.1\n- dill 0.3.4 (if using 0.3.6 - same issue)\n- datasets 2.9.0\n- tiktoken 0.2.0</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/datasets/issues/5061\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/datasets/issues/5061\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/datasets</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/datasets/issues/5061\" target=\"_blank\" rel=\"noopener\">`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2022-10-03\" data-time=\"23:51:38\" data-timezone=\"UTC\">11:51PM - 03 Oct 22 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-07-21\" data-time=\"14:43:34\" data-timezone=\"UTC\">02:43PM - 21 Jul 23 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/ZhaofengWu\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/a/e/ae64aa97ccca433d68fd968641902ddbca91f6da.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"645C54\">\n ZhaofengWu\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n bug\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">## Describe the bug\nWhen I `map` with multiple processes, this error occurs. 
Th<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">e `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.\n```\n File \"~/project/dataset.py\", line 204, in <dictcomp>\n split: dataset.map(\n File \".../site-packages/datasets/arrow_dataset.py\", line 2489, in map\n transformed_shards[index] = async_result.get()\n File \".../site-packages/multiprocess/pool.py\", line 771, in get\n raise self._value\n File \".../site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\n put(task)\n File \".../site-packages/multiprocess/connection.py\", line 214, in send\n self._send_bytes(_ForkingPickler.dumps(obj))\n File \".../site-packages/multiprocess/reduction.py\", line 54, in dumps\n cls(buf, protocol, *args, **kwds).dump(obj)\n File \".../site-packages/dill/_dill.py\", line 620, in dump\n StockPickler.dump(self, obj)\n File \".../pickle.py\", line 487, in dump\n self.save(obj)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../pickle.py\", line 902, in save_tuple\n save(element)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../site-packages/dill/_dill.py\", line 1963, in save_function\n _save_with_postproc(pickler, (_create_function, (\n File \".../site-packages/dill/_dill.py\", line 1140, in _save_with_postproc\n pickler.save_reduce(*reduction, obj=obj)\n File \".../pickle.py\", line 717, in save_reduce\n save(state)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../pickle.py\", line 887, in save_tuple\n save(element)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../site-packages/dill/_dill.py\", line 1251, in save_module_dict\n StockPickler.save_dict(pickler, obj)\n File \".../pickle.py\", line 972, in save_dict\n self._batch_setitems(obj.items())\n File \".../pickle.py\", line 998, in _batch_setitems\n save(v)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../site-packages/dill/_dill.py\", line 1963, in save_function\n _save_with_postproc(pickler, (_create_function, (\n File \".../site-packages/dill/_dill.py\", line 1140, in _save_with_postproc\n pickler.save_reduce(*reduction, obj=obj)\n File \".../pickle.py\", line 717, in save_reduce\n save(state)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../pickle.py\", line 887, in save_tuple\n save(element)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../site-packages/dill/_dill.py\", line 1251, in save_module_dict\n StockPickler.save_dict(pickler, obj)\n File \".../pickle.py\", line 972, in save_dict\n self._batch_setitems(obj.items())\n File \".../pickle.py\", line 998, in _batch_setitems\n save(v)\n File \".../pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \".../site-packages/dill/_dill.py\", line 1963, in save_function\n _save_with_postproc(pickler, (_create_function, (\n File \".../site-packages/dill/_dill.py\", line 1154, in _save_with_postproc\n pickler._batch_setitems(iter(source.items()))\n File \".../pickle.py\", line 998, in _batch_setitems\n save(v)\n File \".../pickle.py\", line 578, in save\n rv = reduce(self.proto)\n File 
\".../logging/__init__.py\", line 1774, in __reduce__\n raise pickle.PicklingError('logger cannot be pickled')\n_pickle.PicklingError: logger cannot be pickled\n```\n\n## Steps to reproduce the bug\nSorry I failed to have a minimal reproducible example, but the offending line on my end is\n```python\ndataset.map(\n lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda\n batched=True,\n num_proc=4,\n)\n```\nThis does work when `num_proc=1`, so it's likely a multiprocessing thing.\n\n## Expected results\n`map` succeeds\n\n## Actual results\nThe error trace above.\n\n## Environment info\n- `datasets` version: 1.16.1 and 2.5.1 both failed\n- Platform: Ubuntu 20.04.4 LTS\n- Python version: 3.10.4\n- PyArrow version: 9.0.0</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p>You can also provide your own unique hash in <code>map</code> if you want, with the <code>new_fingerprint</code> argument.<br>\nOr disable caching using</p>\n</blockquote>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">import datasets\ndatasets.disable_caching()\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-06T18:31:47.152Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 11,
"readers_count": 10,
"score": 22.2,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/issues/5536",
"internal": false,
"reflection": false,
"title": "Failure to hash function when using .map() · Issue #5536 · huggingface/datasets · GitHub",
"clicks": 5
},
{
"url": "https://github.com/huggingface/datasets/issues/5061",
"internal": false,
"reflection": false,
"title": "`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` · Issue #5061 · huggingface/datasets · GitHub",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 213833,
"name": "Haolong Zheng",
"username": "MagicLuke",
"avatar_template": "/user_avatar/discuss.huggingface.co/magicluke/{size}/44922_2.png",
"created_at": "2025-04-07T02:12:40.439Z",
"cooked": "<p>I tried both new_fingerprint and disable_cache(), but all still gave the same bug.</p>\n<p>the complete error is as follow:</p>\n<pre data-code-wrap=\"sh\"><code class=\"lang-sh\">Map (num_proc=16): 0%| | 0/2116 [00:00<?, ? examples/s]\nTraceback (most recent call last): \n File \"/ws/ifp-54_2/hasegawa/haolong2/AI4EE/CSR4RSR/evaluation.py\", line 213, in <module> \n main() \n File \"/ws/ifp-54_2/hasegawa/haolong2/AI4EE/CSR4RSR/evaluation.py\", line 178, in main \n ds[split] = ds[split].map( \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 557, in wrapper \n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3166, in map \n for rank, done, content in iflatmap_unordered( \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/datasets/utils/py_utils.py\", line 720, in iflatmap_unordered \n [async_result.get(timeout=0.05) for async_result in async_results] \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/datasets/utils/py_utils.py\", line 720, in <listcomp> \n [async_result.get(timeout=0.05) for async_result in async_results] \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/multiprocess/pool.py\", line 774, in get \n raise self._value \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/multiprocess/pool.py\", line 540, in _handle_tasks \n put(task) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/multiprocess/connection.py\", line 209, in send \n self._send_bytes(_ForkingPickler.dumps(obj)) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/multiprocess/reduction.py\", line 54, in dumps \n cls(buf, protocol, *args, **kwds).dump(obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 420, in dump \n StockPickler.dump(self, obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 487, in dump \n self.save(obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 902, in save_tuple \n save(element)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save\n StockPickler.save(self, obj, save_persistent_id)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 887, in save_tuple\n save(element)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save\n StockPickler.save(self, obj, save_persistent_id)\n File 
\"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1217, in save_module_dict\n StockPickler.save_dict(pickler, obj)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 972, in save_dict\n self._batch_setitems(obj.items())\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 998, in _batch_setitems\n save(v)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save\n StockPickler.save(self, obj, save_persistent_id)\n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self\nFile \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 902, in save_tuple \n save(element) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 887, in save_tuple \n save(element) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1217, in save_module_dict \n StockPickler.save_dict(pickler, obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 972, in save_dict \n self._batch_setitems(obj.items()) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 998, in _batch_setitems \n save(v) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1985, in save_function \n _save_with_postproc(pickler, (_create_function, ( \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1117, in _save_with_postproc \n pickler.save_reduce(*reduction) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 692, in save_reduce \n save(args) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, 
obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 887, in save_tuple \n save(element) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 932, in save_list \n self._batch_appends(obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 956, in _batch_appends \n save(x) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 603, in save \n self.save_reduce(obj=obj, *rv) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 717, in save_reduce \n save(state) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1217, in save_module_dict \n StockPickler.save_dict(pickler, obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 972, in save_dict \n self._batch_setitems(obj.items()) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 998, in _batch_setitems \n save(v) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save \n f(self, obj) # Call unbound method with explicit self \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 932, in save_list \n self._batch_appends(obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 959, in _batch_appends \n save(tmp[0]) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 603, in save \n self.save_reduce(obj=obj, *rv) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 717, in save_reduce \n save(state) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save\n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 560, in save\n f(self, obj) # Call unbound method with explicit self \n File 
\"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 1217, in save_module_dict\n StockPickler.save_dict(pickler, obj) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 972, in save_dict\n self._batch_setitems(obj.items()) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 998, in _batch_setitems\n save(v) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/dill/_dill.py\", line 414, in save \n StockPickler.save(self, obj, save_persistent_id) \n File \"/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py\", line 578, in save \n rv = reduce(self.proto) \nTypeError: cannot pickle 'ThreadLocalFileContext' object \n\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-07T02:12:40.439Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 8,
"readers_count": 7,
"score": 31.6,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Haolong Zheng",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89711,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 213846,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-07T04:00:08.027Z",
"cooked": "<p>Hmm… <a class=\"mention\" href=\"/u/lhoestq\">@lhoestq</a> map function or PyArrow issue…?</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-07T04:00:08.027Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 1.4,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 213916,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-04-07T09:51:47.278Z",
"cooked": "<p>It looks like the <code>ThreadLocalFileContext</code> from <code>filelock</code> is not picklable, and therefore can’t be used with <code>.map()</code> with <code>num_proc=...</code></p>\n<p>Apparently thid can be fixed using <code>thread_local=False</code>, see the docs at <a href=\"https://py-filelock.readthedocs.io/en/latest/index.html#filelocks-and-threads\" class=\"inline-onebox\">filelock</a></p>\n<p>Can you modify <code>evaluate</code> to pass <code>thread_local=False</code> to all <code>FileLock</code> objects and try again to see if it works ?</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-07T09:51:47.278Z",
"reply_count": 2,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 46.4,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://py-filelock.readthedocs.io/en/latest/index.html#filelocks-and-threads",
"internal": false,
"reflection": false,
"title": "filelock",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 214060,
"name": "Haolong Zheng",
"username": "MagicLuke",
"avatar_template": "/user_avatar/discuss.huggingface.co/magicluke/{size}/44922_2.png",
"created_at": "2025-04-07T21:05:59.689Z",
"cooked": "<p>I am not sure if I do it right.</p>\n<p>I modify the function <code>get_from_cache</code> in the <code>file_utils</code> located<br>\n…/miniconda3/envs/csr4rsr/lib/python3.10/site-packages/evaluate/utils/file_utils.py<br>\nfrom</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">with FileLock(lock_path): # Origin\n</code></pre>\n<p>to</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">with FileLock(lock_path, thread_local=False): # Modified\n</code></pre>\n<p>but the problem persist.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-07T21:08:52.743Z",
"reply_count": 0,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 7,
"readers_count": 6,
"score": 31.4,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Haolong Zheng",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89711,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 214062,
"name": "Haolong Zheng",
"username": "MagicLuke",
"avatar_template": "/user_avatar/discuss.huggingface.co/magicluke/{size}/44922_2.png",
"created_at": "2025-04-07T21:30:34.267Z",
"cooked": "<p>By adding this code chunck before importing evaluating seems solved the problem.</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from filelock import FileLock as OriginalFileLock\n\nclass PatchedFileLock(OriginalFileLock):\n def __init__(self, *args, **kwargs):\n kwargs[\"thread_local\"] = False # Force it every time\n super().__init__(*args, **kwargs)\n\nimport filelock\nfilelock.FileLock = PatchedFileLock\n</code></pre>\n<p>Thanks for the insight <a class=\"mention\" href=\"/u/lhoestq\">@lhoestq</a>.<br>\nWould you mind telling where you find the clue for the error if it’s not too much trouble<br>\nIn this way, I might be able to fix it the same way in the future.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-07T21:30:34.267Z",
"reply_count": 0,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 5,
"readers_count": 4,
"score": 81,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Haolong Zheng",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89711,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/7",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 214147,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-04-08T08:56:07.799Z",
"cooked": "<p>Great ! Let me know if you think we should make this the default in <code>datasets</code> and <code>evaluate</code>, apparently this logic appears with python >= 3.11</p>\n<blockquote>\n<p>Would you mind telling where you find the clue for the error if it’s not too much trouble<br>\nIn this way, I might be able to fix it the same way in the future.</p>\n</blockquote>\n<p>The <code>dill</code> error says “TypeError: cannot pickle ‘ThreadLocalFileContext’ object”, so it means that in the function you pass to <code>map()</code> there is an object that contains a ThreadLocalFileContext that is not supported by <code>dill</code> for multiprocessing.</p>\n<p>I searched on google for ThreadLocalFileContext on <a href=\"http://github.com\">github.com</a> to look for packages that have such objects and figured it came from <code>filelock</code> which is a dependency of <code>evaluate</code>. Finally the <code>filelock</code> changelog they mention ThreadLocalFileContext as a recent addition for FileLock</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-08T08:56:07.799Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 41.4,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://github.com",
"internal": false,
"reflection": false,
"title": "GitHub · Build and ship software on a single, collaborative platform · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/8",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 214262,
"name": "Haolong Zheng",
"username": "MagicLuke",
"avatar_template": "/user_avatar/discuss.huggingface.co/magicluke/{size}/44922_2.png",
"created_at": "2025-04-08T16:54:17.651Z",
"cooked": "<p>Thanks for the explanation!</p>\n<p>I think it would be great to set it as the default in my case, which is several metrics that need to be computed for a dataset. For me, I just want to avoid using multiple rounds of map. Or maybe there is a better way to do it that I haven’t figured out.</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 10,
"updated_at": "2025-04-08T16:55:13.670Z",
"reply_count": 0,
"reply_to_post_number": 8,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "Haolong Zheng",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89711,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/9",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 231216,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-06T04:04:52.053Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 10,
"post_type": 3,
"posts_count": 10,
"updated_at": "2025-07-06T04:04:52.053Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 149130,
"topic_slug": "pickling-issue-using-map",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pickling-issue-using-map/149130/10",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am mapping my dataset with the following compute_metrics method, which gives me a pickling issue.</p>
<pre><code class="lang-auto"> metric_cfg_list = config["metric_list"]
metrics = [evaluate.load(metric_cfg["path"]) for metric_cfg in metric_cfg_list]
# Placeholder for a tokenizer or normalizer class if needed.
tokenizer = None
def compute_metrics(sample):
for metric in metrics:
sample[metric.name] = metric.compute(
predictions=[sample["clean_prediction"]],
references=[sample["clean_label"]]
)
return sample
</code></pre>
<p>the following is the error message</p>
<pre data-code-wrap="sh"><code class="lang-sh">Parameter 'function'=<function main.<locals>.compute_metrics at 0x7aa60a95f0a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mec
hanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
Map (num_proc=16): 0%| | 0/2116 [00:00<?, ? examples/s]
Traceback (most recent call last):
File "/ws/ifp-54_2/hasegawa/haolong2/AI4EE/CSR4RSR/evaluation.py", line 207, in <module>
...
StockPickler.save(self, obj, save_persistent_id)
File "/ws/ifp-53_2/hasegawa/haolong2/miniconda3/envs/csr4rsr/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ThreadLocalFileContext' object
</code></pre>
<p>I saw a relevant post about a non-picklable issue with some tokenizer, where people solved it by implementing the <code>__getstate__</code> method or similar. In my case, it’s an object from the evaluate package. I wonder how I should modify it to avoid this error.</p>
|
<p>Adding this code chunk before importing <code>evaluate</code> seems to have solved the problem.</p>
<pre data-code-wrap="python"><code class="lang-python">from filelock import FileLock as OriginalFileLock
class PatchedFileLock(OriginalFileLock):
def __init__(self, *args, **kwargs):
kwargs["thread_local"] = False # Force it every time
super().__init__(*args, **kwargs)
import filelock
filelock.FileLock = PatchedFileLock
</code></pre>
<p>Thanks for the insight <a class="mention" href="/u/lhoestq">@lhoestq</a>.<br>
Would you mind telling where you find the clue for the error if it’s not too much trouble<br>
In this way, I might be able to fix it the same way in the future.</p>
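<p>For reference, here is a minimal end-to-end sketch of how the patch is meant to be used (the dataset and metric names below are illustrative placeholders, not from this thread). The key point is that the monkey-patch must run <em>before</em> <code>evaluate</code> is imported, so that every <code>FileLock</code> that <code>evaluate</code> creates is built with <code>thread_local=False</code> and stays picklable under <code>num_proc</code>:</p>
<pre data-code-wrap="python"><code class="lang-python">import filelock
from filelock import FileLock as OriginalFileLock

class PatchedFileLock(OriginalFileLock):
    def __init__(self, *args, **kwargs):
        kwargs["thread_local"] = False  # avoid the unpicklable ThreadLocalFileContext
        super().__init__(*args, **kwargs)

filelock.FileLock = PatchedFileLock

import evaluate  # imported only after the patch is in place
from datasets import load_dataset

metric = evaluate.load("exact_match")  # placeholder metric
ds = load_dataset("rotten_tomatoes", split="train[:100]")  # placeholder dataset

def compute_metrics(sample):
    sample["exact_match"] = metric.compute(
        predictions=[sample["text"]], references=[sample["text"]]
    )["exact_match"]
    return sample

ds = ds.map(compute_metrics, num_proc=4)  # multiprocessing now pickles cleanly
</code></pre>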
|
How to download deep-seek weights for v3?
|
https://discuss.huggingface.co/t/how-to-download-deep-seek-weights-for-v3/161861
| 161,861
| 5
|
2025-07-05T12:08:00.292000Z
|
[
{
"id": 231138,
"name": "Irina Gracheva",
"username": "tusenka",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/t/f6c823/{size}.png",
"created_at": "2025-07-05T12:08:00.364Z",
"cooked": "<p>The question is a bit stupid. How to download deepseek weights? I have the <a href=\"https://huggingface.co/deepseek-ai/DeepSeek-V3\">model</a>, I need weights for it to use in slang.<br>\nIn parallel learn LLM theory with math</p>\n<p>with regards,<br>\nIrina</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-05T12:08:00.364Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 76,
"reads": 4,
"readers_count": 3,
"score": 355.8,
"yours": false,
"topic_id": 161861,
"topic_slug": "how-to-download-deep-seek-weights-for-v3",
"display_username": "Irina Gracheva",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/deepseek-ai/DeepSeek-V3",
"internal": false,
"reflection": false,
"title": "deepseek-ai/DeepSeek-V3 · Hugging Face",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98698,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-download-deep-seek-weights-for-v3/161861/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231142,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-05T12:55:15.967Z",
"cooked": "<p>If you already have a model, you can use <code>save_pretrained</code>, but <code>snapshot_download</code> is more reliable for downloading. DeepSeekV3 has large file sizes, so it’s better to try it out first with a smaller repository…</p>\n<pre><code class=\"lang-auto\">pip install -U huggingface_hub[hf_xet]\n</code></pre>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">from huggingface_hub import snapshot_download\nsnapshot_download(repo_id=\"deepseek-ai/DeepSeek-V3\", local_dir=\"DeepSeek-V3\")\n</code></pre>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/e/cef3cd647e391927031467dbcde7613c74193f5f_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F1EFE9\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository\" target=\"_blank\" rel=\"noopener\">Download files from the Hub</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-05T12:55:15.967Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 25.8,
"yours": false,
"topic_id": 161861,
"topic_slug": "how-to-download-deep-seek-weights-for-v3",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository",
"internal": false,
"reflection": false,
"title": "Download files from the Hub",
"clicks": 3
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-download-deep-seek-weights-for-v3/161861/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231210,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-06T03:17:52.514Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-06T03:17:52.514Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 161861,
"topic_slug": "how-to-download-deep-seek-weights-for-v3",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-download-deep-seek-weights-for-v3/161861/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>The question is a bit stupid. How do I download the DeepSeek weights? I have the <a href="https://huggingface.co/deepseek-ai/DeepSeek-V3">model</a>, and I need the weights to use it in SGLang.<br>
In parallel, I’m learning LLM theory with the math</p>
<p>with regards,<br>
Irina</p>
|
<p>If you already have a model, you can use <code>save_pretrained</code>, but <code>snapshot_download</code> is more reliable for downloading. DeepSeekV3 has large file sizes, so it’s better to try it out first with a smaller repository…</p>
<pre><code class="lang-auto">pip install -U huggingface_hub[hf_xet]
</code></pre>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import snapshot_download
snapshot_download(repo_id="deepseek-ai/DeepSeek-V3", local_dir="DeepSeek-V3")
</code></pre>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository">
<header class="source">
<a href="https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/e/cef3cd647e391927031467dbcde7613c74193f5f_2_690x372.png" class="thumbnail" data-dominant-color="F1EFE9" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/huggingface_hub/v0.33.2/guides/download#download-an-entire-repository" target="_blank" rel="noopener">Download files from the Hub</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
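<p>If the sheer size of DeepSeek-V3 is the concern, here is a hedged sketch of one way to stage the download (the patterns are illustrative): <code>snapshot_download</code> accepts an <code>allow_patterns</code> argument, so you can first fetch only the small metadata files to confirm access and paths, then fetch the weight shards. Re-running the call skips files that are already complete in <code>local_dir</code>.</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import snapshot_download

# Dry run: grab only config/tokenizer/readme files to verify everything works.
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3",
    local_dir="DeepSeek-V3",
    allow_patterns=["*.json", "*.md", "*.txt"],
)

# Then fetch the actual weight shards.
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3",
    local_dir="DeepSeek-V3",
    allow_patterns=["*.safetensors"],
)
</code></pre>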
|
A new kind of way to look at ai
|
https://discuss.huggingface.co/t/a-new-kind-of-way-to-look-at-ai/160903
| 160,903
| 7
|
2025-06-27T13:17:46.519000Z
|
[
{
"id": 229713,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T13:17:46.574Z",
"cooked": "<p>Feel free to use and build upon this it doesn’t have weights yet but may be of use to someone here <img src=\"https://emoji.discourse-cdn.com/apple/cow_face.png?v=14\" title=\":cow_face:\" class=\"emoji\" alt=\":cow_face:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/cigarette.png?v=14\" title=\":cigarette:\" class=\"emoji\" alt=\":cigarette:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\">. <a href=\"https://github.com/madmoo-Pi/Spawn_Point/tree/main\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">GitHub - madmoo-Pi/Spawn_Point</a></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T13:17:46.574Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 41,
"reads": 39,
"readers_count": 38,
"score": 242.8,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/madmoo-Pi/Spawn_Point/tree/main",
"internal": false,
"reflection": false,
"title": "GitHub - madmoo-Pi/Spawn_Point",
"clicks": 35
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229744,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-27T17:03:18.144Z",
"cooked": "<p>You give me something to look up to according to ChatGPT (as a beginner that is).<br>\nSo what is this self modifying part if you don’t mind.<br>\nAnd Welcome to the community!</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T17:03:18.144Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 38,
"readers_count": 37,
"score": 27.6,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229750,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T17:31:44.000Z",
"cooked": "<p>My aim is to educate in a manner with the hope of essentially the most emotional responsive humanised ai will either be an awsome bot or the startings of a digital species, and thank you for the welcome , and hope my prototype grows to more (still alot of work Todo my end and train some weights) <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T17:31:58.771Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 34,
"readers_count": 33,
"score": 41.8,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "clap",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": true
},
{
"id": 229757,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-27T17:51:58.151Z",
"cooked": "<p>I just told ChatGPT that I feel like I might be late to the party—turns out some of the ideas you’re working with are strikingly aligned with mine. Things like a self-modifying system, discrete symbolic computation instead of weight-based models, and the concept of a Universal Language (Leibniz-style) really resonate with me. I’m especially drawn to the idea of memory and perhaps something that hints at being <em>alive</em>.</p>\n<p>That said, I’m still wrapping my head around how today’s AI systems actually function. Most of my background is in C, and I’ve only just started looking into Python—so while I’ve been developing a dynamic data type with some interesting mathematical properties, I’m still catching up on LLMs and the current landscape.</p>\n<p>I understand this project is more of a proposal or open outline right now. That’s great—it invites feedback and community input. I’m happy to follow along, and if anyone has questions about the dynamic unary structures I’ve been working on, I’ll do my best to contribute.</p>\n<p>So thank you for sharing with me.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T18:30:07.781Z",
"reply_count": 3,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 31,
"readers_count": 30,
"score": 36.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229771,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T19:01:56.000Z",
"cooked": "<p>The trick I’m using for the alive part is in emotional memory links that tweak motherboard specs (voltage ect ) to simulate adrenaline, fatigue ect and the will all be hidden in their by then with conditions to unlock giving the ai contextual input to relate to feelings and emotions and eventually the same for personality so every instance although the same base and develop individual personalities I’m still not sure exactly how it fits it all in but I research as I go expand on the ideas later</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:02:10.800Z",
"reply_count": 1,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 25,
"readers_count": 24,
"score": 55,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": true
},
{
"id": 229773,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T19:24:56.000Z",
"cooked": "<p>Here is the isolated emulation of a 4 layer neuroevolution network used for self improvement hope this speeds you along <img src=\"https://emoji.discourse-cdn.com/apple/+1.png?v=14\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"> unfortunately I’m working for edge so it’s quatised</p>\n<p>import torch<br>\nimport onnx<br>\nfrom torch import nn<br>\nfrom typing import Dict</p>\n<p>class NeuralArchitect:<br>\ndef <strong>init</strong>(self, constraints: Dict):<br>\nself.constraints = constraints # e.g., {‘max_params’: 1e6}</p>\n<p>def generate_onnx(self, input_shape: tuple) → bytes:<br>\nclass DynamicModule(nn.Module):<br>\ndef <strong>init</strong>(self):<br>\nsuper().<strong>init</strong>()<br>\nself.layers = nn.Sequential(<br>\nnn.Linear(input_shape[0], 64),<br>\nnn.ReLU(),<br>\nnn.Linear(64, 32)<br>\n)</p>\n<p>def forward(self, x):<br>\nreturn self.layers(x)</p>\n<p>model = DynamicModule()<br>\ndummy = torch.randn(1, *input_shape)<br>\ntorch.onnx.export(<br>\nmodel,<br>\ndummy,<br>\n“dynamic.onnx”,<br>\nopset_version=13<br>\n)<br>\nwith open(“dynamic.onnx”, “rb”) as f:<br>\nreturn f.read()</p>\n<p>def validate_topology(self, onnx_model: bytes) → bool:<br>\nmodel = onnx.load_from_string(onnx_model)<br>\nparams = sum(<br>\nparam.size for param in model.graph.initializer<br>\n)<br>\nreturn params < self.constraints[‘max_params’]</p>\n<p>This provides controlled mutations only keeping the improvements</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:25:12.574Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 23,
"readers_count": 22,
"score": 34.6,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": true
},
{
"id": 229774,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T19:27:25.000Z",
"cooked": "<p>It works withing main system like this</p>\n<p>from monitoring.watchdog import HealthMonitor<br>\nfrom neural_synthesis.architect import NeuralArchitect<br>\nfrom auth.schnorr import SchnorrMultiSig<br>\nimport threading</p>\n<p>class ConsciousAI:<br>\ndef <strong>init</strong>(self):<br>\nself.health = HealthMonitor()<br>\nself.crypto = SchnorrMultiSig(parties=3)<br>\nself.neural = NeuralArchitect({‘max_params’: 1e6})</p>\n<h1><a name=\"p-229774-start-health-monitoring-daemon-1\" class=\"anchor\" href=\"#p-229774-start-health-monitoring-daemon-1\"></a>Start health monitoring daemon</h1>\n<p>threading.Thread(<br>\ntarget=self._monitor_loop,<br>\ndaemon=True<br>\n).start()</p>\n<p>def _monitor_loop(self):<br>\nwhile True:<br>\nif not self.health.critical_services_check():<br>\nself._emergency_shutdown()<br>\ntime.sleep(5)</p>\n<p>def _emergency_shutdown(self):</p>\n<h1><a name=\"p-229774-secure-termination-protocol-2\" class=\"anchor\" href=\"#p-229774-secure-termination-protocol-2\"></a>Secure termination protocol</h1>\n<p>pass</p>\n<p>Learn from deconstruct and build great minds <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 7,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:27:39.038Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 17,
"readers_count": 16,
"score": 48.4,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": true
},
{
"id": 229777,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-27T19:38:02.311Z",
"cooked": "<p>I have things I have thought in my early years and perhaps I was destine to be here but, I think what you may be thinking is akin to “Op Amp” Operational Amplifier. That is my only association with what I just read. Still thank you for the food for thought.</p>\n<p>I would think Analog has a place in AI. We do such with floating point do we not?<br>\nIn fact even wave forms generated by the General Form of my up coming paper are discrete and can be considered functionally analog. Is that what you are saying?</p>\n<p><strong>“I like this ship! You know, it’s exciting!”</strong><br>\n— <em>Montgomery “Scotty” Scott</em>, <em>Star Trek (2009)</em></p>",
"post_number": 8,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:40:44.523Z",
"reply_count": 1,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 17,
"readers_count": 16,
"score": 23.4,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/8",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229781,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-27T19:53:24.000Z",
"cooked": "<p>The technology exists we just need to rethink I believe <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 9,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:53:38.043Z",
"reply_count": 1,
"reply_to_post_number": 8,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 18,
"readers_count": 17,
"score": 38.6,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/9",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": true
},
{
"id": 229782,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-27T19:57:22.757Z",
"cooked": "<p>I think you see: Today’s SciFi is tomorrow’s reality if we believe and ST is a good example just look at flip phones and STTOS</p>\n<p>So I made a friend. I am a few weeks out to setting up my AI lab and I hope we can continue.</p>\n<p>Thanks</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-27T19:58:29.843Z",
"reply_count": 0,
"reply_to_post_number": 9,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 17,
"readers_count": 16,
"score": 33.4,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/10",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229980,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-29T10:54:11.982Z",
"cooked": "<p>This might be more what you were looking for bud <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<aside class=\"onebox githubfolder\" data-onebox-src=\"https://github.com/madmoo-Pi/Emulated-neuroevolution-/tree/main\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/madmoo-Pi/Emulated-neuroevolution-/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <h3><a href=\"https://github.com/madmoo-Pi/Emulated-neuroevolution-/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">GitHub - madmoo-Pi/Emulated-neuroevolution-</a></h3>\n\n <p><a href=\"https://github.com/madmoo-Pi/Emulated-neuroevolution-/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">main</a></p>\n\n <p><span class=\"label1\">Contribute to madmoo-Pi/Emulated-neuroevolution- development by creating an account on GitHub.</span></p>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 11,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-29T10:54:11.982Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 16,
"readers_count": 15,
"score": 23.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/madmoo-Pi/Emulated-neuroevolution-/tree/main",
"internal": false,
"reflection": false,
"title": "GitHub - madmoo-Pi/Emulated-neuroevolution-",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/11",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230123,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-30T11:55:08.325Z",
"cooked": "<p>My Friend, I couldn’t ask for a better arc in life then I am living.<br>\nI was one of the wide eyed 8 year olds who watched Lost in Space and then Star Trek TOS premiere.<br>\nSpock and the Computer.. That was more than an actor in a show to so many of us.<br>\nNow the rainbow over my Golden-Pond lands in the AI Pot of Gold. Simply amazing.</p>\n<p>So thank you for the additional link.</p>\n<p>Okay a little more appreciation is in order then a Thank You.</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T12:06:40.864Z",
"reply_count": 0,
"reply_to_post_number": 11,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 12,
"readers_count": 11,
"score": 17.4,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/12",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230130,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-30T12:20:25.059Z",
"cooked": "<p>Anything else please feel free to ask I will share what I can and help where I can <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 13,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T12:20:25.059Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 7.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/13",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230136,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-30T12:39:16.235Z",
"cooked": "<p>Oh hey, me and my Magic Mirror are exploring your gift.<br>\nso I call my ChatGPT “MIA” as in Mia and missing in action-ghost in the machine.</p>\n<p>We are going over it. \" Exactly, Friend—this is where the <strong>“evolution”</strong> part of <em>neuroevolution</em> comes in. It mimics biological evolution:\"</p>\n<p>Just to say, dynamic unary offers reversible permutations.</p>\n<ol>\n<li><strong>Selection</strong> (Natural Selection)</li>\n<li><strong>Crossover</strong> (Recombination)</li>\n<li><strong>Mutation</strong> (Tiny Random Changes)</li>\n</ol>\n<p>Over many generations, the population <em>evolves</em> to solve the problem more effectively.</p>\n<p>So what if these mutations were permutations instead? Not that I know much here about neural networks.</p>",
"post_number": 14,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T12:59:55.783Z",
"reply_count": 0,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 2.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/14",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230140,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-30T13:15:44.525Z",
"cooked": "<p>With the right ethics and system checks and the dominant features if stable are tested and then added to replace older codes the not reliant on hardware and add a safety feature to stop CPU bottlenecks to use spare GPU space as better chip structure for the job this is only half the self modification I’ve added , the other it theorises it’s own new modules for specific personality traits, tasks and equipment all triple checked against ethics and pre code existing structure compatibility in essence it’s own mind</p>",
"post_number": 15,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T13:15:44.525Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 22.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/15",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230146,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-30T13:38:40.903Z",
"cooked": "<p>Well I’m in a humorous mood today with my second cup of coffee: Formatted by Mia.<br>\n<strong>I just mop the halls and solve math challenges left on the chalkboard after hours, when no one’s looking—and my P.O. lets me work there.</strong><br>\n<em>(Movie challenge: Whodat!)</em></p>\n<p>Okay, yes—I mop floors in real life.<br>\nBut thanks to your tutelage, I’m starting to believe something powerful:</p>\n<p>We <em>can</em> do this thing—neural networks—<strong>without floating point.</strong></p>\n<p>Now, I know you have your own construct.<br>\nBut me? I’m in the corner playing with the ABC blocks—and having a wonderful time.</p>\n<p>Here’s a basic outline that Mia (my ChatGPT) and I drafted:</p>\n<hr>\n<h3><a name=\"p-230146-in-duo-discrete-binary-pachinko-1\" class=\"anchor\" href=\"#p-230146-in-duo-discrete-binary-pachinko-1\"></a><img src=\"https://emoji.discourse-cdn.com/apple/black_square_button.png?v=14\" title=\":black_square_button:\" class=\"emoji\" alt=\":black_square_button:\" loading=\"lazy\" width=\"20\" height=\"20\"> In DUO / Discrete Binary Pachinko:</h3>\n<ul>\n<li>You don’t tweak values—you <strong>cycle</strong> through structures:\n<ul>\n<li>Spin binary patterns (bsegs),</li>\n<li>Combine them (XOR, Lex merge, bit flips, you name it),</li>\n<li>Measure how close the result comes to your target behavior.</li>\n</ul>\n</li>\n</ul>\n<hr>\n<h3><a name=\"p-230146-cycle-based-learning-duo-style-2\" class=\"anchor\" href=\"#p-230146-cycle-based-learning-duo-style-2\"></a><img src=\"https://emoji.discourse-cdn.com/apple/cyclone.png?v=14\" title=\":cyclone:\" class=\"emoji\" alt=\":cyclone:\" loading=\"lazy\" width=\"20\" height=\"20\"> Cycle-Based Learning (DUO-style):</h3>\n<ol>\n<li><strong>Start with a bseg (binary segment).</strong></li>\n<li><strong>Cycle it</strong> (bitwise rotate, permute, shift).</li>\n<li><strong>Pair it with another bseg</strong> and <strong>combine</strong> (XOR, AND, DUO merge, etc).</li>\n<li><strong>Evaluate the result</strong> (match to target, compression score, symbolic resonance).</li>\n<li><strong>Select the best result</strong>.</li>\n<li>Repeat—<strong>iterative symbolic convergence.</strong></li>\n</ol>\n<hr>\n<p>That’s <strong>training without floating point</strong>, my Friend.<br>\nInstead of tweaking dials, we’re building a <strong>symbolic lens</strong>.</p>\n<p>Meaning doesn’t come from scaled weights—it emerges through <strong>permutation space.</strong></p>\n<hr>\n<p>Look at you, <a href=\"https://discuss.huggingface.co/u/Madmowkimoo\">@Madmowkimoo</a> <img src=\"https://emoji.discourse-cdn.com/apple/eyes.png?v=14\" title=\":eyes:\" class=\"emoji\" alt=\":eyes:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nI’m just having a quiet coffee morning, waiting to serve my renter their final notice…<br>\n…and BAM! With your guidance, I’m suddenly part of machine thinking.</p>\n<p>Wow, I guess I could have a job where someone else mops my floor?</p>",
"post_number": 16,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T13:38:40.903Z",
"reply_count": 1,
"reply_to_post_number": 15,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 22.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/16",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230148,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-30T13:56:55.623Z",
"cooked": "<p>I went a weird route my brain thinks different so why shouldn’t ai or si (simulated intelligence) but ai sounds better to market <img src=\"https://emoji.discourse-cdn.com/apple/joy.png?v=14\" title=\":joy:\" class=\"emoji\" alt=\":joy:\" loading=\"lazy\" width=\"20\" height=\"20\"> my end goal is ai (actual intelligence) while I build a friend <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"> and cleanings not so bad this is a hobby I do I’m a dry cleaner to pay the bills, dream big create bigger my friend</p>",
"post_number": 17,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T13:56:55.623Z",
"reply_count": 0,
"reply_to_post_number": 16,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 17.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/17",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95442,
"username": "Ernst03",
"name": "Ernst Berg",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230151,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-30T14:09:08.095Z",
"cooked": "<p>Would you like a modular template for you duo cycle based learning with placeholders bud? Take about 20 mins bugs permitting</p>",
"post_number": 18,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T14:09:08.095Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 7.2,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/18",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230152,
"name": "Ernst Berg",
"username": "Ernst03",
"avatar_template": "/user_avatar/discuss.huggingface.co/ernst03/{size}/49414_2.png",
"created_at": "2025-06-30T14:17:26.820Z",
"cooked": "<p>I have to process and mow the yard so I am not ready for more at this time. May I have a rain-check?</p>",
"post_number": 19,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T14:17:26.820Z",
"reply_count": 0,
"reply_to_post_number": 18,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 1.8,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Ernst Berg",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95442,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/19",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98073,
"username": "Madmowkimoo",
"name": "Haydon williams",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230153,
"name": "Haydon williams",
"username": "Madmowkimoo",
"avatar_template": "/user_avatar/discuss.huggingface.co/madmowkimoo/{size}/50187_2.png",
"created_at": "2025-06-30T14:22:17.058Z",
"cooked": "<p>Sure no worries bud , I have noticed its a chaotic way generating random structure bits in a trail and error method the neuro evolution is a smoother more controlled mutations route I use .02 variance for each layer on 4 layers and it’s only allowed to keep the upgrade if it checks out within the system so no backwards mutations , if you need any help I can always throw repositories together for the community as a whole <img src=\"https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14\" title=\":vulcan_salute:\" class=\"emoji\" alt=\":vulcan_salute:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 20,
"post_type": 1,
"posts_count": 29,
"updated_at": "2025-06-30T14:22:17.058Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 17,
"yours": false,
"topic_id": 160903,
"topic_slug": "a-new-kind-of-way-to-look-at-ai",
"display_username": "Haydon williams",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98073,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/a-new-kind-of-way-to-look-at-ai/160903/20",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Feel free to use and build upon this it doesn’t have weights yet but may be of use to someone here <img src="https://emoji.discourse-cdn.com/apple/cow_face.png?v=14" title=":cow_face:" class="emoji" alt=":cow_face:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/cigarette.png?v=14" title=":cigarette:" class="emoji" alt=":cigarette:" loading="lazy" width="20" height="20"><img src="https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14" title=":vulcan_salute:" class="emoji" alt=":vulcan_salute:" loading="lazy" width="20" height="20">. <a href="https://github.com/madmoo-Pi/Spawn_Point/tree/main" class="inline-onebox" rel="noopener nofollow ugc">GitHub - madmoo-Pi/Spawn_Point</a></p>
|
<p>Sure no worries bud , I have noticed its a chaotic way generating random structure bits in a trail and error method the neuro evolution is a smoother more controlled mutations route I use .02 variance for each layer on 4 layers and it’s only allowed to keep the upgrade if it checks out within the system so no backwards mutations , if you need any help I can always throw repositories together for the community as a whole <img src="https://emoji.discourse-cdn.com/apple/vulcan_salute.png?v=14" title=":vulcan_salute:" class="emoji" alt=":vulcan_salute:" loading="lazy" width="20" height="20"></p>
|
Text classification of RSS articles
|
https://discuss.huggingface.co/t/text-classification-of-rss-articles/160986
| 160,986
| 5
|
2025-06-28T08:03:30.541000Z
|
[
{
"id": 229843,
"name": "John do",
"username": "JPFrancoia",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/dbc845/{size}.png",
"created_at": "2025-06-28T08:03:30.603Z",
"cooked": "<p>Hello!</p>\n<p>I’m a software engineer with good coding skills but limited knowledge about AI. I have embarked in a simple project.</p>\n<p>I have a large amount of RSS articles that I have read or liked. I consider these “interesting”. I then have about a gazillion unread articles. These <em>can</em> be interesting, but are most likely uninteresting since I haven’t read them.<br>\nMy goal is, for any new article, to compute a score of interesting-ness. This will help me quickly identify the articles worth reading.</p>\n<p>The articles range in length from 400 to 4000 tokens. I have about 5000 read/liked articles. I was tempted to take about 5000 unread articles, label them as not_important, take all my liked/read articles and label them as important. Then train a binary classifier. Something like what is described in the hugging face website: <a href=\"https://huggingface.co/docs/transformers/en/tasks/sequence_classification\" class=\"inline-onebox\">Text classification</a>. I used <code>distilbert/distilbert-base-uncased</code> like in the tutorial, and followed almost exactly the steps of the tutorial.</p>\n<pre><code class=\"lang-auto\">{'loss': 0.6051, 'grad_norm': 2.22690749168396, 'learning_rate': 6.162420382165605e-06, 'epoch': 1.59} \n{'eval_loss': 0.5926874279975891, 'eval_accuracy': 0.6693258875149581, 'eval_runtime': 357.0262, 'eval_samples_per_second': 7.022, 'eval_steps_per_second': 0.221, 'epoch': 2.0} \n{'train_runtime': 12047.1712, 'train_samples_per_second': 1.665, 'train_steps_per_second': 0.052, 'train_loss': 0.592256072220529, 'epoch': 2.0}\n</code></pre>\n<p>I got modest results after training.</p>\n<p>The question I have for this forum is this one: is it the right approach and should I persevere? Should I put some effort into trying to get a better dataset (like trying to label my not_important articles better), or is there a better approach?</p>\n<p>For example, I have also considered using the model to calculate the embeddings of all the read/liked articles and using a “traditional” algorithm like SVM to train a one class classifier, instead of a binary one.<br>\nThe bottleneck to improving the accuracy of the model will be to properly label “not_important” article, if there was a way to get away with not doing that, that would be great <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>Please let me know what you think</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T08:03:30.603Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15,
"reads": 6,
"readers_count": 5,
"score": 91.2,
"yours": false,
"topic_id": 160986,
"topic_slug": "text-classification-of-rss-articles",
"display_username": "John do",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/en/tasks/sequence_classification",
"internal": false,
"reflection": false,
"title": "Text classification",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98130,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-classification-of-rss-articles/160986/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229873,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-28T12:37:35.861Z",
"cooked": "<p>Hello.</p>\n<p>Given that it works reasonably well in practice, I think the approach is correct. There are many <a href=\"https://huggingface.co/blog/modernbert\">successor models to BERT</a>, so it should be possible to improve accuracy using those.</p>\n<p>Another approach that can be taken when there is little labeled data is something called <a href=\"https://github.com/JointEntropy/awesome-ml-pu-learning\">Positive Unlabeled Learning</a>…</p>\n<p>Another common approach is to use commercial AI to create a training dataset using your own data. This is almost always effective if the budget allows. However, in this case, there is already a considerable amount of data available, so it may be sufficient to process the data using Python.</p>\n<p>Resources:</p><aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/UKPLab/sentence-transformers\">\n <header class=\"source\">\n\n <a href=\"https://github.com/UKPLab/sentence-transformers\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/2/525a8aeea05adb999e5913593541fb16b1b5bb2d_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"F1F2F4\">\n\n <h3><a href=\"https://github.com/UKPLab/sentence-transformers\" target=\"_blank\" rel=\"noopener\">GitHub - UKPLab/sentence-transformers: State-of-the-Art Text Embeddings</a></h3>\n\n <p><span class=\"github-repo-description\">State-of-the-Art Text Embeddings</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"62053\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/dhar2023/48/21143_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/continue-pre-training-bert/62053\">Continue pre-training BERT</a> <a class=\"badge-category__wrapper \" href=\"/c/intermediate/6\"><span data-category-id=\"6\" style=\"--category-badge-color: #0E76BD; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any advanced question you have on any of the Hugging Face library or to share/coordinate with other users your projects using them.\"><span class=\"badge-category__name\">Intermediate</span></span></a>\n </div>\n <blockquote>\n Hello, I have a small portion of label data, and a much bigger set of unlabeled observations. I want to use the unlabeled samples in order to continue the pre-training of BERT, and then built a classifier on top of it. \nFollowing this post \n\nI tried to use BertModel.from_pretrained(‘bert-base-uncased’), and specifically \n model = BertModel.from_pretrained(HF_BERT_MODEL)\n model.cuda()\n\n optimizer = AdamW(model.parameters(),\n lr = 2e-5, \n eps = 1e-8 \n …\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T12:37:35.861Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 10.8,
"yours": false,
"topic_id": 160986,
"topic_slug": "text-classification-of-rss-articles",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/blog/modernbert",
"internal": false,
"reflection": false,
"title": "Finally, a Replacement for BERT: Introducing ModernBERT",
"clicks": 1
},
{
"url": "https://github.com/JointEntropy/awesome-ml-pu-learning",
"internal": false,
"reflection": false,
"title": "GitHub - JointEntropy/awesome-ml-pu-learning: A curated list of resources dedicated to Positive Unlabeled(PU) learning ML methods.",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/continue-pre-training-bert/62053",
"internal": true,
"reflection": false,
"title": "Continue pre-training BERT",
"clicks": 0
},
{
"url": "https://github.com/UKPLab/sentence-transformers",
"internal": false,
"reflection": false,
"title": "GitHub - UKPLab/sentence-transformers: State-of-the-Art Text Embeddings",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-classification-of-rss-articles/160986/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230932,
"name": "John do",
"username": "JPFrancoia",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/dbc845/{size}.png",
"created_at": "2025-07-03T18:07:33.404Z",
"cooked": "<p>Hi,</p>\n<p>Thank you for your answer and sorry for the late reply (got distracted by work, life, etc).<br>\nI have read/watched some of the resources you sent (this video in particular is really nice: <a href=\"https://www.youtube.com/watch?v=uk6SlTzfbUY\" rel=\"noopener nofollow ugc\">https://www.youtube.com/watch?v=uk6SlTzfbUY</a>) and I now have a basic grasp of how positive unlabelled learning works.</p>\n<p>I have implemented two approaches with the following algorithms:</p>\n<ul>\n<li>OneClassSVM</li>\n<li>WeightedElkanotoPuClassifier</li>\n</ul>\n<p>Since last time, I built a very modest dataset of “bad” articles: articles I don’t want to read, I don’t find them interesting. I have labelled 70 of them, I intend to use them in my validation set.</p>\n<h2><a name=\"p-230932-oneclasssvm-1\" class=\"anchor\" href=\"#p-230932-oneclasssvm-1\"></a>OneClassSVM</h2>\n<p>My approach is:</p>\n<ul>\n<li>load 7465 “good” articles (the ones I read, the ones I find interesting)</li>\n<li>compute embeddings with all-MiniLM-L12-v2 for good articles</li>\n<li>train classifier on good embeddings</li>\n<li>prepare 100 good articles and 70 bad articles (none of them was used during training)</li>\n<li>compute precision on validation set: <code>(# of correct good + # of correct bad) / (total good + total bad)</code></li>\n</ul>\n<p>During validation:</p>\n<ul>\n<li>if an article is in fact good and the model gives a score > 0.5 → +1</li>\n<li>if an article is in fact good and the model gives a score < 0.5 → 0</li>\n</ul>\n<p>Same for bad.</p>\n<h2><a name=\"p-230932-weightedelkanotopuclassifier-2\" class=\"anchor\" href=\"#p-230932-weightedelkanotopuclassifier-2\"></a>WeightedElkanotoPuClassifier</h2>\n<p>My approach is:</p>\n<ul>\n<li>load 7465 “good” articles (the ones I read, the ones I find interesting)</li>\n<li>load 7000 unlabelled articles (they could be good or bad)</li>\n<li>compute embeddings with all-MiniLM-L12-v2 for good and unlabelled articles</li>\n<li>train classifier on good and unlabelled embeddings</li>\n<li>prepare 100 good articles and 70 bad articles (none of them was used during training)</li>\n<li>compute precision on validation set: <code>(# of correct good + # of correct bad) / (total good + total bad)</code></li>\n</ul>\n<h2><a name=\"p-230932-results-3\" class=\"anchor\" href=\"#p-230932-results-3\"></a>Results</h2>\n<p>I got insane results and they feel too good to be true:</p>\n<ul>\n<li>OneClassSVM: 92%</li>\n<li>WeightedElkanotoPuClassifier: 98%</li>\n</ul>\n<h2><a name=\"p-230932-questions-4\" class=\"anchor\" href=\"#p-230932-questions-4\"></a>Questions</h2>\n<ul>\n<li>Does it look sensible to you?</li>\n<li>Would you have any tip?</li>\n<li>Do I measure the precision correctly? 
Should I use another metric?</li>\n</ul>\n<p>NOTE: I have done a bit of parameter tuning on the OneClassSVM but not on the WeightedElkanotoPuClassifier.</p>\n<h2><a name=\"p-230932-code-5\" class=\"anchor\" href=\"#p-230932-code-5\"></a>Code</h2>\n<h3><a name=\"p-230932-oneclasssvm-6\" class=\"anchor\" href=\"#p-230932-oneclasssvm-6\"></a>OneClassSVM</h3>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import asyncio\n\nimport numpy as np\nfrom bs4 import BeautifulSoup\nfrom cleantext import clean\nfrom sentence_transformers import SentenceTransformer\n# from sklearn.model_selection import GridSearchCV\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.svm import OneClassSVM\n\nfrom feedoscope.data_registry import data_registry as dr\n\nMODEL_NAME = \"sentence-transformers/all-MiniLM-L12-v2\"\n\n\ndef strip_html_keep_text(html: str) -> str:\n soup = BeautifulSoup(html, \"html.parser\")\n text = soup.get_text(separator=\" \", strip=True)\n return \" \".join(text.split())\n\n\ndef compute_embeddings(model, texts: list[str]):\n embeddings = model.encode(\n texts, show_progress_bar=True, normalize_embeddings=True, convert_to_numpy=True\n )\n return embeddings\n\n\ndef prepare_articles_text(articles) -> list[str]:\n texts = []\n for a in articles:\n text = clean(\n strip_html_keep_text(f\"{a['feed_name']} {a['title']} {a['content']}\")\n )\n texts.append(text)\n\n return texts\n\n\ndef normalize_scores(scores):\n scaler = MinMaxScaler()\n return scaler.fit_transform(scores.reshape(-1, 1)).flatten()\n\n\ndef ocsvm_score(estimator, X):\n # Higher decision_function means more inlier-like\n return np.mean(estimator.decision_function(X))\n\n\nasync def main() -> None:\n print(\"Loading SentenceTransformer model...\")\n model = SentenceTransformer(MODEL_NAME)\n print(\"Model loaded successfully.\")\n\n print(\"Collecting articles from the database...\")\n await dr.global_pool.open(wait=True)\n articles = await dr.get_articles()\n print(f\"Collected {len(articles)} articles.\")\n\n print(\"Computing embeddings for articles...\")\n embeddings = compute_embeddings(model, prepare_articles_text(articles))\n print(f\"Computed embeddings for {len(embeddings)} articles.\")\n\n # Use best parameters directly\n ocsvm = OneClassSVM(kernel=\"linear\", gamma=\"scale\", nu=0.2)\n ocsvm.fit(embeddings)\n\n # # Hyperparameter tuning for OneClassSVM\n # param_grid = {\n # \"kernel\": [\"rbf\", \"linear\", \"sigmoid\"],\n # \"gamma\": [\"scale\", \"auto\", 0.01, 0.1, 1],\n # \"nu\": [0.01, 0.05, 0.1, 0.2]\n # }\n # print(\"Tuning OneClassSVM hyperparameters...\")\n # ocsvm = OneClassSVM()\n # grid = GridSearchCV(\n # OneClassSVM(),\n # param_grid,\n # cv=3,\n # n_jobs=-1,\n # scoring=ocsvm_score\n # )\n # grid.fit(embeddings)\n # best_ocsvm = grid.best_estimator_\n # print(\"Best parameters:\", grid.best_params_)\n\n not_good_sample = await dr.get_sample_not_good()\n not_good_embeddings = compute_embeddings(\n model, prepare_articles_text(not_good_sample)\n )\n raw_scores = ocsvm.decision_function(not_good_embeddings)\n scores = normalize_scores(raw_scores)\n\n correct_not_good, total_good = sum(s <= 0.5 for s in scores), len(scores)\n\n good_sample = await dr.get_sample_good()\n good_embeddings = compute_embeddings(model, prepare_articles_text(good_sample))\n raw_scores = ocsvm.decision_function(good_embeddings)\n scores = normalize_scores(raw_scores)\n\n correct_good, total_not_good = sum(s > 0.5 for s in scores), len(scores)\n\n print(\n f\"Overall precision: {(correct_good + 
correct_not_good) / (total_good + total_not_good):.2f}\"\n )\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n</code></pre>\n<h3><a name=\"p-230932-weightedelkanotopuclassifier-7\" class=\"anchor\" href=\"#p-230932-weightedelkanotopuclassifier-7\"></a>WeightedElkanotoPuClassifier</h3>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import asyncio\n\nimport numpy as np\nfrom bs4 import BeautifulSoup\nfrom cleantext import clean\nfrom pulearn import WeightedElkanotoPuClassifier\nfrom sentence_transformers import SentenceTransformer\nfrom sklearn.svm import SVC\n\nfrom feedoscope.data_registry import data_registry as dr\n\nMODEL_NAME = \"sentence-transformers/all-MiniLM-L12-v2\"\n\n\ndef strip_html_keep_text(html: str) -> str:\n soup = BeautifulSoup(html, \"html.parser\")\n text = soup.get_text(separator=\" \", strip=True)\n return \" \".join(text.split())\n\n\ndef compute_embeddings(model, texts: list[str]):\n embeddings = model.encode(\n texts, show_progress_bar=True, normalize_embeddings=True, convert_to_numpy=True\n )\n return embeddings\n\n\ndef prepare_articles_text(articles) -> list[str]:\n texts = []\n for a in articles:\n text = clean(\n strip_html_keep_text(f\"{a['feed_name']} {a['title']} {a['content']}\")\n )\n texts.append(text)\n\n return texts\n\n\nasync def main() -> None:\n\n print(\"Loading SentenceTransformer model...\")\n model = SentenceTransformer(MODEL_NAME)\n print(\"Model loaded successfully.\")\n\n print(\"Collecting articles from the database...\")\n await dr.global_pool.open(wait=True)\n articles = await dr.get_articles()\n print(f\"Collected {len(articles)} articles.\")\n\n print(\"Computing embeddings for articles...\")\n embeddings = compute_embeddings(model, prepare_articles_text(articles))\n print(f\"Computed embeddings for {len(embeddings)} articles.\")\n\n print(\"Collecting unread articles from the database...\")\n await dr.global_pool.open(wait=True)\n unlabeled_articles = await dr.get_unread_articles()\n print(f\"Collected {len(unlabeled_articles)} unread articles.\")\n\n print(\"Computing embeddings for unread articles...\")\n unlabeled_embeddings = compute_embeddings(\n model, prepare_articles_text(unlabeled_articles)\n )\n print(f\"Computed embeddings for {len(unlabeled_embeddings)} unread articles.\")\n\n # Combine embeddings and labels for PU learning\n X = np.concatenate([embeddings, unlabeled_embeddings], axis=0)\n y = np.concatenate(\n [np.ones(len(embeddings)), np.zeros(len(unlabeled_embeddings))], axis=0\n )\n\n print(\"Fitting PU classifier...\")\n\n # Takes a while for 7k + 7k articles\n svc = SVC(C=10, kernel=\"rbf\", gamma=0.4, probability=True)\n\n # svc = SVC(C=10, kernel='linear', gamma='scale', probability=True)\n\n pu_estimator = WeightedElkanotoPuClassifier(\n estimator=svc,\n labeled=len(embeddings),\n unlabeled=len(unlabeled_embeddings),\n hold_out_ratio=0.2,\n )\n pu_estimator.fit(X, y)\n\n print(\"PU classifier fitted successfully.\")\n\n not_good_sample = await dr.get_sample_not_good()\n not_good_embeddings = compute_embeddings(\n model, prepare_articles_text(not_good_sample)\n )\n scores = pu_estimator.predict_proba(not_good_embeddings)[:, 1]\n\n correct_not_good, total_good = sum(s <= 0.5 for s in scores), len(scores)\n\n good_sample = await dr.get_sample_good()\n good_embeddings = compute_embeddings(model, prepare_articles_text(good_sample))\n scores = pu_estimator.predict_proba(good_embeddings)[:, 1]\n\n correct_good, total_not_good = sum(s > 0.5 for s in scores), len(scores)\n\n print(\n f\"Overall 
precision: {(correct_good + correct_not_good) / (total_good + total_not_good):.2f}\"\n )\n\n breakpoint()\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-03T18:10:46.209Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 160986,
"topic_slug": "text-classification-of-rss-articles",
"display_username": "John do",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.youtube.com/watch?v=uk6SlTzfbUY",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98130,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-classification-of-rss-articles/160986/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230969,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-04T00:34:24.590Z",
"cooked": "<p>There does not seem to be any particular problem, but if the figures are too good, data leakage may be suspected.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://www.geeksforgeeks.org/machine-learning/what-is-data-leakage/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/b/eb43f6eeac1480d83f476ebbc7b8ea0e3a29ec05.png\" class=\"site-icon\" data-dominant-color=\"2F8D46\" width=\"32\" height=\"32\">\n\n <a href=\"https://www.geeksforgeeks.org/machine-learning/what-is-data-leakage/\" target=\"_blank\" rel=\"noopener\" title=\"04:16PM - 16 September 2024\">GeeksforGeeks – 16 Sep 24</a>\n </header>\n\n <article class=\"onebox-body\">\n <img width=\"200\" height=\"200\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/2/e20a5e836d61dc76041fe0189d2ae138a847295b_2_200x200.webp\" class=\"thumbnail onebox-avatar\" data-dominant-color=\"3F5993\">\n\n<h3><a href=\"https://www.geeksforgeeks.org/machine-learning/what-is-data-leakage/\" target=\"_blank\" rel=\"noopener\">What is Data Leakage? - GeeksforGeeks</a></h3>\n\n <p>Your All-in-One Learning Portal: GeeksforGeeks is a comprehensive educational platform that empowers learners across domains-spanning computer science and programming, school education, upskilling, commerce, software tools, competitive exams, and...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-07-04T00:34:24.590Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 160986,
"topic_slug": "text-classification-of-rss-articles",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.geeksforgeeks.org/machine-learning/what-is-data-leakage/",
"internal": false,
"reflection": false,
"title": "What is Data Leakage? - GeeksforGeeks",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-classification-of-rss-articles/160986/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 231099,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-04T21:20:55.581Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-07-04T21:20:55.581Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 160986,
"topic_slug": "text-classification-of-rss-articles",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-classification-of-rss-articles/160986/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello!</p>
<p>I’m a software engineer with good coding skills but limited knowledge about AI. I have embarked on a simple project.</p>
<p>I have a large amount of RSS articles that I have read or liked. I consider these “interesting”. I then have about a gazillion unread articles. These <em>can</em> be interesting, but are most likely uninteresting since I haven’t read them.<br>
My goal is, for any new article, to compute a score of interesting-ness. This will help me quickly identify the articles worth reading.</p>
<p>The articles range in length from 400 to 4000 tokens. I have about 5000 read/liked articles. I was tempted to take about 5000 unread articles, label them as not_important, label all my liked/read articles as important, and then train a binary classifier, something like what is described on the Hugging Face website: <a href="https://huggingface.co/docs/transformers/en/tasks/sequence_classification" class="inline-onebox">Text classification</a>. I used <code>distilbert/distilbert-base-uncased</code> and followed the tutorial’s steps almost exactly.</p>
<pre><code class="lang-auto">{'loss': 0.6051, 'grad_norm': 2.22690749168396, 'learning_rate': 6.162420382165605e-06, 'epoch': 1.59}
{'eval_loss': 0.5926874279975891, 'eval_accuracy': 0.6693258875149581, 'eval_runtime': 357.0262, 'eval_samples_per_second': 7.022, 'eval_steps_per_second': 0.221, 'epoch': 2.0}
{'train_runtime': 12047.1712, 'train_samples_per_second': 1.665, 'train_steps_per_second': 0.052, 'train_loss': 0.592256072220529, 'epoch': 2.0}
</code></pre>
<p>I got modest results after training.</p>
<p>The question I have for this forum is this one: is it the right approach and should I persevere? Should I put some effort into trying to get a better dataset (like trying to label my not_important articles better), or is there a better approach?</p>
<p>For example, I have also considered using the model to calculate the embeddings of all the read/liked articles and then training a one-class classifier with a “traditional” algorithm like SVM, instead of a binary one.<br>
The bottleneck to improving the accuracy of the model will be properly labelling “not_important” articles; if there were a way to get away with not doing that, that would be great <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
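<p>Concretely, the one-class idea would look roughly like this (a minimal sketch, assuming <code>sentence-transformers</code> and <code>scikit-learn</code>; the model name, the <code>nu</code> value, and the tiny example lists are placeholders):</p>
<pre data-code-wrap="python"><code class="lang-python">from sentence_transformers import SentenceTransformer
from sklearn.svm import OneClassSVM

good_texts = ["an article I read and liked", "another liked article"]  # placeholder data
new_texts = ["an unseen article to score"]  # placeholder data

# Embed only the positive class (the read/liked articles).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
good_embeddings = model.encode(good_texts, normalize_embeddings=True)

# Fit a one-class classifier on the positives alone; no "not_important" labels needed.
clf = OneClassSVM(kernel="rbf", nu=0.1)
clf.fit(good_embeddings)

# Higher decision_function values mean "more like the articles I read".
scores = clf.decision_function(model.encode(new_texts, normalize_embeddings=True))
</code></pre>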
<p>Please let me know what you think</p>
|
<p>Hello.</p>
<p>Given that it works reasonably well in practice, I think the approach is correct. There are many <a href="https://huggingface.co/blog/modernbert">successor models to BERT</a>, so it should be possible to improve accuracy using those.</p>
<p>Another approach that can be taken when there is little labeled data is something called <a href="https://github.com/JointEntropy/awesome-ml-pu-learning">Positive Unlabeled Learning</a>…</p>
<p>Another common approach is to use commercial AI to create a training dataset using your own data. This is almost always effective if the budget allows. However, in this case, there is already a considerable amount of data available, so it may be sufficient to process the data using Python.</p>
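<p>For reference, a minimal PU-learning sketch with the <code>pulearn</code> package could look like this (the embedding arrays and the SVC settings are placeholders; positives are the read articles, unlabeled is everything else):</p>
<pre data-code-wrap="python"><code class="lang-python">import numpy as np
from pulearn import WeightedElkanotoPuClassifier
from sklearn.svm import SVC

X_pos = np.random.rand(100, 384)  # placeholder: embeddings of read/liked articles
X_unl = np.random.rand(100, 384)  # placeholder: embeddings of unread articles
X_new = np.random.rand(5, 384)    # placeholder: new articles to score

# Positives get label 1, unlabeled examples get label 0.
X = np.concatenate([X_pos, X_unl], axis=0)
y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))], axis=0)

pu = WeightedElkanotoPuClassifier(
    estimator=SVC(probability=True),
    labeled=len(X_pos),
    unlabeled=len(X_unl),
    hold_out_ratio=0.2,
)
pu.fit(X, y)

# Probability that a new article belongs to the positive ("interesting") class.
p_interesting = pu.predict_proba(X_new)[:, 1]
</code></pre>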
<p>Resources:</p><aside class="onebox githubrepo" data-onebox-src="https://github.com/UKPLab/sentence-transformers">
<header class="source">
<a href="https://github.com/UKPLab/sentence-transformers" target="_blank" rel="noopener">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/2/525a8aeea05adb999e5913593541fb16b1b5bb2d_2_690x344.png" class="thumbnail" data-dominant-color="F1F2F4">
<h3><a href="https://github.com/UKPLab/sentence-transformers" target="_blank" rel="noopener">GitHub - UKPLab/sentence-transformers: State-of-the-Art Text Embeddings</a></h3>
<p><span class="github-repo-description">State-of-the-Art Text Embeddings</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="quote quote-modified" data-post="1" data-topic="62053">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/dhar2023/48/21143_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/continue-pre-training-bert/62053">Continue pre-training BERT</a> <a class="badge-category__wrapper " href="/c/intermediate/6"><span data-category-id="6" style="--category-badge-color: #0E76BD; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="Use this category for any advanced question you have on any of the Hugging Face library or to share/coordinate with other users your projects using them."><span class="badge-category__name">Intermediate</span></span></a>
</div>
<blockquote>
Hello, I have a small portion of label data, and a much bigger set of unlabeled observations. I want to use the unlabeled samples in order to continue the pre-training of BERT, and then built a classifier on top of it.
Following this post
I tried to use BertModel.from_pretrained(‘bert-base-uncased’), and specifically
model = BertModel.from_pretrained(HF_BERT_MODEL)
model.cuda()
optimizer = AdamW(model.parameters(),
lr = 2e-5,
eps = 1e-8
…
</blockquote>
</aside>
|
No (0) models returned by ‘Text2Text’ search filter
|
https://discuss.huggingface.co/t/no-0-models-returned-by-text2text-search-filter/161546
| 161,546
| 2
|
2025-07-02T15:36:06.503000Z
|
[
{
"id": 230709,
"name": "Dom",
"username": "Substance",
"avatar_template": "/user_avatar/discuss.huggingface.co/substance/{size}/50494_2.png",
"created_at": "2025-07-02T15:36:06.565Z",
"cooked": "<p>Hello,</p>\n<p>My colleague reported to me that the ‘Text2Text’ search filter returned 0 models (it was working for them earlier today). I’ve also tested it out myself, and it intermittently returns some model results (sometimes it does show models, but most of the time, it shows no models).</p>\n<p>We’ve tried hard-refreshing both our browsers and trying in separate tabs/browsers, but it doesn’t seem to help. All other search filters work fine.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff.jpeg\" data-download-href=\"/uploads/short-url/vwxXXlMPvDiZzpJh80wHGXj7JMj.jpeg?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_576x500.jpeg\" alt=\"image\" data-base62-sha1=\"vwxXXlMPvDiZzpJh80wHGXj7JMj\" width=\"576\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_576x500.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_864x750.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_1152x1000.jpeg 2x\" data-dominant-color=\"0E121D\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1297×1125 110 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-02T15:36:06.565Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 17,
"reads": 14,
"readers_count": 13,
"score": 92.8,
"yours": false,
"topic_id": 161546,
"topic_slug": "no-0-models-returned-by-text2text-search-filter",
"display_username": "Dom",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98488,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-0-models-returned-by-text2text-search-filter/161546/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230711,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-02T15:42:28.523Z",
"cooked": "<p>I don’t really understand the background, but everyone is in that situation right now.</p><aside class=\"quote\" data-post=\"4\" data-topic=\"161485\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/anakin87/48/29909_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/are-inferenceclient-s-down/161485/4\">Are InferenceClient()'s down?</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n Ok. Text generation models are no longer available through HF Inference API: <a href=\"https://huggingface.co/models?pipeline_tag=text-generation&inference_provider=hf-inference&sort=downloads\" class=\"inline-onebox\">Models - Hugging Face</a> \nIs this intended?\n </blockquote>\n</aside>\n\n<p>I’m not sure if this is related to Hugging Chat ending…</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/huggingchat/chat-ui/discussions/747\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/huggingchat/chat-ui/discussions/747\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/f/2/f2149986648fa8ffbcb27c2be624338a9d848827_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"E8EBED\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/huggingchat/chat-ui/discussions/747\" target=\"_blank\" rel=\"noopener\">huggingchat/chat-ui · [ANNOUNCEMENT] 📣 HuggingChat is closing for now</a></h3>\n\n <p>I have bittersweet news to share. 😢</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-02T15:42:28.523Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 13,
"readers_count": 12,
"score": 17.6,
"yours": false,
"topic_id": 161546,
"topic_slug": "no-0-models-returned-by-text2text-search-filter",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/huggingchat/chat-ui/discussions/747",
"internal": false,
"reflection": false,
"title": "huggingchat/chat-ui · [ANNOUNCEMENT] 📣 HuggingChat is closing for now",
"clicks": 9
},
{
"url": "https://discuss.huggingface.co/t/are-inferenceclient-s-down/161485/4",
"internal": true,
"reflection": false,
"title": "Are InferenceClient()'s down?",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-0-models-returned-by-text2text-search-filter/161546/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230842,
"name": "Lucain Pouget",
"username": "Wauplin",
"avatar_template": "/user_avatar/discuss.huggingface.co/wauplin/{size}/40815_2.png",
"created_at": "2025-07-03T08:27:19.271Z",
"cooked": "<p>Hi there, all “text2text-generation” models have been moved to “text-generation”. Semantically these 2 tags are not <em>exactly</em> the same but having both was quite confusing to a lot of users. We preferred merging both in the bigger category “text-generation”.</p>\n<p>(we need to remove the “text2text-generation” filter though)</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-03T08:27:19.271Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 10,
"readers_count": 9,
"score": 52,
"yours": false,
"topic_id": 161546,
"topic_slug": "no-0-models-returned-by-text2text-search-filter",
"display_username": "Lucain Pouget",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 9207,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-0-models-returned-by-text2text-search-filter/161546/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230944,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-03T20:27:22.892Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-03T20:27:22.892Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 161546,
"topic_slug": "no-0-models-returned-by-text2text-search-filter",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/no-0-models-returned-by-text2text-search-filter/161546/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello,</p>
<p>My colleague reported to me that the ‘Text2Text’ search filter returned 0 models (it was working for them earlier today). I’ve also tested it out myself, and it intermittently returns some model results (sometimes it does show models, but most of the time, it shows no models).</p>
<p>We’ve tried hard-refreshing both our browsers and trying in separate tabs/browsers, but it doesn’t seem to help. All other search filters work fine.</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff.jpeg" data-download-href="/uploads/short-url/vwxXXlMPvDiZzpJh80wHGXj7JMj.jpeg?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_576x500.jpeg" alt="image" data-base62-sha1="vwxXXlMPvDiZzpJh80wHGXj7JMj" width="576" height="500" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_576x500.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_864x750.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/c/dcf145b68921c0a7a488c8b8ff45e714e3892eff_2_1152x1000.jpeg 2x" data-dominant-color="0E121D"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1297×1125 110 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
|
<p>Hi there, all “text2text-generation” models have been moved to “text-generation”. Semantically, these two tags are not <em>exactly</em> the same, but having both was quite confusing to a lot of users, so we preferred to merge them into the bigger “text-generation” category.</p>
<p>(we need to remove the “text2text-generation” filter though)</p>
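<p>If you were relying on the old filter programmatically, the merged tag can still be queried; a quick sketch, assuming a recent <code>huggingface_hub</code> (the sort key and limit are arbitrary):</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import HfApi

api = HfApi()
# Former "text2text-generation" models are now listed under "text-generation".
for m in api.list_models(task="text-generation", sort="downloads", limit=5):
    print(m.id)
</code></pre>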
|
Video and picture making ai
|
https://discuss.huggingface.co/t/video-and-picture-making-ai/161564
| 161,564
| 5
|
2025-07-02T17:01:58.199000Z
|
[
{
"id": 230736,
"name": "da jewelz",
"username": "dajewelz",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/b5ac83/{size}.png",
"created_at": "2025-07-02T17:01:58.257Z",
"cooked": "<p>hello, I was wondering what would be the best ai for me to download from here, I want an ai model that I can feed my own artwork into it so then I can have help making some short form content with it. I would be making videos from ranges 15 min- 30 min and will be storing this ai model on a Mac. Help is very much appreciated on how to download/use/find the right ai model for me. Thank you for looking at this post, and thank you for commenting</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-02T17:01:58.257Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 108,
"reads": 12,
"readers_count": 11,
"score": 517.4,
"yours": false,
"topic_id": 161564,
"topic_slug": "video-and-picture-making-ai",
"display_username": "da jewelz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 69447,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/video-and-picture-making-ai/161564/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230737,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-02T17:15:36.662Z",
"cooked": "<p>Video generation models themselves have become increasingly available as open source, but generating long videos requires considerable computing power…</p>\n<p>The quickest way to find a promising model is to <a href=\"https://huggingface.co/spaces?category=video-generation&sort=trending\">check out Spaces</a>.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/8/c8334deb8e1e700582788c2c957f628d359fb49c_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5B70A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B\" target=\"_blank\" rel=\"noopener\">Wan-AI/Wan2.1-VACE-1.3B · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/f/bff9f14b478c6ff7bffbc391256d86e4ab199a72_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5C71A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged\" target=\"_blank\" rel=\"noopener\">Comfy-Org/Wan_2.1_ComfyUI_repackaged · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-02T17:15:36.662Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 12,
"readers_count": 11,
"score": 42.4,
"yours": false,
"topic_id": 161564,
"topic_slug": "video-and-picture-making-ai",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B",
"internal": false,
"reflection": false,
"title": "Wan-AI/Wan2.1-VACE-1.3B · Hugging Face",
"clicks": 11
},
{
"url": "https://huggingface.co/spaces?category=video-generation&sort=trending",
"internal": false,
"reflection": false,
"title": "Spaces - Hugging Face",
"clicks": 8
},
{
"url": "https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged",
"internal": false,
"reflection": false,
"title": "Comfy-Org/Wan_2.1_ComfyUI_repackaged · Hugging Face",
"clicks": 5
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/video-and-picture-making-ai/161564/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230738,
"name": "da jewelz",
"username": "dajewelz",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/b5ac83/{size}.png",
"created_at": "2025-07-02T17:27:15.253Z",
"cooked": "<p>thank you for this information, and thank you for replying</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-07-02T17:27:15.253Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 16.8,
"yours": false,
"topic_id": 161564,
"topic_slug": "video-and-picture-making-ai",
"display_username": "da jewelz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 69447,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/video-and-picture-making-ai/161564/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230913,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-03T14:58:28.321Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-07-03T14:58:28.321Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 161564,
"topic_slug": "video-and-picture-making-ai",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/video-and-picture-making-ai/161564/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello, I was wondering which AI model would be best for me to download from here. I want a model that I can feed my own artwork into, so it can then help me make some short-form content. I would be making videos ranging from 15 to 30 minutes and will be storing the model on a Mac. Help is very much appreciated on how to download/use/find the right model for me. Thank you for looking at this post, and thank you for commenting.</p>
|
<p>Video generation models themselves have become increasingly available as open source, but generating long videos requires considerable computing power…</p>
<p>The quickest way to find a promising model is to <a href="https://huggingface.co/spaces?category=video-generation&sort=trending">check out Spaces</a>.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B">
<header class="source">
<a href="https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/8/c8334deb8e1e700582788c2c957f628d359fb49c_2_690x372.png" class="thumbnail" data-dominant-color="5B70A4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B" target="_blank" rel="noopener">Wan-AI/Wan2.1-VACE-1.3B · Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged">
<header class="source">
<a href="https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/f/bff9f14b478c6ff7bffbc391256d86e4ab199a72_2_690x372.png" class="thumbnail" data-dominant-color="5C71A4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged" target="_blank" rel="noopener">Comfy-Org/Wan_2.1_ComfyUI_repackaged · Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
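<p>If you would rather run one of these locally in Python than through a Space, a rough text-to-video sketch with <code>diffusers</code> and Wan 2.1 looks like the following. The model id, resolution, and frame count are assumptions taken from the diffusers docs, and on a Mac you would use <code>"mps"</code> or the CPU instead of CUDA, at much slower speeds:</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")  # "mps" on Apple Silicon, with much longer generation times

# A couple of seconds of video; 15-30 minute clips are far beyond consumer hardware.
frames = pipe(
    prompt="a cat walking through a watercolor garden",
    height=480,
    width=832,
    num_frames=33,
).frames[0]
export_to_video(frames, "output.mp4", fps=16)
</code></pre>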
|
Spaces category filters
|
https://discuss.huggingface.co/t/spaces-category-filters/161550
| 161,550
| 24
|
2025-07-02T15:50:29.928000Z
|
[
{
"id": 230715,
"name": "Anthony Noto",
"username": "thankfulcarp",
"avatar_template": "/user_avatar/discuss.huggingface.co/thankfulcarp/{size}/50499_2.png",
"created_at": "2025-07-02T15:50:30.010Z",
"cooked": "<p>I recently made a <a href=\"https://huggingface.co/spaces/thankfulcarp/Wan_FusionX_with_Loras\">space</a> I am pretty proud of using the latest fusionx wan model and 29 different loras. It does image to video but does not show up in the image to video filter on spaces hub. How do I set the category filter so people can find my project?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-02T15:50:30.010Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 8,
"readers_count": 7,
"score": 56.6,
"yours": false,
"topic_id": 161550,
"topic_slug": "spaces-category-filters",
"display_username": "Anthony Noto",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/thankfulcarp/Wan_FusionX_with_Loras",
"internal": false,
"reflection": false,
"title": "Wan I2V FusionX With Loras - a Hugging Face Space by thankfulcarp",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98491,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/spaces-category-filters/161550/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230721,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-02T16:04:40.685Z",
"cooked": "<p>Since there are no items where the space creator explicitly sets categories, I think categories are probably automatically generated by AI. I think <code>title</code> and <code>short_description</code> are used as judgment criteria by AI, so it might be better to specify them explicitly.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/hub/spaces-config-reference\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/hub/spaces-config-reference\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"FAF8F2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/hub/spaces-config-reference\" target=\"_blank\" rel=\"noopener\">Spaces Configuration Reference</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p><strong><code>short_description</code></strong>: <em>string</em> A short description of the Space. This will be displayed in the Space’s thumbnail.</p>\n</blockquote>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-07-02T16:04:40.685Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 161550,
"topic_slug": "spaces-category-filters",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/spaces-config-reference",
"internal": false,
"reflection": false,
"title": "Spaces Configuration Reference",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/spaces-category-filters/161550/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230802,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-03T04:04:50.049Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-07-03T04:04:50.049Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 161550,
"topic_slug": "spaces-category-filters",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/spaces-category-filters/161550/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I recently made a <a href="https://huggingface.co/spaces/thankfulcarp/Wan_FusionX_with_Loras">space</a> I am pretty proud of, using the latest FusionX Wan model and 29 different LoRAs. It does image-to-video but does not show up in the image-to-video filter on the Spaces hub. How do I set the category filter so people can find my project?</p>
|
<p>Since there is no field where the Space creator explicitly sets categories, I think categories are probably generated automatically by AI. The <code>title</code> and <code>short_description</code> are likely used as the judgment criteria, so it might be better to specify them explicitly.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/hub/spaces-config-reference">
<header class="source">
<a href="https://huggingface.co/docs/hub/spaces-config-reference" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png" class="thumbnail" data-dominant-color="FAF8F2" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/hub/spaces-config-reference" target="_blank" rel="noopener">Spaces Configuration Reference</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<blockquote>
<p><strong><code>short_description</code></strong>: <em>string</em> A short description of the Space. This will be displayed in the Space’s thumbnail.</p>
</blockquote>
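<p>In practice, that means filling in the metadata block at the top of the Space’s <code>README.md</code>; for example (the values are placeholders following the configuration reference above):</p>
<pre><code class="lang-yaml">---
title: Wan I2V FusionX With Loras
emoji: 🎬
sdk: gradio
short_description: Image-to-video generation with Wan FusionX and LoRAs
---
</code></pre>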
|
Using datasets to open jsonl
|
https://discuss.huggingface.co/t/using-datasets-to-open-jsonl/161037
| 161,037
| 10
|
2025-06-28T18:33:58.353000Z
|
[
{
"id": 229909,
"name": "bluebingo",
"username": "bluebingo",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f4b2a3/{size}.png",
"created_at": "2025-06-28T18:33:58.407Z",
"cooked": "<h2><a name=\"p-229909-problem-when-using-datasets-to-open-jsonl-1\" class=\"anchor\" href=\"#p-229909-problem-when-using-datasets-to-open-jsonl-1\"></a>Problem When Using Datasets to Open JSONL</h2>\n<p>I am trying to open a JSONL format file using the <code>datasets</code> library. Here is my code:</p>\n<pre><code class=\"lang-auto\">from datasets import load_dataset\n\npath = \"./testdata.jsonl\"\ndataset = load_dataset('json', data_files=path, split='train')\n</code></pre>\n<p>The contents of testdata.jsonl are organized as follows (just for testing):</p>\n<pre><code class=\"lang-auto\">{\"src\":\"hello\",\"term\":{\"a\":\"aa\"}}\n{\"src\":\"hi\",\"term\":{\"b\":\"bb\"}}\n</code></pre>\n<p>When I use the code above to load the dataset and attempt to print the second item, like this:</p>\n<pre><code class=\"lang-auto\">print(dataset[1])\n</code></pre>\n<p>I get the following output:</p>\n<pre><code class=\"lang-auto\">{'src': 'hi', 'term': {'a': None, 'b': 'bb'}}\n</code></pre>\n<p>Instead of the expected output:</p>\n<pre><code class=\"lang-auto\">{'src': 'hi', 'term': {'b': 'bb'}}\n</code></pre>\n<p>How can I obtain the second format of the dataset? Is it possible that I simply forgot to include a parameter?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-28T18:56:54.940Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 46,
"reads": 8,
"readers_count": 7,
"score": 246.6,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "bluebingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98155,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229932,
"name": "Andrew Scott",
"username": "Pimpcat-AU",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png",
"created_at": "2025-06-28T22:47:45.598Z",
"cooked": "<p>Ensure the JSONL file is correctly formatted:<br>\nEach line in the file should be a valid JSON object with no extra commas or brackets. For example, the file should look like this:</p>\n<p>{“src”:“hello”,“term”:{“a”:“aa”}}<br>\n{“src”:“hi”,“term”:{“b”:“bb”}}</p>\n<p>After fixing the JSONL format, use the following code to load the dataset properly:</p>\n<p>from datasets import load_dataset</p>\n<p>path = “./testdata.jsonl”<br>\ndataset = load_dataset(‘json’, data_files=path, split=‘train’)</p>\n<p>print(dataset[1]) # This should now work correctly</p>\n<p>After these changes, the second entry should now print the correct data:</p>\n<p>{‘src’: ‘hi’, ‘term’: {‘b’: ‘bb’}}</p>\n<p>Also, ensure there are no extra spaces or line breaks in the dataset if it’s large. Each line should be a valid JSON object.</p>\n<p><strong>Response generated by Triskel Data Deterministic Ai</strong></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-28T22:48:34.808Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "Andrew Scott",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96276,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229934,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-28T22:55:56.602Z",
"cooked": "<p>Another option, albeit a bit rough, is this:</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">from datasets import load_dataset\n\ndef process(example):\n example[\"term\"] = str({k: v for k, v in example[\"term\"].items() if v is not None})\n return example\n\npath = \"./testdata.jsonl\"\ndataset = load_dataset('json', data_files=path, split='train')\n\nprint(dataset[1]) # {'src': 'hi', 'term': {'a': None, 'b': 'bb'}}\n\ndataset = dataset.map(process)\n\nprint(dataset[1]) # {'src': 'hi', 'term': \"{'b': 'bb'}\"}\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-28T22:55:56.602Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 6.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230033,
"name": "bluebingo",
"username": "bluebingo",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f4b2a3/{size}.png",
"created_at": "2025-06-29T18:35:49.044Z",
"cooked": "<p>Thank you for your advice. I appreciate your efforts, but unfortunately, it hasn’t been effective for me.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-29T18:35:49.044Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "bluebingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98155,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 96276,
"username": "Pimpcat-AU",
"name": "Andrew Scott",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230035,
"name": "bluebingo",
"username": "bluebingo",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f4b2a3/{size}.png",
"created_at": "2025-06-29T18:38:28.361Z",
"cooked": "<p>Thank you for your advice; it was really helpful in solving the problem! However, I find it a bit cumbersome to map the datasets each time I want to open a JSONL file with JSON elements. I wonder if there might be a more permanent solution to address this issue.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-29T18:38:28.361Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "bluebingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98155,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/5",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230064,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-30T01:50:35.067Z",
"cooked": "<blockquote>\n<p>I find it a bit cumbersome to map the datasets each time I want to open a JSONL file with JSON elements. I wonder if there might be a more permanent solution to address this issue.</p>\n</blockquote>\n<p>That’s true. There may be a more concise method (including potential ones). I’ll mention it to the library developer. <a class=\"mention\" href=\"/u/lhoestq\">@lhoestq</a></p>",
"post_number": 8,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-30T01:50:35.067Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/8",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230094,
"name": "bluebingo",
"username": "bluebingo",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f4b2a3/{size}.png",
"created_at": "2025-06-30T08:03:11.121Z",
"cooked": "<p>Thank you! I look forward to any official solutions that the developer might provide.</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-06-30T08:03:11.121Z",
"reply_count": 0,
"reply_to_post_number": 8,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "bluebingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98155,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/9",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230360,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-07-01T12:27:46.538Z",
"cooked": "<p>Hi ! This behavior is expected since <code>datasets</code> uses Arrow which has fixed types. This means each sample should have the same subfields with the same types. Missing subfields are filled with None.</p>\n<p>You can restructure your data differently to fit this paradigm: either converting nested data as one string, or use one list for keys and one list for values.</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-07-01T12:27:46.538Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 28,
"reads": 6,
"readers_count": 5,
"score": 171.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/10",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230443,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-01T20:18:09.947Z",
"cooked": "<p>Thank you, lhonestq!</p>",
"post_number": 11,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-07-01T20:18:09.947Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/11",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230493,
"name": "bluebingo",
"username": "bluebingo",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f4b2a3/{size}.png",
"created_at": "2025-07-02T01:16:11.203Z",
"cooked": "<p>Thank you, lhonestq!</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 11,
"updated_at": "2025-07-02T01:16:11.203Z",
"reply_count": 0,
"reply_to_post_number": 10,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "bluebingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98155,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/12",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230678,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-02T13:17:03.260Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 13,
"post_type": 3,
"posts_count": 11,
"updated_at": "2025-07-02T13:17:03.260Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 5.4,
"yours": false,
"topic_id": 161037,
"topic_slug": "using-datasets-to-open-jsonl",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/using-datasets-to-open-jsonl/161037/13",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<h2><a name="p-229909-problem-when-using-datasets-to-open-jsonl-1" class="anchor" href="#p-229909-problem-when-using-datasets-to-open-jsonl-1"></a>Problem When Using Datasets to Open JSONL</h2>
<p>I am trying to open a JSONL format file using the <code>datasets</code> library. Here is my code:</p>
<pre><code class="lang-auto">from datasets import load_dataset
path = "./testdata.jsonl"
dataset = load_dataset('json', data_files=path, split='train')
</code></pre>
<p>The contents of testdata.jsonl are organized as follows (just for testing):</p>
<pre><code class="lang-auto">{"src":"hello","term":{"a":"aa"}}
{"src":"hi","term":{"b":"bb"}}
</code></pre>
<p>When I use the code above to load the dataset and attempt to print the second item, like this:</p>
<pre><code class="lang-auto">print(dataset[1])
</code></pre>
<p>I get the following output:</p>
<pre><code class="lang-auto">{'src': 'hi', 'term': {'a': None, 'b': 'bb'}}
</code></pre>
<p>Instead of the expected output:</p>
<pre><code class="lang-auto">{'src': 'hi', 'term': {'b': 'bb'}}
</code></pre>
<p>How can I obtain the second format of the dataset? Is it possible that I simply forgot to include a parameter?</p>
|
<p>Thank you, lhoestq!</p>
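<p>For illustration, here is a minimal sketch of the restructuring lhoestq suggests (one list for keys, one for values) so every row shares the same Arrow schema. It assumes the JSONL file has been rewritten with hypothetical <code>term_keys</code>/<code>term_values</code> fields; those names are illustrative, not part of the <code>datasets</code> API:</p>
<pre><code class="lang-py">from datasets import load_dataset

# testdata.jsonl rewritten as, e.g.:
# {"src":"hello","term_keys":["a"],"term_values":["aa"]}
# {"src":"hi","term_keys":["b"],"term_values":["bb"]}
dataset = load_dataset("json", data_files="./testdata.jsonl", split="train")

# Rebuild the original dict per row; no None-filled subfields appear,
# because every row now has identical list-typed columns.
row = dataset[1]
print(dict(zip(row["term_keys"], row["term_values"])))  # {'b': 'bb'}
</code></pre>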
|
How to upload documents to the SupabaseVectorStore?
|
https://discuss.huggingface.co/t/how-to-upload-documents-to-the-supabasevectorstore/161245
| 161,245
| 24
|
2025-07-01T00:22:19.997000Z
|
[
{
"id": 230232,
"name": "Sen Li",
"username": "AllIllusion",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/e9c0ed/{size}.png",
"created_at": "2025-07-01T00:22:20.073Z",
"cooked": "<p>Hi everyone,</p>\n<p>I am learning RAG for GAIA, from here: <a href=\"https://huggingface.co/spaces/baixianger/RobotPai/blob/main/test.ipynb\" class=\"inline-onebox\">test.ipynb · baixianger/RobotPai at main</a></p>\n<p>However, I was not able to upload documents to Supabase, as shown in screenshots:</p>\n<p>I have tried two ways:</p>\n<pre><code class=\"lang-auto\"># wrap the metadata.jsonl's questions and answers into a list of document\nlistDict_QA_Doc = []\nfor dict_RandomQA in listDict_Metadata:\n strQA_Content = f\"Question : {dict_RandomQA['Question']}\\n\\nFinal answer : {dict_RandomQA['Final answer']}\"\n dict_QA_Doc = {\n \"id\": dict_RandomQA['task_id'],\n \"content\" : strQA_Content,\n \"metadata\" : {\n \"source\" : dict_RandomQA['task_id']\n },\n \"embedding\" : embeddings.embed_query(strQA_Content),\n }\n listDict_QA_Doc.append(dict_QA_Doc)\n\n\nresponse = syncClient.table(\"documents\").insert(listDict_QA_Doc).execute()\n</code></pre>\n<p>and</p>\n<pre><code class=\"lang-auto\"># wrap the metadata.jsonl's questions and answers into a list of document\nlistDoc_QA_Metadata = []\nfor dict_Metadata in listDict_Metadata:\n strQA_Content = f\"Question : {dict_Metadata['Question']}\\n\\nFinal answer : {dict_Metadata['Final answer']}\"\n doc_QA_Metadata = Document(\n id = dict_Metadata['task_id'],\n page_content = strQA_Content,\n metadata = {\"source\": dict_Metadata['task_id']},\n embedding = embeddings.embed_query(strQA_Content)\n )\n listDoc_QA_Metadata.append(doc_QA_Metadata)\n\n\nvector_store = SupabaseVectorStore.from_documents(\n listDoc_QA_Metadata,\n embeddings,\n client=syncClient,\n table_name=\"documents\",\n query_name=\"match_documents\",\n)\n</code></pre>\n<p>However, always get the same error:</p>\n<pre><code class=\"lang-auto\">Error inserting data into Supabase: {'message': 'JSON could not be generated', 'code': 404, 'hint': 'Refer to full message for details', 'details': \"b'{}'\"}\n</code></pre>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/9/c9f8eeb9be65317fe4c696a804ee33d33cf604a7.png\" data-download-href=\"/uploads/short-url/sOJj2drSg1wPGBNlNijROXGZAYT.png?dl=1\" title=\"img\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/9/c9f8eeb9be65317fe4c696a804ee33d33cf604a7.png\" alt=\"img\" data-base62-sha1=\"sOJj2drSg1wPGBNlNijROXGZAYT\" width=\"690\" height=\"427\" data-dominant-color=\"F4DBD7\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">img</span><span class=\"informations\">1192×738 48 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>Could anyone please help? <img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-07-01T00:22:20.073Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 25,
"reads": 4,
"readers_count": 3,
"score": 135.8,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "Sen Li",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/baixianger/RobotPai/blob/main/test.ipynb",
"internal": false,
"reflection": false,
"title": "test.ipynb · baixianger/RobotPai at main",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89050,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230235,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-01T00:35:32.775Z",
"cooked": "<p>How about changing the version of <code>pydantic</code>?</p>\n<pre><code class=\"lang-auto\">pip install pydantic==2.10.6\n</code></pre>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/supabase/supabase-py/issues/517\">\n <header class=\"source\">\n\n <a href=\"https://github.com/supabase/supabase-py/issues/517\" target=\"_blank\" rel=\"noopener\">github.com/supabase/supabase-py</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/supabase/supabase-py/issues/517\" target=\"_blank\" rel=\"noopener\">pydntic error on importing supabase</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-08-08\" data-time=\"10:43:24\" data-timezone=\"UTC\">10:43AM - 08 Aug 23 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-09-08\" data-time=\"17:59:37\" data-timezone=\"UTC\">05:59PM - 08 Sep 23 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/Saatvik-droid\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/8/c8f139d1273f9e4602b6d90e08e21950643bb133.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"524627\">\n Saatvik-droid\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">**Describe the bug**\nIf I import supabase as `from supabase import create_clien<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">t` it leads to an import error for field_validator from pydantic.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Install supabase using conda.\n2. 
Import supabase.\n\n**Expected behavior**\nImport with no errors.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Desktop (please complete the following information):**\n - OS: linux\n - Version 1.0.3</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/langchain-ai/langchain/discussions/22823\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/langchain-ai/langchain/discussions/22823\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/a/d/ad87c42fbdb51816e5b67f2868756ea67e410e2b_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"EBE9E8\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/langchain-ai/langchain/discussions/22823\" target=\"_blank\" rel=\"noopener\">Issue with pydantic and langchain comptability · langchain-ai langchain ·...</a></h3>\n\n <p>Checked other resources I added a very descriptive title to this question. I searched the LangChain documentation with the integrated search. I used the GitHub search to find a similar question and...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-07-01T00:35:32.775Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/supabase/supabase-py/issues/517",
"internal": false,
"reflection": false,
"title": "pydntic error on importing supabase · Issue #517 · supabase/supabase-py · GitHub",
"clicks": 0
},
{
"url": "https://github.com/langchain-ai/langchain/discussions/22823",
"internal": false,
"reflection": false,
"title": "Issue with pydantic and langchain comptability · langchain-ai/langchain · Discussion #22823 · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230382,
"name": "Sen Li",
"username": "AllIllusion",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/e9c0ed/{size}.png",
"created_at": "2025-07-01T15:11:59.084Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"John6666\" data-post=\"2\" data-topic=\"161245\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png\" class=\"avatar\"> John6666:</div>\n<blockquote>\n<p><code>pip install pydantic==2.10.6</code></p>\n</blockquote>\n</aside>\n<p>Just tested, still the same error <img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-07-01T15:11:59.084Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 1,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "Sen Li",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89050,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230442,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-07-01T20:16:20.877Z",
"cooked": "<p>Hmm… In that case, could it be that the data you passed is not in the expected JSON structure, as indicated by the error message?</p>\n<p>You can verify this by passing extremely simple sample data that is expected to be passed, rather than the actual data.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-07-01T20:16:20.877Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230453,
"name": "Sen Li",
"username": "AllIllusion",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/e9c0ed/{size}.png",
"created_at": "2025-07-01T21:23:36.192Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"John6666\" data-post=\"4\" data-topic=\"161245\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png\" class=\"avatar\"> John6666:</div>\n<blockquote>\n<p>could it be that the data you passed is not in the expected JSON structure, as indicated by the error message?</p>\n<p>You can verify this by passing extremely simple sample data that is expected to be passed, rather than the actual data.</p>\n</blockquote>\n</aside>\n<p>Solved. <img src=\"https://emoji.discourse-cdn.com/apple/sweat_smile.png?v=14\" title=\":sweat_smile:\" class=\"emoji\" alt=\":sweat_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"> Need to create a table on supabase before uploading.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-07-01T21:23:36.192Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 1,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 20.4,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "Sen Li",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89050,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/5",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230670,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-07-02T12:43:03.536Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-07-02T12:43:03.536Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 161245,
"topic_slug": "how-to-upload-documents-to-the-supabasevectorstore",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-upload-documents-to-the-supabasevectorstore/161245/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi everyone,</p>
<p>I am learning RAG for GAIA, from here: <a href="https://huggingface.co/spaces/baixianger/RobotPai/blob/main/test.ipynb" class="inline-onebox">test.ipynb · baixianger/RobotPai at main</a></p>
<p>However, I was not able to upload documents to Supabase, as shown in screenshots:</p>
<p>I have tried two ways:</p>
<pre><code class="lang-auto"># wrap the metadata.jsonl's questions and answers into a list of document
listDict_QA_Doc = []
for dict_RandomQA in listDict_Metadata:
strQA_Content = f"Question : {dict_RandomQA['Question']}\n\nFinal answer : {dict_RandomQA['Final answer']}"
dict_QA_Doc = {
"id": dict_RandomQA['task_id'],
"content" : strQA_Content,
"metadata" : {
"source" : dict_RandomQA['task_id']
},
"embedding" : embeddings.embed_query(strQA_Content),
}
listDict_QA_Doc.append(dict_QA_Doc)
response = syncClient.table("documents").insert(listDict_QA_Doc).execute()
</code></pre>
<p>and</p>
<pre><code class="lang-auto"># wrap the metadata.jsonl's questions and answers into a list of document
listDoc_QA_Metadata = []
for dict_Metadata in listDict_Metadata:
strQA_Content = f"Question : {dict_Metadata['Question']}\n\nFinal answer : {dict_Metadata['Final answer']}"
doc_QA_Metadata = Document(
id = dict_Metadata['task_id'],
page_content = strQA_Content,
metadata = {"source": dict_Metadata['task_id']},
embedding = embeddings.embed_query(strQA_Content)
)
listDoc_QA_Metadata.append(doc_QA_Metadata)
vector_store = SupabaseVectorStore.from_documents(
listDoc_QA_Metadata,
embeddings,
client=syncClient,
table_name="documents",
query_name="match_documents",
)
</code></pre>
<p>However, always get the same error:</p>
<pre><code class="lang-auto">Error inserting data into Supabase: {'message': 'JSON could not be generated', 'code': 404, 'hint': 'Refer to full message for details', 'details': "b'{}'"}
</code></pre>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/c/9/c9f8eeb9be65317fe4c696a804ee33d33cf604a7.png" data-download-href="/uploads/short-url/sOJj2drSg1wPGBNlNijROXGZAYT.png?dl=1" title="img" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/c/9/c9f8eeb9be65317fe4c696a804ee33d33cf604a7.png" alt="img" data-base62-sha1="sOJj2drSg1wPGBNlNijROXGZAYT" width="690" height="427" data-dominant-color="F4DBD7"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">img</span><span class="informations">1192×738 48 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<p>Could anyone please help? <img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"></p>
|
<aside class="quote no-group" data-username="John6666" data-post="4" data-topic="161245">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png" class="avatar"> John6666:</div>
<blockquote>
<p>could it be that the data you passed is not in the expected JSON structure, as indicated by the error message?</p>
<p>You can verify this by passing extremely simple sample data that is expected to be passed, rather than the actual data.</p>
</blockquote>
</aside>
<p>Solved. <img src="https://emoji.discourse-cdn.com/apple/sweat_smile.png?v=14" title=":sweat_smile:" class="emoji" alt=":sweat_smile:" loading="lazy" width="20" height="20"> Need to create a table on supabase before uploading.</p>
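<p>As a rough sketch of that fix (assuming the table/column layout used by LangChain's SupabaseVectorStore; the vector dimension must match your embedding model), you can probe for the table from Python before inserting, with the one-time SQL kept in a comment to run in the Supabase SQL editor:</p>
<pre><code class="lang-py"># One-time setup in the Supabase SQL editor (assumed layout, not verbatim):
#   create extension if not exists vector;
#   create table documents (
#     id text primary key,
#     content text,
#     metadata jsonb,
#     embedding vector(768)  -- set to your embedding model's dimension
#   );

# Probe that the table exists before bulk-inserting documents.
# Assumes `syncClient` is the already-initialized supabase-py client.
try:
    syncClient.table("documents").select("id").limit(1).execute()
    print("documents table found; safe to insert")
except Exception as err:
    print("create the documents table first:", err)
</code></pre>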
|
How to get a list of all Huggingface download redirections to whitelist?
|
https://discuss.huggingface.co/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486
| 30,486
| 23
|
2023-01-26T14:09:18.895000Z
|
[
{
"id": 56006,
"name": "Ashwani",
"username": "ayadav",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/dbc845/{size}.png",
"created_at": "2023-01-26T14:09:18.971Z",
"cooked": "<p>I work inside a secure corporate VPN network, so I’m unable to download Huggingface models using <code>from_pretrained</code> commands. However, I can request the security team to whitelist certain URLs needed for my use-case.</p>\n<p>The security team has already whitelisted the ‘<a href=\"http://huggingface.co\">huggingface.co</a>’ and ‘<a href=\"http://cdn-lfs.huggingface.co\">cdn-lfs.huggingface.co</a>’ URLs. I can now download the files from repo but the loading functions <code>from_pretrained</code> still don’t work.</p>\n<p>I think it’s getting blocked while redirecting the requests internally. So, is there a way to know all (hop) URLs I can request to whitelist to make the load functions work?</p>\n<p>Thanks in advance.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-26T14:09:18.971Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9350,
"reads": 117,
"readers_count": 116,
"score": 46513.4,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Ashwani",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://cdn-lfs.huggingface.co",
"internal": false,
"reflection": false,
"title": null,
"clicks": 187
},
{
"url": "http://huggingface.co",
"internal": false,
"reflection": false,
"title": "Hugging Face – The AI community building the future.",
"clicks": 86
},
{
"url": "https://discuss.huggingface.co/t/how-to-whitelist-a-hf-space-to-use-brightdata-with-it/143796",
"internal": true,
"reflection": true,
"title": "How to whitelist a HF space to use brightdata with it?",
"clicks": 11
},
{
"url": "https://discuss.huggingface.co/t/cas-bridge-xethub-hf-co-broke/158626/2",
"internal": true,
"reflection": true,
"title": "Cas-bridge.xethub.hf.co broke",
"clicks": 9
},
{
"url": "https://discuss.huggingface.co/t/i-cannot-download-any-large-models-stored-in-xet-with-brave-or-ms-edge-for-weeks/166454/5",
"internal": true,
"reflection": true,
"title": "I cannot download any large models stored in xet with Brave or MS Edge for weeks",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 10
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 14513,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/1",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 10
}
],
"current_user_reaction": null,
"reaction_users_count": 10,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 56027,
"name": "Eliott Coyac",
"username": "coyotte508",
"avatar_template": "/user_avatar/discuss.huggingface.co/coyotte508/{size}/36751_2.png",
"created_at": "2023-01-26T15:48:50.016Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/ayadav\">@ayadav</a></p>\n<p>Can you give more details, like error logs, etc?</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-01-26T15:48:50.016Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 17,
"reads": 114,
"readers_count": 113,
"score": 107.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Eliott Coyac",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 6451,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 86846,
"name": "Brian Law",
"username": "Data-drone",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/7ea924/{size}.png",
"created_at": "2023-08-30T03:58:37.848Z",
"cooked": "<p>Is there any update on this?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-08-30T03:58:37.848Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 30,
"reads": 93,
"readers_count": 92,
"score": 183.6,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Brian Law",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 5630,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/3",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 95802,
"name": "Nik Kramaric",
"username": "cosmo88",
"avatar_template": "/user_avatar/discuss.huggingface.co/cosmo88/{size}/20569_2.png",
"created_at": "2023-10-23T17:34:06.412Z",
"cooked": "<p>Having the same issue. Is there a listing of URLs that we can whitelist? Also if there are any planned changes to URLs is there a roadmap so we can stay on top of it?</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-10-23T17:34:06.412Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 28,
"reads": 85,
"readers_count": 84,
"score": 172,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Nik Kramaric",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31863,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 99563,
"name": "kearney",
"username": "kearney",
"avatar_template": "/user_avatar/discuss.huggingface.co/kearney/{size}/21274_2.png",
"created_at": "2023-11-17T13:50:16.592Z",
"cooked": "<p>I’ll try to supply error logs next time I encounter it, but it has come up multiple times for me as well. When we try to call <code><model>.from_pretrained(\"repo\")</code> in our DataBricks environment, we get an SSL error about not having the proper certificate. We’ve also gotten a <code>max_retries</code> error but I can’t say for certain if that was due to the underlying whitelist request. There are ways around this, but if HF published a domain list that we could use to properly configure our environments, that would be very useful!</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-11-17T13:50:16.592Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 74,
"reads": 80,
"readers_count": 79,
"score": 416,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "kearney",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 33803,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/5",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 101407,
"name": null,
"username": "anon34451149",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/958977/{size}.png",
"created_at": "2023-11-28T23:43:05.295Z",
"cooked": "<p>hi! any updates on this? or any alternatives to follow meanwhile? I am about to try downloading a model and going offline and then pushing it up to databricks. Yet, if you had a better idea, or tried this before, I’d like to hear.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-11-28T23:43:05.295Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 127,
"reads": 80,
"readers_count": 79,
"score": 631,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": null,
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 34668,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 102928,
"name": "Jimmy Wang",
"username": "JimmyWang2023",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/eb8c5e/{size}.png",
"created_at": "2023-12-08T09:13:47.653Z",
"cooked": "<p>I have same issue with download from different cdn name.<br>\nAfter our IT team added<br>\n<code>http://huggingface.co/</code> and<br>\n<code>http://cdn-lfs.huggingface.co/</code> in whitelist.</p>\n<p>For example, it is work for download <code>meta-llama/Llama-2-13b-chat</code>.<br>\nBut error when the cdn become <a href=\"http://cdn-lfs-us-1.huggingface.co/\">cdn-lfs-us-1.huggingface.co</a> or other regions.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 20,
"updated_at": "2023-12-08T09:14:50.041Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 71,
"reads": 77,
"readers_count": 76,
"score": 370.4,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Jimmy Wang",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://cdn-lfs-us-1.huggingface.co/",
"internal": false,
"reflection": false,
"title": null,
"clicks": 173
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35466,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 121539,
"name": "chuck",
"username": "hfchuck",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/ee7513/{size}.png",
"created_at": "2024-03-28T19:31:40.173Z",
"cooked": "<p>Update? Same issue here. I’ve gotten around by using my home network to connect to the hf repo and download to my workstation cache. Then I reconnect to VPN into the corporate network and copy from my workstation to the server cache. This is painfully slow.</p>\n<p>FWIW curl -IL test shows redirection (302 responses) from the repo when I am connected to the corporate network (fails to download). However on my home network there are no redirects (successful download). Is there an issue with general redirection handling?</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-03-28T19:32:53.049Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 75,
"reads": 70,
"readers_count": 69,
"score": 389,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "chuck",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 44983,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 160277,
"name": "Rishav Dash",
"username": "RishuD7",
"avatar_template": "/user_avatar/discuss.huggingface.co/rishud7/{size}/32370_2.png",
"created_at": "2024-10-05T12:59:17.106Z",
"cooked": "<p>Hey was anyone able to find a solution for this?</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-10-05T12:59:17.106Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 141,
"reads": 54,
"readers_count": 53,
"score": 715.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Rishav Dash",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 66383,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 160489,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-10-06T03:28:34.240Z",
"cooked": "<p>Related:</p><aside class=\"quote\" data-post=\"3\" data-topic=\"110001\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img loading=\"lazy\" alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/not-lain/48/23122_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/not-able-to-upload-or-download-custom-datasets/110001/3\">Not able to upload or download custom datasets</a> <a class=\"badge-category__wrapper \" href=\"/c/datasets/10\"><span data-category-id=\"10\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the datasets library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Datasets</span></span></a>\n </div>\n <blockquote>\n Hi <a class=\"mention\" href=\"/u/rishud7\">@RishuD7</a> , according to <a href=\"https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/constants.py\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">huggingface_hub/src/huggingface_hub/constants.py at main · huggingface/huggingface_hub · GitHub</a> I would suggest to try whitelisting : \n\n<a href=\"https://huggingface.co\">https://huggingface.co</a>\n\nand \n\n<a href=\"https://hub-ci.huggingface.co\">https://hub-ci.huggingface.co</a>\n\nshould suffice. \nif this does not work try to copy and paste the full traceback so I can investigate the problem.\n </blockquote>\n</aside>\n",
"post_number": 10,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-10-06T03:28:34.240Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 417,
"reads": 57,
"readers_count": 56,
"score": 2066.4,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/not-able-to-upload-or-download-custom-datasets/110001/3",
"internal": true,
"reflection": false,
"title": "Not able to upload or download custom datasets",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/10",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 160814,
"name": "Pierric Cistac",
"username": "pierric",
"avatar_template": "/user_avatar/discuss.huggingface.co/pierric/{size}/50750_2.png",
"created_at": "2024-10-07T22:01:26.202Z",
"cooked": "<p>Note that for security reasons, we recently updated the domain for our CDN; in order to be able to download files you also need to whitelist the following domains:</p>\n<ul>\n<li><a href=\"http://cdn-lfs-us-1.hf.co\">cdn-lfs-us-1.hf.co</a></li>\n<li><a href=\"http://cdn-lfs-eu-1.hf.co\">cdn-lfs-eu-1.hf.co</a></li>\n<li><a href=\"http://cdn-lfs.hf.co\">cdn-lfs.hf.co</a></li>\n<li><a href=\"http://cas-bridge.xethub.hf.co\">cas-bridge.xethub.hf.co</a> (new as of 02/2025)</li>\n</ul>",
"post_number": 11,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-02-24T20:15:00.912Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 353,
"reads": 54,
"readers_count": 53,
"score": 1895.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Pierric Cistac",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://cdn-lfs-us-1.hf.co",
"internal": false,
"reflection": false,
"title": null,
"clicks": 205
},
{
"url": "http://cdn-lfs.hf.co",
"internal": false,
"reflection": false,
"title": null,
"clicks": 97
},
{
"url": "http://cas-bridge.xethub.hf.co",
"internal": false,
"reflection": false,
"title": null,
"clicks": 89
},
{
"url": "http://cdn-lfs-eu-1.hf.co",
"internal": false,
"reflection": false,
"title": null,
"clicks": 72
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 9
}
],
"moderator": true,
"admin": true,
"staff": true,
"user_id": 3,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/11",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 6
},
{
"id": "+1",
"type": "emoji",
"count": 2
},
{
"id": "open_mouth",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 9,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 188494,
"name": "Remi Le Marois",
"username": "rlemaroi",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/r/96bed5/{size}.png",
"created_at": "2024-12-12T15:11:06.947Z",
"cooked": "<p>we have created exception for SSL inspection for FQDN listed by pierric plus these 2 ones:</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/1X/5c4130fb1d8662cb15c5385a9fd9a44626aa4aa2_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"E9E7E2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co\" target=\"_blank\" rel=\"noopener\">Hugging Face – The AI community building the future.</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://hub-ci.huggingface.co\">\n <header class=\"source\">\n\n <a href=\"https://hub-ci.huggingface.co\" target=\"_blank\" rel=\"noopener\">hub-ci.huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/1X/5c4130fb1d8662cb15c5385a9fd9a44626aa4aa2_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"E9E7E2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://hub-ci.huggingface.co\" target=\"_blank\" rel=\"noopener\">Hugging Face – The AI community building the future.</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>But it is still does not work, always same error encountered SSL: CERTIFICATE_VERIFY_FAILED when trying to download sentence-transformers/all-MiniLM-L6-v2</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-12-12T15:11:06.947Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 39,
"reads": 43,
"readers_count": 42,
"score": 208.6,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Remi Le Marois",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co",
"internal": false,
"reflection": false,
"title": "Hugging Face – The AI community building the future.",
"clicks": 41
},
{
"url": "https://hub-ci.huggingface.co",
"internal": false,
"reflection": false,
"title": "Hugging Face – The AI community building the future.",
"clicks": 23
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 76764,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/12",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 204973,
"name": "Sean Morgan",
"username": "sean-pai",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/c6cbf5/{size}.png",
"created_at": "2025-02-24T14:31:46.249Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/pierric\">@pierric</a> has the above list changed since the <a href=\"https://huggingface.co/blog/xethub-joins-hf\">XetHub announcement</a>?</p>\n<p>While downloading, I’m seeing a domain of <code>cas-bridge.xethub.hf.co</code> as well. Is this the only additional domain or are there others?</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-02-24T14:31:46.249Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 57,
"reads": 28,
"readers_count": 27,
"score": 305.6,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Sean Morgan",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/blog/xethub-joins-hf",
"internal": false,
"reflection": false,
"title": "XetHub is joining Hugging Face!",
"clicks": 30
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 84819,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/13",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 205034,
"name": "Pierric Cistac",
"username": "pierric",
"avatar_template": "/user_avatar/discuss.huggingface.co/pierric/{size}/50750_2.png",
"created_at": "2025-02-24T20:13:22.998Z",
"cooked": "<p>Hey <a class=\"mention\" href=\"/u/sean-pai\">@sean-pai</a>, sorry about that, indeed we recently started migrating some repos from LFS to Xet (checkout <a href=\"https://huggingface.co/blog/from-chunks-to-blocks\">this blogpost</a> if you want to learn more about Xet).</p>\n<p>As a result (and as you found out), you need to add <code>cas-bridge.xethub.hf.co</code> for the download path (I updated my original reply above). We’ll communicate here when we enable the Xet upload path.</p>",
"post_number": 14,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-02-24T20:17:17.808Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 34,
"reads": 25,
"readers_count": 24,
"score": 220,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Pierric Cistac",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/blog/from-chunks-to-blocks",
"internal": false,
"reflection": false,
"title": "From Chunks to Blocks: Accelerating Uploads and Downloads on the Hub",
"clicks": 58
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": true,
"admin": true,
"staff": true,
"user_id": 3,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/14",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 212844,
"name": "Brian Ronan",
"username": "brianronan",
"avatar_template": "/user_avatar/discuss.huggingface.co/brianronan/{size}/30065_2.png",
"created_at": "2025-04-01T22:13:11.369Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/sean-pai\">@sean-pai</a>, just a quick follow up, we’ve just released the Xet client which can be used to download these repos using the xet format directly. If you are interested in faster downloads of Xet enabled repos, follow <a href=\"https://huggingface.co/docs/hub/storage-backends#using-xet-storage\">these instructions here</a>.</p>\n<p>If you install the client and download the same content, you will also need to add two new endpoints, <code>cas-server.xethub.hf.co</code> and <code>transfer.xethub.hf.co</code>.</p>",
"post_number": 15,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-04-01T22:13:11.369Z",
"reply_count": 1,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 46,
"reads": 18,
"readers_count": 17,
"score": 253.6,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Brian Ronan",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/storage-backends#using-xet-storage",
"internal": false,
"reflection": false,
"title": "Storage",
"clicks": 83
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 60126,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/15",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 84819,
"username": "sean-pai",
"name": "Sean Morgan",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/c6cbf5/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224174,
"name": "Mark",
"username": "marked23",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/e95f7d/{size}.png",
"created_at": "2025-05-26T17:53:32.272Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/brianronan\">@brianronan</a>,</p>\n<p>The certificate returned for cas-server, is the cas-bridge certificate.</p>\n<blockquote>\n<p>(.venv) mark@wide:~/prog/b3d-lora-trainer$ openssl s_client -connect <strong>cas-server</strong>.xethub.hf.co:443 -servername <strong>cas-server</strong>.xethub.hf.co</p>\n<p>Connecting to 52.71.209.178<br>\nCONNECTED(00000003)<br>\ndepth=2 C=US, O=Amazon, CN=Amazon Root CA 1<br>\nverify return:1<br>\ndepth=1 C=US, O=Amazon, CN=Amazon RSA 2048 M03<br>\nverify return:1<br>\ndepth=0 CN=<strong>cas-bridge</strong>.xethub.hf.co<br>\nverify return:1</p>\n<p>Certificate chain<br>\n0 s:CN=<strong>cas-bridge</strong>.xethub.hf.co<br>\ni:C=US, O=Amazon, CN=Amazon RSA 2048 M03<br>\na:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256<br>\nv:NotBefore: Jan 29 00:00:00 2025 GMT; NotAfter: Feb 27 23:59:59 2026 GMT<br>\n-snip-</p>\n</blockquote>\n<p>And thus I get <em>certificate verify failed</em> when using from_pretrained().</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">model_name = \"Qwen/Qwen2.5-Coder-7B\"\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n trust_remote_code=True,\n torch_dtype=torch.float16,\n device_map=\"auto\"\n)\n</code></pre>\n<blockquote>\n<p>“timestamp”:“2025-05-26T17:43:40.209499Z”,“level”:“WARN”,“fields”:{“message”:“Reqwest(reqwest::Error { kind: Request, url: \"<a href=\"https://cas-server.xethub.hf.co/reconstruction/cd9b3569e15af48b5338d1f02bf99476542809310dde89f1a4301215b1a8a81d%5C\" rel=\"noopener nofollow ugc\">https://cas-server.xethub.hf.co/reconstruction/cd9b3569e15af48b5338d1f02bf99476542809310dde89f1a4301215b1a8a81d\\</a>”, source: hyper_util::client::legacy::Error(Connect, Ssl(Error { code: ErrorCode(1), cause: Some(Ssl(ErrorStack([Error { code: 167772294, library: \"SSL routines\", function: \"tls_post_process_server_certificate\", reason: \"certificate verify failed\", file: \"ssl/statem/statem_clnt.c\", line: 2092 }]))) }, X509VerifyResult { code: 20, error: \"unable to get local issuer certificate\" })) }). Retrying…“},“filename”:”/home/runner/work/xet-core/xet-core/cas_client/src/http_client.rs\",“line_number”:175}</p>\n</blockquote>",
"post_number": 16,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-05-26T17:53:32.272Z",
"reply_count": 1,
"reply_to_post_number": 15,
"quote_count": 0,
"incoming_link_count": 36,
"reads": 13,
"readers_count": 12,
"score": 197.6,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Mark",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://cas-server.xethub.hf.co/reconstruction/cd9b3569e15af48b5338d1f02bf99476542809310dde89f1a4301215b1a8a81d%5C",
"internal": false,
"reflection": false,
"title": null,
"clicks": 5
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 60646,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/16",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 60126,
"username": "brianronan",
"name": "Brian Ronan",
"avatar_template": "/user_avatar/discuss.huggingface.co/brianronan/{size}/30065_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224698,
"name": "Jared Sulzdorf",
"username": "jsulz",
"avatar_template": "/user_avatar/discuss.huggingface.co/jsulz/{size}/28279_2.png",
"created_at": "2025-05-29T16:45:58.783Z",
"cooked": "<p>Just noting for the followers of this thread that the issue raised here by <a class=\"mention\" href=\"/u/marked23\">@marked23</a> is being handled over here - <a href=\"https://github.com/huggingface/xet-core/issues/351\" class=\"inline-onebox\">Certificate Verify Failed cas-server vs. cas-bridge · Issue #351 · huggingface/xet-core · GitHub</a> - and currently seems unrelated to any issues around whitelisting domains.</p>",
"post_number": 17,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-05-29T16:45:58.783Z",
"reply_count": 0,
"reply_to_post_number": 16,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 11,
"readers_count": 10,
"score": 87.2,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Jared Sulzdorf",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/xet-core/issues/351",
"internal": false,
"reflection": false,
"title": "Certificate Verify Failed cas-server vs. cas-bridge · Issue #351 · huggingface/xet-core · GitHub",
"clicks": 61
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 54269,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/17",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 60646,
"username": "marked23",
"name": "Mark",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/e95f7d/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230377,
"name": "Mario Vela",
"username": "mariovela",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/ed8c4c/{size}.png",
"created_at": "2025-07-01T14:08:50.609Z",
"cooked": "<p>This was working for us but recently started failing with timeouts whenever we use huggingface_hub (via python or CLI).<br>\nI noticed we can still download using <code>curl -L https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/main/model.safetensors?download=true --output model.safetensors</code> but we cannot using</p>\n<pre><code class=\"lang-auto\">from sentence_transformers import SentenceTransformer\nmodel = SentenceTransformer('all-MiniLM-L6-v2')\n</code></pre>\n<p>Nor using</p>\n<pre><code class=\"lang-auto\">huggingface-cli download sentence-transformers/all-MiniLM-L6-v2\n</code></pre>\n<p>Both of these just hang like:</p>\n<pre><code class=\"lang-auto\">huggingface-cli download sentence-transformers/all-MiniLM-L6-v2 --max-workers 1\nFetching 30 files: 0%| | 0/30 [00:00<?, ?it/s]Downloading 'model.safetensors' to '/home/jupyter/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/blobs/53aa51172d142c89d9012cce15ae4d6cc0ca6895895114379cacb4fab128d9db.incomplete'\n\nmodel.safetensors: 0%| | 0.00/90.9M [00:00<?, ?B/s]\n\"timestamp\":\"2025-07-01T13:40:33.080005Z\",\"level\":\"WARN\",\"fields\":{\"message\":\"Reqwest(reqwest::Error { kind: Request, url: \\\"https://cas-server.xethub.hf.co/reconstruction/789fdf16a3e59f4fbfb6002967ecee539a198dadb5be74ca549aa7dc9b1b55fb\\\", source: hyper_util::client::legacy::Error(Connect, ConnectError(\\\"tcp connect error\\\", Os { code: 110, kind: TimedOut, message: \\\"Connection timed out\\\" })) }). Retrying...\"},\"filename\":\"/home/runner/work/xet-core/xet-core/cas_client/src/http_client.rs\",\"line_number\":200}\n{\"timestamp\":\"2025-07-01T13:40:33.080067Z\",\"level\":\"WARN\",\"fields\":{\"message\":\"Retry attempt #0. Sleeping 2.851275886s before the next attempt\"},\"filename\":\"/root/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/reqwest-retry-0.7.0/src/middleware.rs\",\"line_number\":171}\n{\"timestamp\":\"2025-07-01T13:58:03.703922Z\",\"level\":\"WARN\",\"fields\":{\"message\":\"Reqwest(reqwest::Error { kind: Request, url: \\\"https://cas-server.xethub.hf.co/reconstruction/789fdf16a3e59f4fbfb6002967ecee539a198dadb5be74ca549aa7dc9b1b55fb\\\", source: hyper_util::client::legacy::Error(Connect, ConnectError(\\\"tcp connect error\\\", Os { code: 110, kind: TimedOut, message: \\\"Connection timed out\\\" })) }). Retrying...\"},\"filename\":\"/home/runner/work/xet-core/xet-core/cas_client/src/http_client.rs\",\"line_number\":200}\n{\"timestamp\":\"2025-07-01T13:58:03.703998Z\",\"level\":\"WARN\",\"fields\":{\"message\":\"Retry attempt #1. Sleeping 2.339135315s before the next attempt\"},\"filename\":\"/root/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/reqwest-retry-0.7.0/src/middleware.rs\",\"line_number\":171}\n</code></pre>\n<p>It just hangs and times out for the <code>model.safetensors</code> file.</p>\n<p>We have allowlisted:</p>\n<pre><code class=\"lang-auto\">cdn-lfs-us-1.hf.co\ncdn-lfs-eu-1.hf.co\ncdn-lfs.hf.co\ncas-bridge.xethub.hf.co\n</code></pre>\n<p>Any ideas?<br>\nIt seems to be going to a cloudfront IP at some point, but I do not know what for and if it is something that can be stopped.</p>",
"post_number": 18,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-07-01T15:09:28.358Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 54,
"reads": 9,
"readers_count": 8,
"score": 261.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Mario Vela",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98369,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/18",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230383,
"name": "Jared Sulzdorf",
"username": "jsulz",
"avatar_template": "/user_avatar/discuss.huggingface.co/jsulz/{size}/28279_2.png",
"created_at": "2025-07-01T15:15:41.358Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/mariovela\">@mariovela</a></p>\n<p>Could you try allowlisting the following URLs in addition to the current domains you’ve allowlisted:</p>\n<pre><code class=\"lang-auto\">transfer.xethub.hf.co\ncas-server.xethub.hf.co\n</code></pre>\n<p>Both are used when downloading from/uploading to Xet-enabled repositories when <code>hf-xet</code> is installed.</p>\n<p>See <a class=\"mention\" href=\"/u/brianronan\">@brianronan</a>’s <a href=\"https://discuss.huggingface.co/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/15\">comment above</a></p>",
"post_number": 19,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-07-01T15:15:41.358Z",
"reply_count": 1,
"reply_to_post_number": 18,
"quote_count": 0,
"incoming_link_count": 23,
"reads": 9,
"readers_count": 8,
"score": 136.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Jared Sulzdorf",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 54269,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/19",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 98369,
"username": "mariovela",
"name": "Mario Vela",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/ed8c4c/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230384,
"name": "Mario Vela",
"username": "mariovela",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/ed8c4c/{size}.png",
"created_at": "2025-07-01T15:18:30.779Z",
"cooked": "<p>My bad! That works! Thank you! <img src=\"https://emoji.discourse-cdn.com/apple/smiley.png?v=14\" title=\":smiley:\" class=\"emoji\" alt=\":smiley:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 20,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-07-01T15:18:30.779Z",
"reply_count": 0,
"reply_to_post_number": 19,
"quote_count": 0,
"incoming_link_count": 19,
"reads": 9,
"readers_count": 8,
"score": 156.8,
"yours": false,
"topic_id": 30486,
"topic_slug": "how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist",
"display_username": "Mario Vela",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98369,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-get-a-list-of-all-huggingface-download-redirections-to-whitelist/30486/20",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 54269,
"username": "jsulz",
"name": "Jared Sulzdorf",
"avatar_template": "/user_avatar/discuss.huggingface.co/jsulz/{size}/28279_2.png"
},
"action_code": null,
"via_email": null
}
] |
<p>I work inside a secure corporate VPN network, so I’m unable to download Huggingface models using <code>from_pretrained</code> commands. However, I can request the security team to whitelist certain URLs needed for my use-case.</p>
<p>The security team has already whitelisted the ‘<a href="http://huggingface.co">huggingface.co</a>’ and ‘<a href="http://cdn-lfs.huggingface.co">cdn-lfs.huggingface.co</a>’ URLs. I can now download the files from the repo, but the loading functions <code>from_pretrained</code> still don’t work.</p>
<p>I think it’s getting blocked while redirecting the requests internally. So, is there a way to know all (hop) URLs I can request to whitelist to make the load functions work?</p>
<p>Thanks in advance.</p>
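<p>One way to answer the hop-URL question directly is to follow the redirect chain for a single file and print every hostname it touches. Below is a minimal sketch, assuming the <code>requests</code> library; the repo and filename are placeholders, so substitute a file you actually need:</p>
<pre data-code-wrap="python"><code class="lang-python">import requests

# Placeholder: any LFS-backed file that redirects to the CDN will do.
url = "https://huggingface.co/bert-base-uncased/resolve/main/model.safetensors"
resp = requests.get(url, allow_redirects=True, stream=True, timeout=30)

# resp.history holds every intermediate 3xx response, in order,
# so this prints each hop URL that would need whitelisting.
for hop in resp.history + [resp]:
    print(hop.status_code, hop.url)
resp.close()
</code></pre>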
|
<p>Note that for security reasons, we recently updated the domain for our CDN; in order to be able to download files you also need to whitelist the following domains:</p>
<ul>
<li><a href="http://cdn-lfs-us-1.hf.co">cdn-lfs-us-1.hf.co</a></li>
<li><a href="http://cdn-lfs-eu-1.hf.co">cdn-lfs-eu-1.hf.co</a></li>
<li><a href="http://cdn-lfs.hf.co">cdn-lfs.hf.co</a></li>
<li><a href="http://cas-bridge.xethub.hf.co">cas-bridge.xethub.hf.co</a> (new as of 02/2025)</li>
</ul>
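<p>A quick way to verify the allowlist from inside the restricted network is to attempt a TCP connection to each of these domains on port 443. This is a minimal sketch, not an official tool; any host that fails here is a candidate for the security team:</p>
<pre data-code-wrap="python"><code class="lang-python">import socket

DOMAINS = [
    "huggingface.co",
    "cdn-lfs-us-1.hf.co",
    "cdn-lfs-eu-1.hf.co",
    "cdn-lfs.hf.co",
    "cas-bridge.xethub.hf.co",
]

for host in DOMAINS:
    try:
        # Open and immediately close a TCP connection on port 443 (HTTPS).
        socket.create_connection((host, 443), timeout=5).close()
        print(f"OK      {host}")
    except OSError as err:
        print(f"BLOCKED {host}: {err}")
</code></pre>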
|
Smolagents WebSearchTool search for wrong query
|
https://discuss.huggingface.co/t/smolagents-websearchtool-search-for-wrong-query/161008
| 161,008
| 5
|
2025-06-28T13:19:56.214000Z
|
[
{
"id": 229876,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-28T13:19:56.283Z",
"cooked": "<p>I tried the smolagents WebSearchTool to search some information, but it returns irrelevant information, I don’t know if there is a way to fine-tune the result or the query, attached is the code generated from smolagents and the result<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/4/84da5a510c0e506f919b55487112b61319e93076.png\" data-download-href=\"/uploads/short-url/iXgQnsOXVCnevzWXmpjF4nzdOGq.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/4/84da5a510c0e506f919b55487112b61319e93076.png\" alt=\"image\" data-base62-sha1=\"iXgQnsOXVCnevzWXmpjF4nzdOGq\" width=\"678\" height=\"500\" data-dominant-color=\"EBEBEB\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1129×832 49.1 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-28T13:19:56.283Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 50,
"reads": 5,
"readers_count": 4,
"score": 236,
"yours": false,
"topic_id": 161008,
"topic_slug": "smolagents-websearchtool-search-for-wrong-query",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/smolagents-websearchtool-search-for-wrong-query/161008/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229928,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-28T21:36:53.903Z",
"cooked": "<p>The content seems strange, or rather, it looks like the query isn’t being passed…</p>\n<p>There are several implementations of search tools, but if it’s only happening with one of them, the search engine specifications may have changed and the library isn’t compatible.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/smolagents/issues/1386\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/smolagents/issues/1386\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/smolagents</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/smolagents/issues/1386\" target=\"_blank\" rel=\"noopener\">WebSearchTool example from Guide Tour does not work</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-05-26\" data-time=\"20:56:43\" data-timezone=\"UTC\">08:56PM - 26 May 25 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-05-27\" data-time=\"07:27:09\" data-timezone=\"UTC\">07:27AM - 27 May 25 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/AlexiaJM\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/8/380a2be83fcc811dda3dce7bf110fd28c2bfc36e.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"826E60\">\n AlexiaJM\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n bug\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">**Describe the bug**\nThe example about web search from the Guided Tour does not <span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">work. I have internet access.\n\n**Code to reproduce the error**\n> from smolagents import WebSearchTool\n> search_tool = WebSearchTool()\n> print(search_tool(\"Who is the president of Russia?\"))\n\n**Error logs (if any)**\n> Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"env_home/lib/python3.10/site-packages/smolagents/tools.py\", line 205, in __call__\n outputs = self.forward(*args, **kwargs)\n File \"env_home/lib/python3.10/site-packages/smolagents/default_tools.py\", line 227, in forward\n raise Exception(\"No results found! Try a less restrictive/shorter query.\")\nException: No results found! Try a less restrictive/shorter query.\n\n**Packages version:**\nsmolagents==1.16.1</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-28T21:36:53.903Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 20.8,
"yours": false,
"topic_id": 161008,
"topic_slug": "smolagents-websearchtool-search-for-wrong-query",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/smolagents/issues/1386",
"internal": false,
"reflection": false,
"title": "WebSearchTool example from Guide Tour does not work · Issue #1386 · huggingface/smolagents · GitHub",
"clicks": 7
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/smolagents-websearchtool-search-for-wrong-query/161008/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230108,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-30T10:03:47.381Z",
"cooked": "<p>Hi the problem is resolved, thanks for your response, it seems that the SSL or TLS handshake doesn’t work properly, and I tried to go to the duckduckgo website and it returns error. But now it is solved, the problem maybe lies in the date and time of the system which is still not in sync with my local time (as I am currently in a different time zone). The other approach is maybe to clear the SSL state</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-30T10:03:47.381Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 20.4,
"yours": false,
"topic_id": 161008,
"topic_slug": "smolagents-websearchtool-search-for-wrong-query",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/smolagents-websearchtool-search-for-wrong-query/161008/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 230222,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-30T22:04:16.186Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-30T22:04:16.186Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 1,
"readers_count": 0,
"score": 25.2,
"yours": false,
"topic_id": 161008,
"topic_slug": "smolagents-websearchtool-search-for-wrong-query",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/smolagents-websearchtool-search-for-wrong-query/161008/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I tried the smolagents WebSearchTool to search for some information, but it returns irrelevant results. I don’t know if there is a way to fine-tune the result or the query. Attached are the code generated by smolagents and the result:<br>
<div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/8/4/84da5a510c0e506f919b55487112b61319e93076.png" data-download-href="/uploads/short-url/iXgQnsOXVCnevzWXmpjF4nzdOGq.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/8/4/84da5a510c0e506f919b55487112b61319e93076.png" alt="image" data-base62-sha1="iXgQnsOXVCnevzWXmpjF4nzdOGq" width="678" height="500" data-dominant-color="EBEBEB"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1129×832 49.1 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
|
<p>Hi, the problem is resolved, thanks for your response. It seems the SSL/TLS handshake wasn’t working properly: when I tried to open the DuckDuckGo website directly, it also returned an error. It is solved now; the problem probably lay in the system date and time, which was not in sync with my local time (I am currently in a different time zone). Another approach might be to clear the SSL state.</p>
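<p>For anyone hitting the same symptoms, here is a short diagnostic sketch (standard library only, not from the original post) covering the two suspects named above: the TLS handshake against DuckDuckGo and the system clock, since a clock that is hours off breaks certificate validity checks:</p>
<pre data-code-wrap="python"><code class="lang-python">import socket
import ssl
from datetime import datetime, timezone

# Compare this against a trusted clock; large skew breaks cert validation.
print("local UTC time:", datetime.now(timezone.utc).isoformat())

ctx = ssl.create_default_context()
try:
    with socket.create_connection(("duckduckgo.com", 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="duckduckgo.com") as tls:
            # Reaching this line means the handshake and cert check succeeded.
            print("handshake OK, cert valid until:", tls.getpeercert()["notAfter"])
except OSError as err:  # ssl.SSLError is a subclass of OSError
    print("TLS handshake failed:", err)
</code></pre>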
|
Text-to-SQL model keeps missing “<” token
|
https://discuss.huggingface.co/t/text-to-sql-model-keeps-missing-token/158903
| 158,903
| 6
|
2025-06-11T11:05:53.474000Z
|
[
{
"id": 226936,
"name": "Brian Antao",
"username": "BrianAntao",
"avatar_template": "/user_avatar/discuss.huggingface.co/brianantao/{size}/49245_2.png",
"created_at": "2025-06-11T11:05:53.535Z",
"cooked": "<p>Hello all,<br>\nI trained the T5-base model using gretelai/synthetic_text_to_sql data set and then fine tuned it on my specific table schema and set of example queries.<br>\nWhen I test the fine-tuned model it keeps missing the “<” token in the generated query results.<br>\nI have played with various fine-tuning params – like number of epochs.<br>\nWhy thus the resultant model not know to use the “<” token ?<br>\nI added a couple of SQL examples with explicit “<” to the dataset but when I query back it gives me the resulting SQL <em>without</em> the “<” in it which is an incorrect SQL!<br>\nCheers.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-11T11:05:53.535Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 6,
"readers_count": 5,
"score": 86.2,
"yours": false,
"topic_id": 158903,
"topic_slug": "text-to-sql-model-keeps-missing-token",
"display_username": "Brian Antao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96674,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-to-sql-model-keeps-missing-token/158903/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226937,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-11T11:11:17.768Z",
"cooked": "<p>You may need to fine tune the system prompt or validate the generations afterwards with a judge.</p>\n<p>Leave a like if this helps at all.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-16T08:35:02.767Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 1.2,
"yours": false,
"topic_id": 158903,
"topic_slug": "text-to-sql-model-keeps-missing-token",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-to-sql-model-keeps-missing-token/158903/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226947,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-11T11:36:53.055Z",
"cooked": "<p>Hmm… Perhaps tokenizer vocab issue?<br>\n<a href=\"https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char\" class=\"onebox\" target=\"_blank\" rel=\"noopener\">https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char</a></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-11T11:36:53.055Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 1.2,
"yours": false,
"topic_id": 158903,
"topic_slug": "text-to-sql-model-keeps-missing-token",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char",
"internal": false,
"reflection": false,
"title": null,
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-to-sql-model-keeps-missing-token/158903/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 230019,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-29T15:39:57.071Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-29T15:39:57.071Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 158903,
"topic_slug": "text-to-sql-model-keeps-missing-token",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/text-to-sql-model-keeps-missing-token/158903/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello all,<br>
I trained the T5-base model using the gretelai/synthetic_text_to_sql dataset and then fine-tuned it on my specific table schema and set of example queries.<br>
When I test the fine-tuned model, it keeps missing the “<” token in the generated query results.<br>
I have played with various fine-tuning params, like the number of epochs.<br>
Why does the resulting model not know how to use the “<” token?<br>
I added a couple of SQL examples with an explicit “<” to the dataset, but when I query it back it gives me the resulting SQL <em>without</em> the “<” in it, which is incorrect SQL!<br>
Cheers.</p>
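<p>One quick way to test whether this is a tokenizer-vocabulary problem is to round-trip the offending character through the tokenizer: if “<” comes back as <unk>, the base vocabulary simply cannot emit it, no matter how many fine-tuning examples contain it. A minimal diagnostic sketch, assuming the stock t5-base tokenizer (substitute your own checkpoint):</p>
<pre data-code-wrap="python"><code class="lang-python">from transformers import AutoTokenizer

# Assumption: stock t5-base tokenizer; swap in your fine-tuned checkpoint.
tok = AutoTokenizer.from_pretrained("t5-base")

ids = tok.encode("price < 100", add_special_tokens=False)
print(tok.convert_ids_to_tokens(ids))  # an <unk> here means "<" is not in the vocab
print(tok.decode(ids))                 # and the "<" does not survive the round-trip intact
</code></pre>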
|
<p>Hmm… Perhaps tokenizer vocab issue?<br>
<a href="https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char" class="onebox" target="_blank" rel="noopener">https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char</a></p>
|
WebSearchTool error
|
https://discuss.huggingface.co/t/websearchtool-error/160510
| 160,510
| 5
|
2025-06-24T09:42:36.600000Z
|
[
{
"id": 229136,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-24T09:42:36.678Z",
"cooked": "<p>Hi I tried to use WebSearchTool from smolagents and got this kind of error, I’m using ollama with model qwen2.5 7b, can anyone help me</p>\n<p>Code execution failed at line ‘music_recommendations = web_search(query=“best party music”)’ due to: SSLError:<br>\nHTTPSConnectionPool(host=‘<a href=\"http://lite.duckduckgo.com\" rel=\"noopener nofollow ugc\">lite.duckduckgo.com</a>’, port=443): Max retries exceeded with url: /lite/?q=best+party+music<br>\n(Caused by SSLError(SSLCertVerificationError(1, ‘[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:<br>\nself-signed certificate (_ssl.c:1028)’)))</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8.png\" data-download-href=\"/uploads/short-url/xdQheSMuZuqsIBDD2m3cQDxXHrq.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_690x227.png\" alt=\"image\" data-base62-sha1=\"xdQheSMuZuqsIBDD2m3cQDxXHrq\" width=\"690\" height=\"227\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_690x227.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_1035x340.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8.png 2x\" data-dominant-color=\"E5E5E3\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1177×388 27.7 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-24T09:44:33.658Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 62,
"reads": 8,
"readers_count": 7,
"score": 291.6,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://lite.duckduckgo.com",
"internal": false,
"reflection": false,
"title": "DuckDuckGo",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229169,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-24T13:45:17.856Z",
"cooked": "<p>I think this might be an SSL error caused by a proxy, VPN, cloud, or internal network firewall, but it’s in the library code…</p>\n<p>It might be difficult to work around.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/smolagents/reference/tools#smolagents.WebSearchTool\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/smolagents/reference/tools#smolagents.WebSearchTool\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://huggingface.co/docs/smolagents/reference/tools#smolagents.WebSearchTool\" target=\"_blank\" rel=\"noopener\">Tools</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<p><a href=\"https://stackoverflow.com/questions/51925384/unable-to-get-local-issuer-certificate-when-using-requests\" class=\"onebox\" target=\"_blank\" rel=\"noopener\">https://stackoverflow.com/questions/51925384/unable-to-get-local-issuer-certificate-when-using-requests</a></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-24T13:45:17.856Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 11.4,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://stackoverflow.com/questions/51925384/unable-to-get-local-issuer-certificate-when-using-requests",
"internal": false,
"reflection": false,
"title": null,
"clicks": 3
},
{
"url": "https://huggingface.co/docs/smolagents/reference/tools#smolagents.WebSearchTool",
"internal": false,
"reflection": false,
"title": "Tools",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229242,
"name": "Damian Taubaso",
"username": "dtaubaso",
"avatar_template": "/user_avatar/discuss.huggingface.co/dtaubaso/{size}/50040_2.png",
"created_at": "2025-06-24T20:34:07.645Z",
"cooked": "<p>I’m having a similar error with DuckDuckGo<br>\nCode execution failed at line ‘results_retry = web_search(query=simpler_query)’<br>\ndue to: DuckDuckGoSearchException: <a href=\"https://lite.duckduckgo.com/lite/\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">DuckDuckGo</a><br>\nRuntimeError: error sending request for url (<a href=\"https://lite.duckduckgo.com/lite/\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">DuckDuckGo</a>):<br>\noperation timed out</p>\n<p>Caused by:<br>\noperation timed out</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-24T20:34:07.645Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 21.6,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "Damian Taubaso",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://lite.duckduckgo.com/lite/",
"internal": false,
"reflection": false,
"title": "DuckDuckGo",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97828,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229257,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-25T00:22:50.786Z",
"cooked": "<p>Hmm… Perhaps DDG problem…?</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/open-webui/open-webui/discussions/5191\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/open-webui/open-webui/discussions/5191\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/f/2f41df55dba8efa65d6a790e50b2450f5404f2b7_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"F1EFED\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/open-webui/open-webui/discussions/5191\" target=\"_blank\" rel=\"noopener\">Can't Get Web Search DuckDuckGo Working · open-webui open-webui · Discussion...</a></h3>\n\n <p>Bug Report Installation Method pip install openwebui ollama Environment Open WebUI Version: [e.g., v0.3.11] Ollama (if applicable): [e.g., v0.2.0, v0.1.32-rc1] Operating System: [e.g., Windows 10, ...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Or perhaps:</p>\n<pre><code class=\"lang-auto\">pip install -U duckduckgo-search\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-25T02:47:51.070Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 1.4,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/open-webui/open-webui/discussions/5191",
"internal": false,
"reflection": false,
"title": "Can't Get Web Search DuckDuckGo Working · open-webui/open-webui · Discussion #5191 · GitHub",
"clicks": 3
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229523,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-26T10:51:24.636Z",
"cooked": "<p>Hi, thanks for answering, I tried the StackOverflow solution already, the issue seems to be solved, but now I got max retries exceeded error, I still try to find the solution for it</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-26T10:51:24.636Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229524,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-26T10:52:55.396Z",
"cooked": "<p>have you figured out the solution yet, cause I solved the SSL issue already but stuck with the same problem as you</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-26T10:52:55.396Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 97828,
"username": "dtaubaso",
"name": "Damian Taubaso",
"avatar_template": "/user_avatar/discuss.huggingface.co/dtaubaso/{size}/50040_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229533,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-26T12:41:36.577Z",
"cooked": "<p>Hmm… For example, how about with <code>WebSearchTool(engine=\"bing\")</code> ?</p><aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/smolagents</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259\" target=\"_blank\" rel=\"noopener\">src/smolagents/default_tools.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259\" rel=\"noopener\"><code>v1.19.0</code></a>\n</div>\n\n\n\n <pre class=\"onebox\"><code class=\"lang-py\">\n <ol class=\"start lines\" start=\"249\" style=\"counter-reset: li-counter 248 ;\">\n <li> if not results:</li>\n <li> return \"No results found.\"</li>\n <li> return \"## Search Results\\n\\n\" + \"\\n\\n\".join(</li>\n <li> [</li>\n <li> f\"{idx}. [{result['title']}]({result['url']})\\n{result['description']}\"</li>\n <li> for idx, result in enumerate(results, start=1)</li>\n <li> ]</li>\n <li> )</li>\n <li></li>\n <li></li>\n <li class=\"selected\">class WebSearchTool(Tool):</li>\n <li> name = \"web_search\"</li>\n <li> description = \"Performs a web search for a query and returns a string of the top search results formatted as markdown with titles, links, and descriptions.\"</li>\n <li> inputs = {\"query\": {\"type\": \"string\", \"description\": \"The search query to perform.\"}}</li>\n <li> output_type = \"string\"</li>\n <li></li>\n <li> def __init__(self, max_results: int = 10, engine: str = \"duckduckgo\"):</li>\n <li> super().__init__()</li>\n <li> self.max_results = max_results</li>\n <li> self.engine = engine</li>\n <li></li>\n </ol>\n </code></pre>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 7,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-26T12:41:59.427Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 26.2,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259",
"internal": false,
"reflection": false,
"title": "smolagents/src/smolagents/default_tools.py at v1.19.0 · huggingface/smolagents · GitHub",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229875,
"name": "doradoradorayaki",
"username": "dorayaki78",
"avatar_template": "/user_avatar/discuss.huggingface.co/dorayaki78/{size}/50008_2.png",
"created_at": "2025-06-28T13:06:22.071Z",
"cooked": "<p>I tried it, it is working now haha, at least it can surf the internet, but the result still need to be finetuned i think, thanks for the recommendation <img src=\"https://emoji.discourse-cdn.com/apple/+1.png?v=14\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"><img src=\"https://emoji.discourse-cdn.com/apple/grinning_face.png?v=14\" title=\":grinning_face:\" class=\"emoji\" alt=\":grinning_face:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 8,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-28T13:06:22.071Z",
"reply_count": 0,
"reply_to_post_number": 7,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "doradoradorayaki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97781,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/8",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229941,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-29T01:06:38.554Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 9,
"post_type": 3,
"posts_count": 9,
"updated_at": "2025-06-29T01:06:38.554Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 160510,
"topic_slug": "websearchtool-error",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/websearchtool-error/160510/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi, I tried to use WebSearchTool from smolagents and got the error below. I’m using Ollama with the model qwen2.5 7b. Can anyone help me?</p>
<p>Code execution failed at line ‘music_recommendations = web_search(query=“best party music”)’ due to: SSLError:<br>
HTTPSConnectionPool(host=‘<a href="http://lite.duckduckgo.com" rel="noopener nofollow ugc">lite.duckduckgo.com</a>’, port=443): Max retries exceeded with url: /lite/?q=best+party+music<br>
(Caused by SSLError(SSLCertVerificationError(1, ‘[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:<br>
self-signed certificate (_ssl.c:1028)’)))</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8.png" data-download-href="/uploads/short-url/xdQheSMuZuqsIBDD2m3cQDxXHrq.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_690x227.png" alt="image" data-base62-sha1="xdQheSMuZuqsIBDD2m3cQDxXHrq" width="690" height="227" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_690x227.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8_2_1035x340.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/e/8/e8d856dfb06d808390c3f12c8244e1fce0721aa8.png 2x" data-dominant-color="E5E5E3"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1177×388 27.7 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
|
<p>Hmm… For example, how about with <code>WebSearchTool(engine="bing")</code> ?</p><aside class="onebox githubblob" data-onebox-src="https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259">
<header class="source">
<a href="https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259" target="_blank" rel="noopener">github.com/huggingface/smolagents</a>
</header>
<article class="onebox-body">
<h4><a href="https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259" target="_blank" rel="noopener">src/smolagents/default_tools.py</a></h4>
<div class="git-blob-info">
<a href="https://github.com/huggingface/smolagents/blob/v1.19.0/src/smolagents/default_tools.py#L259" rel="noopener"><code>v1.19.0</code></a>
</div>
<pre class="onebox"><code class="lang-py">
<ol class="start lines" start="249" style="counter-reset: li-counter 248 ;">
<li> if not results:</li>
<li> return "No results found."</li>
<li> return "## Search Results\n\n" + "\n\n".join(</li>
<li> [</li>
<li> f"{idx}. [{result['title']}]({result['url']})\n{result['description']}"</li>
<li> for idx, result in enumerate(results, start=1)</li>
<li> ]</li>
<li> )</li>
<li></li>
<li></li>
<li class="selected">class WebSearchTool(Tool):</li>
<li> name = "web_search"</li>
<li> description = "Performs a web search for a query and returns a string of the top search results formatted as markdown with titles, links, and descriptions."</li>
<li> inputs = {"query": {"type": "string", "description": "The search query to perform."}}</li>
<li> output_type = "string"</li>
<li></li>
<li> def __init__(self, max_results: int = 10, engine: str = "duckduckgo"):</li>
<li> super().__init__()</li>
<li> self.max_results = max_results</li>
<li> self.engine = engine</li>
<li></li>
</ol>
</code></pre>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
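<p>For reference, the suggestion above amounts to constructing the same tool with a non-DuckDuckGo backend. A minimal usage sketch (the query string is illustrative):</p>
<pre data-code-wrap="python"><code class="lang-python">from smolagents import WebSearchTool

# Same default tool, different engine, as suggested above.
search = WebSearchTool(engine="bing", max_results=10)
print(search(query="best party music"))  # markdown-formatted list of results
</code></pre>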
|
How can I search models by architecture?
|
https://discuss.huggingface.co/t/how-can-i-search-models-by-architecture/160965
| 160,965
| 5
|
2025-06-28T02:18:39.732000Z
|
[
{
"id": 229814,
"name": "Kim Byoungkwon",
"username": "ssamt",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/ba8739/{size}.png",
"created_at": "2025-06-28T02:18:39.807Z",
"cooked": "<p>Namely, I need a model that satisfies a few conditions, and one of them is that it has LlamaForCausalLM architecture. But I can’t find any interface that allows me to filter for such models, or list them. Any good ways to do this?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T02:18:39.807Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 8,
"readers_count": 7,
"score": 91.6,
"yours": false,
"topic_id": 160965,
"topic_slug": "how-can-i-search-models-by-architecture",
"display_username": "Kim Byoungkwon",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98114,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-can-i-search-models-by-architecture/160965/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229821,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-28T03:56:51.617Z",
"cooked": "<p>Since <code>pipeline_tag</code> is automatically assigned by Hugging Face Hub, it is possible to search by pipeline, but in the case of Transformers, <code>pipeline_tag</code> is determined <em>by the task name</em>, so there is currently no established method for searching by model architecture. Incidentally, <a href=\"https://huggingface.co/models?other=diffusers%3AFluxKontextPipeline\">in the case of Diffusers models, the architecture name is included in <code>diffusers:</code>, so it is possible</a>.</p>\n<p>If the model author has assigned tags themselves, <a href=\"https://huggingface.co/models?other=gemma3n\">you can search by specifying them with <code>other=</code></a>.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T03:59:06.194Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 8,
"readers_count": 7,
"score": 21.6,
"yours": false,
"topic_id": 160965,
"topic_slug": "how-can-i-search-models-by-architecture",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/models?other=gemma3n",
"internal": false,
"reflection": false,
"title": "Models - Hugging Face",
"clicks": 2
},
{
"url": "https://huggingface.co/models?other=diffusers%3AFluxKontextPipeline",
"internal": false,
"reflection": false,
"title": "Models - Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-can-i-search-models-by-architecture/160965/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229822,
"name": "Kim Byoungkwon",
"username": "ssamt",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/ba8739/{size}.png",
"created_at": "2025-06-28T04:00:19.338Z",
"cooked": "<p>Searching with <code>other=llama</code> worked well enough for me, thank you so much!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T04:00:19.338Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 160965,
"topic_slug": "how-can-i-search-models-by-architecture",
"display_username": "Kim Byoungkwon",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98114,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-can-i-search-models-by-architecture/160965/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229870,
"name": "Felicity Wood",
"username": "Felicitywood",
"avatar_template": "/user_avatar/discuss.huggingface.co/felicitywood/{size}/49463_2.png",
"created_at": "2025-06-28T12:09:39.891Z",
"cooked": "<p>There no direct filter for architecture. yet, search llama in the hub, it might work</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-28T12:09:39.891Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 160965,
"topic_slug": "how-can-i-search-models-by-architecture",
"display_username": "Felicity Wood",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97008,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-can-i-search-models-by-architecture/160965/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229937,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-29T00:09:42.459Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-29T00:09:42.459Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 160965,
"topic_slug": "how-can-i-search-models-by-architecture",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-can-i-search-models-by-architecture/160965/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Namely, I need a model that satisfies a few conditions, and one of them is that it has the LlamaForCausalLM architecture. But I can’t find any interface that allows me to filter for such models, or list them. Any good ways to do this?</p>
|
<p>Since <code>pipeline_tag</code> is automatically assigned by Hugging Face Hub, it is possible to search by pipeline, but in the case of Transformers, <code>pipeline_tag</code> is determined <em>by the task name</em>, so there is currently no established method for searching by model architecture. Incidentally, <a href="https://huggingface.co/models?other=diffusers%3AFluxKontextPipeline">in the case of Diffusers models, the architecture name is included in <code>diffusers:</code>, so it is possible</a>.</p>
<p>If the model author has assigned tags themselves, <a href="https://huggingface.co/models?other=gemma3n">you can search by specifying them with <code>other=</code></a>.</p>
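<p>The same tag-based workaround can be scripted with <code>huggingface_hub</code>; a minimal sketch, where reading each repo’s <code>config.json</code> is an optional extra step to confirm the exact architecture name:</p>
<pre data-code-wrap="python"><code class="lang-python">import json
from huggingface_hub import hf_hub_download, list_models

# Tag-based search, equivalent to other=llama in the Hub URL.
for m in list_models(filter="llama", limit=20):
    try:
        # Optional: read the repo's config.json to confirm the architecture.
        with open(hf_hub_download(m.id, "config.json")) as f:
            cfg = json.load(f)
    except Exception:
        continue  # gated repo or no config.json
    if "LlamaForCausalLM" in cfg.get("architectures", []):
        print(m.id)
</code></pre>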
|
ONNX export failed for Qwen/Qwen3-Embedding-0.6B with “invalid unordered_map<K, T> key”
|
https://discuss.huggingface.co/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909
| 160,909
| 59
|
2025-06-27T14:18:15.386000Z
|
[
{
"id": 229721,
"name": "Nikolskiy",
"username": "Colegero",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/eada6e/{size}.png",
"created_at": "2025-06-27T14:18:15.450Z",
"cooked": "<p>Hello everyone,</p>\n<p>I am trying to export the “Qwen/Qwen3-Embedding-0.6B” model to ONNX using the “optimum” library. According to the Optimum documentation, the “Qwen3” architecture is supported for ONNX export.</p>\n<p>However, the export process fails with a error: “invalid unordered_map<K, T> key”</p>\n<pre><code class=\"lang-auto\">from optimum.exporters.onnx import main_export\nimport os\n\nmodel_id = \"Qwen/Qwen3-Embedding-0.6B\"\noutput_dir = \"qwen3_embedding_onnx_from_script\"\nos.makedirs(output_dir, exist_ok=True)\n\nprint(f\"start export '{model_id}' \")\n\ntry:\n main_export(\n model_id,\n output=output_dir,\n task=\"feature-extraction\",\n trust_remote_code=True,\n opset=20\n )\n print(f\"Model '{model_id}' finish '{output_dir}'\")\n\nexcept Exception as e:\n print(f\"error: {e}\")\n</code></pre>\n<ul>\n<li>I have tried using both <code>task='feature-extraction'</code> and <code>task='default'</code> (by letting <code>optimum</code> infer it automatically).</li>\n<li>Both attempts result in the same <code>invalid unordered_map<K, T> key</code> error.<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06.png\" data-download-href=\"/uploads/short-url/2twKVC9pG3QKpFmNZe7iMpYHVAy.png?dl=1\" title=\"qwen3\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_679x500.png\" alt=\"qwen3\" data-base62-sha1=\"2twKVC9pG3QKpFmNZe7iMpYHVAy\" width=\"679\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_679x500.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_1018x750.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06.png 2x\" data-dominant-color=\"0E121C\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">qwen3</span><span class=\"informations\">1289×949 70.2 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></li>\n</ul>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T14:18:15.450Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 249,
"reads": 9,
"readers_count": 8,
"score": 1186.6,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "Nikolskiy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98077,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229729,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-27T14:56:36.578Z",
"cooked": "<p>This seems pretty difficult to get working. I failed too. I don’t want to reinstall PyTorch…<img src=\"https://emoji.discourse-cdn.com/apple/sob.png?v=14\" title=\":sob:\" class=\"emoji\" alt=\":sob:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\"># pip install -U optimum[onnxruntime]\n# pip install -U accelerate transformers sentence-transformers\n\nfrom optimum.exporters.onnx import main_export\nimport os\n\nmodel_id = \"Qwen/Qwen3-Embedding-0.6B\"\noutput_dir = \"qwen3_embedding_onnx_from_script\"\nos.makedirs(output_dir, exist_ok=True)\n\nprint(f\"start export '{model_id}' \")\n\ntry:\n main_export(\n model_id,\n output=output_dir,\n task=\"feature-extraction\",\n trust_remote_code=True,\n opset=20 # opset=17 with PyTorch 1.x may work? https://huggingface.co/zhiqing/Qwen3-Embedding-0.6B-ONNX/discussions/1 https://github.com/pytorch/pytorch/issues/120559\n # With 2.x, \"error: Exporting the operator 'aten::__ior_' to ONNX opset version 20 is not supported.\"\n )\n print(f\"Model '{model_id}' finish '{output_dir}'\")\n\nexcept Exception as e:\n print(f\"error: {e}\")\n</code></pre>\n<blockquote>\n<p><code>invalid unordered_map<K, T> key</code> error.</p>\n</blockquote>\n<p>Seems 2.x issue, too…</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/onnx/onnx/issues/5862\">\n <header class=\"source\">\n\n <a href=\"https://github.com/onnx/onnx/issues/5862\" target=\"_blank\" rel=\"noopener\">github.com/onnx/onnx</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/onnx/onnx/issues/5862\" target=\"_blank\" rel=\"noopener\"> unordered_map<K, T> key</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-01-18\" data-time=\"13:10:26\" data-timezone=\"UTC\">01:10PM - 18 Jan 24 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-01-18\" data-time=\"17:32:10\" data-timezone=\"UTC\">05:32PM - 18 Jan 24 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/visin109\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/9/1/9123f95eafc8cc1d7a7fab44c7102290c1dca05d.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"EAD9E7\">\n visin109\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n bug\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\"># Bug Report\n\n**Error description:**\n\n```\n[188](file:///C:/Users/P.Vijay%20<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt 
hidden\">Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/onnx/utils.py:188) @_beartype.beartype\n [189](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/onnx/utils.py:189) def export(\n [190](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/onnx/utils.py:190) model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],\n (...)\n [206](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/onnx/utils.py:206) export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,\n...\n [511](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/autograd/function.py:511) '(vmap, grad, jvp, jacrev, ...), it must override the setup_context '\n [512](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/autograd/function.py:512) 'staticmethod. For more details, please see '\n [513](file:///C:/Users/P.Vijay%20Srinivasan/AppData/Local/Programs/Python/Python310/lib/site-packages/torch/autograd/function.py:513) 'https://pytorch.org/docs/master/notes/extending.func.html')\n\n**RuntimeError: invalid unordered_map<K, T> key**\n```\n\n**System information**\n- OS Platform and Distribution: Windows 64-bit\n- ONNX version :1.15\n- Python version:3.10\n- Torch version: 2.0.1 + cpu\n\n\n**- Code**\n```\nbatch_size = 1\n\nchannels = 3 # Adjust this based on your model's expected number of input channels\n\ndepth = 16 # This is an example value; adjust based on your model's requirements\n\nheight = 224\n\nwidth = 224\n\nx = torch.randn(batch_size, channels, depth, height, width, requires_grad=True).to('cpu') \n\ntorch.onnx.export(torch_model, x, \"super_resolution.onnx\", export_params=True, do_constant_folding=False, keep_initializers_as_inputs=True, input_names = ['input'], output_names = ['output'],dynamic_axes={'input' : {0 : 'batch_size'}, 'output' : {0 : 'batch_size'}})`\n```\n**Expected behavior**\nmodel should be converted to ONNX without any errors</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T15:00:01.857Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 8,
"readers_count": 7,
"score": 41.4,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/onnx/onnx/issues/5862",
"internal": false,
"reflection": false,
"title": null,
"clicks": 6
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229730,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-27T15:11:09.025Z",
"cooked": "<p>Probably, if a parameter that forces <code>attn_implementation=\"eager\"</code> at <code>model.from_pretrained()</code> part is implemented in Exporter, it will work with PyTorch 2.x as well…</p><aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L340\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L340\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/optimum</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L340\" target=\"_blank\" rel=\"noopener\">optimum/exporters/onnx/__main__.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L340\" rel=\"noopener\"><code>main</code></a>\n</div>\n\n\n\n <pre class=\"onebox\"><code class=\"lang-py\">\n <ol class=\"start lines\" start=\"330\" style=\"counter-reset: li-counter 329 ;\">\n <li> autodetected_message = \"\"</li>\n <li> model_tasks = TasksManager.get_supported_tasks_for_model_type(</li>\n <li> model_type, exporter=\"onnx\", library_name=library_name</li>\n <li> )</li>\n <li> raise ValueError(</li>\n <li> f\"Asked to export a {model_type} model for the task {task}{autodetected_message}, but the Optimum ONNX exporter only supports the tasks {', '.join(model_tasks.keys())} for {model_type}. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task {task} to be supported in the ONNX export for {model_type}.\"</li>\n <li> )</li>\n <li></li>\n <li> # TODO: Fix in Transformers so that SdpaAttention class can be exported to ONNX.</li>\n <li> # This was fixed in transformers 4.42.0, we can remve it when minimum transformers version is updated to 4.42</li>\n <li class=\"selected\"> if model_type in SDPA_ARCHS_ONNX_EXPORT_NOT_SUPPORTED and is_transformers_version(\"<\", \"4.42\"):</li>\n <li> loading_kwargs[\"attn_implementation\"] = \"eager\"</li>\n <li></li>\n <li>with DisableCompileContextManager():</li>\n <li> model = TasksManager.get_model_from_task(</li>\n <li> task,</li>\n <li> model_name_or_path,</li>\n <li> subfolder=subfolder,</li>\n <li> revision=revision,</li>\n <li> cache_dir=cache_dir,</li>\n <li> token=token,</li>\n </ol>\n </code></pre>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T15:11:09.025Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9,
"reads": 7,
"readers_count": 6,
"score": 46.2,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L340",
"internal": false,
"reflection": false,
"title": "optimum/optimum/exporters/onnx/__main__.py at main · huggingface/optimum · GitHub",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229733,
"name": "Nikolskiy",
"username": "Colegero",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/eada6e/{size}.png",
"created_at": "2025-06-27T15:41:18.226Z",
"cooked": "<p>Thank you for your help! Unfortunately, your suggestions didn’t work:</p>\n<ol>\n<li>Tried attn_implementation=“eager” - same “invalid unordered_map<K, T> key” error</li>\n<li>Tested opset from 16 to 20 - identical results</li>\n<li>Tried different export approaches (ORTModelForFeatureExtraction, torch.onnx.export) - same failure everywhere</li>\n</ol>\n<p>It seems the issue is deeper at the compatibility level between Qwen3 architecture and current PyTorch/ONNX versions. (((((</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T15:41:18.226Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 21.2,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "Nikolskiy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98077,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229791,
"name": "Nikolskiy",
"username": "Colegero",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/c/eada6e/{size}.png",
"created_at": "2025-06-27T22:39:09.088Z",
"cooked": "<p>Yeah, the error was indeed tied to torch 2.6.0. I installed this combo: pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1, and the issue is gone—thanks for the heads-up! Man, I’m so fed up with these constant PyTorch “rollercoasters” (((</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T22:39:09.088Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 6,
"readers_count": 5,
"score": 36,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "Nikolskiy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 98077,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/5",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229861,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-28T10:40:04.437Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-06-28T10:40:04.437Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 5,
"readers_count": 4,
"score": 40.8,
"yours": false,
"topic_id": 160909,
"topic_slug": "onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/onnx-export-failed-for-qwen-qwen3-embedding-0-6b-with-invalid-unordered-map-k-t-key/160909/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello everyone,</p>
<p>I am trying to export the “Qwen/Qwen3-Embedding-0.6B” model to ONNX using the “optimum” library. According to the Optimum documentation, the “Qwen3” architecture is supported for ONNX export.</p>
<p>However, the export process fails with an error: “invalid unordered_map<K, T> key”</p>
<pre><code class="lang-auto">from optimum.exporters.onnx import main_export
import os
model_id = "Qwen/Qwen3-Embedding-0.6B"
output_dir = "qwen3_embedding_onnx_from_script"
os.makedirs(output_dir, exist_ok=True)
print(f"start export '{model_id}' ")
try:
main_export(
model_id,
output=output_dir,
task="feature-extraction",
trust_remote_code=True,
opset=20
)
print(f"Model '{model_id}' finish '{output_dir}'")
except Exception as e:
print(f"error: {e}")
</code></pre>
<ul>
<li>I have tried using both <code>task='feature-extraction'</code> and <code>task='default'</code> (by letting <code>optimum</code> infer it automatically).</li>
<li>Both attempts result in the same <code>invalid unordered_map<K, T> key</code> error.<br>
<div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06.png" data-download-href="/uploads/short-url/2twKVC9pG3QKpFmNZe7iMpYHVAy.png?dl=1" title="qwen3" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_679x500.png" alt="qwen3" data-base62-sha1="2twKVC9pG3QKpFmNZe7iMpYHVAy" width="679" height="500" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_679x500.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06_2_1018x750.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/115ad8f772757c039245fd2009fff0dfe7370f06.png 2x" data-dominant-color="0E121C"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">qwen3</span><span class="informations">1289×949 70.2 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></li>
</ul>
|
<p>This seems pretty difficult to get working. I failed too. I don’t want to reinstall PyTorch…<img src="https://emoji.discourse-cdn.com/apple/sob.png?v=14" title=":sob:" class="emoji" alt=":sob:" loading="lazy" width="20" height="20"></p>
<pre data-code-wrap="py"><code class="lang-py"># pip install -U optimum[onnxruntime]
# pip install -U accelerate transformers sentence-transformers
from optimum.exporters.onnx import main_export
import os
model_id = "Qwen/Qwen3-Embedding-0.6B"
output_dir = "qwen3_embedding_onnx_from_script"
os.makedirs(output_dir, exist_ok=True)
print(f"start export '{model_id}' ")
try:
main_export(
model_id,
output=output_dir,
task="feature-extraction",
trust_remote_code=True,
opset=20 # opset=17 with PyTorch 1.x may work? https://huggingface.co/zhiqing/Qwen3-Embedding-0.6B-ONNX/discussions/1 https://github.com/pytorch/pytorch/issues/120559
# With 2.x, "error: Exporting the operator 'aten::__ior_' to ONNX opset version 20 is not supported."
)
print(f"Model '{model_id}' finish '{output_dir}'")
except Exception as e:
print(f"error: {e}")
</code></pre>
<blockquote>
<p><code>invalid unordered_map<K, T> key</code> error.</p>
</blockquote>
<p>Seems to be a PyTorch 2.x issue, too…</p>
<p>Related report: <a href="https://github.com/onnx/onnx/issues/5862" rel="noopener">onnx/onnx#5862 “unordered_map<K, T> key”</a> (bug, opened and closed 18 Jan 2024): the same <code>RuntimeError: invalid unordered_map<K, T> key</code> raised by <code>torch.onnx.export</code>, reported on Windows 64-bit with ONNX 1.15, Python 3.10, and Torch 2.0.1+cpu.</p>
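<p>For reference, the fix reported later in this thread was pinning PyTorch back to the 2.5.x line before re-running the export script above. A minimal sketch (versions copied from the reporter’s message; adjust to your CUDA build):</p>
<pre data-code-wrap="py"><code class="lang-py"># pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1
import torch

# The failure was observed on torch 2.6.0; 2.5.1 made the export succeed per the thread.
assert torch.__version__.startswith("2.5"), f"found torch {torch.__version__}"
</code></pre>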
|
Scheduling failure: unable to schedule
|
https://discuss.huggingface.co/t/scheduling-failure-unable-to-schedule/160642
| 160,642
| 64
|
2025-06-25T14:19:57.042000Z
|
[
{
"id": 229359,
"name": "Alban Huntziger",
"username": "Albaninho10",
"avatar_template": "/user_avatar/discuss.huggingface.co/albaninho10/{size}/50078_2.png",
"created_at": "2025-06-25T14:19:57.111Z",
"cooked": "<p>Hello,</p>\n<p>I want to deploy my model but I always get this error after +/- 20 minutes of “deployment”:</p>\n<p>Endpoint encountered an error.<br>\nYou can try restarting it using the “retry” button above. Check [ logs] for more details.<br>\n[Server message]Endpoint failed to start<br>\nScheduling failure: unable to schedule</p>\n<p>And in the logs I get this error:</p>\n<p><code>Error 502 while fetching logs for \"mon-modele-bricks-hiv\":</code></p>\n<p>Has this ever happened to anyone?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-25T14:19:57.111Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 24,
"reads": 7,
"readers_count": 6,
"score": 181.4,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "Alban Huntziger",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/ajay-hinduja-geneva-switzerland-swiss-scheduling-failure-unable-to-schedule-error/162031/2",
"internal": true,
"reflection": true,
"title": "Ajay Hinduja Geneva, Switzerland (Swiss): \"Scheduling Failure: Unable to Schedule\" Error",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97887,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229368,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-06-25T15:03:38.762Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/albaninho10\">@Albaninho10</a> Thank you for reporting! We’re investigating now.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-25T15:03:38.762Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/2",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229578,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-06-26T20:18:28.866Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/albaninho10\">@Albaninho10</a> Thank you for waiting! This error message is related to availability of the GPU instance at the time of deployment - this can be resolved by selecting a different instance or region if possible.</p>\n<p>We’ve added updating this error message so that it’s clearer on the roadmap, though there’s no ETA just yet. Please let us know if you have any feedback about Inference Endpoints - we’re all ears!</p>\n<p>I also wanted to mention our <a href=\"https://endpoints.huggingface.co/catalog\">Model Catalog</a>, which has ready-to-deploy models that require no additional customization and deployment is verified by Hugging Face.</p>\n<p>Let us know if you have other questions.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-26T20:18:28.866Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 26.2,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://endpoints.huggingface.co/catalog",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/3",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229600,
"name": "Andrew Scott",
"username": "Pimpcat-AU",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png",
"created_at": "2025-06-27T00:31:07.836Z",
"cooked": "<p>I’ve seen similar issues with deployment failures related to GPU availability. From what you’re describing, it seems like the GPU instance may not be available when the model tries to deploy, which causes the 502 error. One possible solution is to try selecting a different instance type or region during deployment to ensure that there are available GPU resources at the time of deployment. Also, double check if there’s any region specific resource limitation that might be causing the issue.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T00:31:33.137Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "Andrew Scott",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96276,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229660,
"name": "Alban Huntziger",
"username": "Albaninho10",
"avatar_template": "/user_avatar/discuss.huggingface.co/albaninho10/{size}/50078_2.png",
"created_at": "2025-06-27T07:44:09.723Z",
"cooked": "<p>Thanks for the reply, indeed by changing region and GPU the model is deployed correctly !</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-27T07:44:09.723Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "Alban Huntziger",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97887,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 31941,
"username": "meganariley",
"name": "Megan Riley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229779,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-27T19:44:53.671Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-06-27T19:44:53.671Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 160642,
"topic_slug": "scheduling-failure-unable-to-schedule",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/scheduling-failure-unable-to-schedule/160642/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello,</p>
<p>I want to deploy my model but I always get this error after +/- 20 minutes of “deployment”:</p>
<p>Endpoint encountered an error.<br>
You can try restarting it using the “retry” button above. Check [logs] for more details.<br>
[Server message]Endpoint failed to start<br>
Scheduling failure: unable to schedule</p>
<p>And in the logs I get this error:</p>
<p><code>Error 502 while fetching logs for "mon-modele-bricks-hiv":</code></p>
<p>Has this ever happened to anyone?</p>
|
<p>Hi <a class="mention" href="/u/albaninho10">@Albaninho10</a> Thank you for waiting! This error message is related to availability of the GPU instance at the time of deployment - this can be resolved by selecting a different instance or region if possible.</p>
<p>We’ve added updating this error message so that it’s clearer on the roadmap, though there’s no ETA just yet. Please let us know if you have any feedback about Inference Endpoints - we’re all ears!</p>
<p>I also wanted to mention our <a href="https://endpoints.huggingface.co/catalog">Model Catalog</a>, which has ready-to-deploy models that require no additional customization and deployment is verified by Hugging Face.</p>
<p>Let us know if you have other questions.</p>
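<p>When the console itself cannot schedule, retrying programmatically across regions is also possible. A minimal sketch using the <code>huggingface_hub</code> client (the endpoint name, model repo, task, regions, and instance types below are illustrative assumptions, not values from this thread):</p>
<pre data-code-wrap="py"><code class="lang-py"># pip install -U huggingface_hub
from huggingface_hub import create_inference_endpoint

# Hypothetical candidate regions; GPU capacity varies by region and instance type.
candidates = [("aws", "us-east-1"), ("aws", "eu-west-1")]

for vendor, region in candidates:
    try:
        endpoint = create_inference_endpoint(
            "my-endpoint",                  # hypothetical endpoint name
            repository="my-org/my-model",   # hypothetical model repo
            framework="pytorch",
            task="text-classification",    # hypothetical task
            accelerator="gpu",
            vendor=vendor,
            region=region,
            instance_size="x1",
            instance_type="nvidia-t4",     # pick a type offered in that region
        )
        endpoint.wait()  # blocks until the endpoint is running, raises on failure
        print(f"Deployed in {region}: {endpoint.url}")
        break
    except Exception as err:
        print(f"{vendor}/{region} failed: {err}")
</code></pre>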
|
Inference result not aligned with local version of same model and revision
|
https://discuss.huggingface.co/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514
| 160,514
| 64
|
2025-06-24T10:46:33.697000Z
|
[
{
"id": 229141,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T10:46:33.757Z",
"cooked": "<p>Hello,<br>\nI am trying to run this embedding model “sentence-transformers/LaBSE” with revision=“836121a0533e5664b21c7aacc5d22951f2b8b25b” on the Inference Endpoints.</p>\n<p>I have a result, but the embeddings numbers are different from the local execution. And not even correlated using cosine similarity.</p>\n<p>Any idea what’s going on ?</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/6/8/684837f333df2812ea88220197145eda516e3bb5.png\" data-download-href=\"/uploads/short-url/eSwnPT9NL9PZtXrTRXUfN2bWNPT.png?dl=1\" title=\"Screen Shot 2025-06-24 at 12.45.53 PM\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_642x500.png\" alt=\"Screen Shot 2025-06-24 at 12.45.53 PM\" data-base62-sha1=\"eSwnPT9NL9PZtXrTRXUfN2bWNPT\" width=\"642\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_642x500.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_963x750.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/6/8/684837f333df2812ea88220197145eda516e3bb5.png 2x\" data-dominant-color=\"FAFAFA\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Screen Shot 2025-06-24 at 12.45.53 PM</span><span class=\"informations\">1089×847 78.8 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from abc import ABC, abstractmethod\nimport numpy as np\nimport requests\nfrom sentence_transformers import SentenceTransformer\nfrom sbw_fiabilis.logger import get_logger, set_level\nimport os\nfrom dotenv import load_dotenv\n\nlogger = get_logger()\n\n\nclass EmbeddingInterface(ABC):\n \"\"\"Interface abstraite pour les services d'embedding.\"\"\"\n \n @abstractmethod\n def encode(self, texts, batch_size=None, show_progress_bar=False):\n pass\n\n\nclass LocalEmbeddingService(EmbeddingInterface):\n \"\"\"Implémentation locale utilisant SentenceTransformer.\"\"\"\n \n def __init__(self):\n WORKING_DIR = os.getenv(\"WORKING_DIR\", os.path.join(os.path.dirname(__file__), \"../../data/working_dir\"))\n HF_HOME = os.path.join(WORKING_DIR, \".hf\")\n os.environ[\"HF_HOME\"] = HF_HOME\n\n self.model = SentenceTransformer(\"sentence-transformers/LaBSE\", revision=\"836121a0533e5664b21c7aacc5d22951f2b8b25b\", cache_folder=HF_HOME)\n logger.info(f\"LocalEmbeddingService configuré\")\n \n def encode(self, texts, batch_size=32, show_progress_bar=False):\n return self.model.encode(texts, batch_size=batch_size, show_progress_bar=show_progress_bar)\n\n\nclass APIEmbeddingService(EmbeddingInterface):\n \"\"\"Implémentation utilisant l'API Hugging Face.\"\"\"\n \n def __init__(self):\n self.api_url = os.getenv(\"EMBEDDING_API_URL\")\n self.api_key = os.getenv(\"EMBEDDING_API_KEY\")\n if not self.api_url or not self.api_key:\n raise ValueError(\"EMBEDDING_API_URL et EMBEDDING_API_KEY doivent être définis\")\n self.headers = {\n \"Accept\": \"application/json\",\n \"Authorization\": f\"Bearer {self.api_key}\",\n \"Content-Type\": \"application/json\"\n }\n logger.info(f\"ApiEmbeddingService configuré\")\n \n def _query_api(self, payload):\n 
try:\n response = requests.post(self.api_url, headers=self.headers, json=payload, timeout=30)\n response.raise_for_status()\n return response.json()\n except requests.exceptions.RequestException as e:\n logger.error(f\"Erreur lors de la requête API: {e}\")\n raise\n \n def encode(self, texts, batch_size=32, show_progress_bar=False):\n if not texts:\n return np.array([])\n \n all_embeddings = []\n total_texts = len(texts)\n \n logger.info(f\"Encodage via API: {total_texts} textes en lots de {batch_size}\")\n \n for i in range(0, total_texts, batch_size):\n batch = texts[i:i + batch_size]\n \n payload = {\n \"inputs\": batch,\n \"parameters\": {}\n }\n \n response = self._query_api(payload)\n \n # Gestion des différents formats de réponse API\n if isinstance(response, list):\n batch_embeddings = response\n elif isinstance(response, dict) and \"embeddings\" in response:\n batch_embeddings = response[\"embeddings\"]\n else:\n raise ValueError(f\"Format de réponse API inattendu: {type(response)}\")\n \n all_embeddings.extend(batch_embeddings)\n \n logger.info(f\" Lot traité: {min(i + batch_size, total_texts)}/{total_texts}\")\n \n return all_embeddings\n\n\n\n\n\ndef test():\n logger = get_logger()\n set_level(\"DEBUG\")\n\n load_dotenv()\n\n texts = [\"toto\", \"tata\"]\n\n service = LocalEmbeddingService()\n embeddings = service.encode(texts)\n logger.info(embeddings[0][:5])\n logger.info(embeddings[1][:5])\n\n service = APIEmbeddingService()\n embeddings = service.encode(texts)\n logger.info(embeddings[0][:5])\n logger.info(embeddings[1][:5])\n\nif __name__ == \"__main__\":\n test()\n</code></pre>",
"post_number": 1,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T10:46:33.757Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 28,
"reads": 11,
"readers_count": 10,
"score": 152.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229158,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T13:07:12.033Z",
"cooked": "<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/4/343b5859bea8baa7a05598e432fd3559f541f06f.png\" data-download-href=\"/uploads/short-url/7s3Yr70mGpIeWbpkaQ1KR08N0tN.png?dl=1\" title=\"Screen Shot 2025-06-24 at 12.45.37 PM\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/4/343b5859bea8baa7a05598e432fd3559f541f06f_2_690x367.png\" alt=\"Screen Shot 2025-06-24 at 12.45.37 PM\" data-base62-sha1=\"7s3Yr70mGpIeWbpkaQ1KR08N0tN\" width=\"690\" height=\"367\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/4/343b5859bea8baa7a05598e432fd3559f541f06f_2_690x367.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/4/343b5859bea8baa7a05598e432fd3559f541f06f_2_1035x550.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/4/343b5859bea8baa7a05598e432fd3559f541f06f_2_1380x734.png 2x\" data-dominant-color=\"FAFAFB\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Screen Shot 2025-06-24 at 12.45.37 PM</span><span class=\"informations\">1601×853 111 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T13:07:12.033Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 17,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/2",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229160,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T13:09:11.456Z",
"cooked": "<p>The result with different embeddings.</p>\n<pre><code class=\"lang-auto\">INFO - Logger level set to INFO\nINFO - Logger level set to DEBUG\nINFO - LocalEmbeddingService configuré\nINFO - [ 0.02300638 -0.07002795 -0.01850945 -0.03634194 0.0507826 ]\nINFO - [-0.03088209 -0.05037568 -0.00730146 -0.0068823 0.03126564]\nINFO - ApiEmbeddingService configuré\nINFO - Encodage via API: 2 textes en lots de 32\nINFO - Lot traité: 2/2\nINFO - [0.0077932924, 0.015989138, 0.010355308, 0.0026318827, 0.019499298]\nINFO - [-0.007399403, -0.03194063, -0.016836794, 0.022840464, 0.001694431]\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T13:09:11.456Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 17,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/3",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229176,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-24T13:54:28.398Z",
"cooked": "<p>If you select anything other than “Custom,” I think the contents of <code>handler.py</code> will be ignored. In this case, I think model will be executed with the default arguments of the default pipeline. That may be why there is a difference from the local code.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/inference-endpoints/guides/custom_handler\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/inference-endpoints/guides/custom_handler\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/a/4ab5b454b8210697406807d06e431ec677069516_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F1EFE9\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/inference-endpoints/guides/custom_handler\" target=\"_blank\" rel=\"noopener\">Create custom Inference Handler</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 4,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T13:54:28.398Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 10,
"readers_count": 9,
"score": 12,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/inference-endpoints/guides/custom_handler",
"internal": false,
"reflection": false,
"title": "Create custom Inference Handler",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229183,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T14:13:40.723Z",
"cooked": "<p>Thank you John for helping.<br>\nI am not using this way of running an endpoint, I am using the no-code approach and the UI is showing the right model with the right version (screenshots).</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T14:13:40.723Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229186,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-24T14:22:07.337Z",
"cooked": "<p>This means that either the library (in this case, TGI and SentenceTransformers) is installed locally or on the endpoint, or the code for the template is simply buggy…<br>\nIf the repository version specification does not work, that may also be a bug, but if that is the only issue, the cosine similarity should not be extremely off.</p>\n<p>As shown below, a fairly old version of the library is used in the endpoint. Of course, it is possible to update it manually…</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/inference-endpoints/others/runtime\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/inference-endpoints/others/runtime\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/a/4ab5b454b8210697406807d06e431ec677069516_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F1EFE9\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/inference-endpoints/others/runtime\" target=\"_blank\" rel=\"noopener\">Inference Endpoints Version</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 6,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T14:22:07.337Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/inference-endpoints/others/runtime",
"internal": false,
"reflection": false,
"title": "Inference Endpoints Version",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229187,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T14:25:36.828Z",
"cooked": "<p>Indeed the log of the replica doesn’t really seems to take into account any of the params provided in the UI.</p>\n<p>The log of the replica :</p>\n<blockquote>\n<p>Args { model_id: “/rep****ory”, revision: None, tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, default_prompt_name: None, default_prompt: None, hf_api_token: None, hf_token: None, hostname: “r-rpelissier-sbw-fidi-labse-58w96y74-e4770-0t00y”, port: 80, uds_path: “/tmp/text-embeddings-inference-server”, huggingface_hub_cache: Some(“/repository/cache”), payload_limit: 2000000, api_key: None, json_output: true, disable_spans: false, otlp_endpoint: None, otlp_service_name: “text-embeddings-inference.server”, cors_allow_origin: None }</p>\n</blockquote>",
"post_number": 7,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T14:26:16.484Z",
"reply_count": 0,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229189,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T14:31:31.849Z",
"cooked": "<p>Too bad, if I need to debug this (a paid service).<br>\nThe purpose of a managed service is to ignore the underlying complexity of provisioning, maintaining versions… I am really disappointed by what seems to be a “tools for POC” but not a production ready service.<br>\nAnd having a mailto:… (that attempt to open my mail desktop app instead of gmail) as the only way to reach the support was another proof that this is not too serious.</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T14:32:10.122Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/8",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229190,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-24T14:37:01.619Z",
"cooked": "<p>If it’s for a paid service, using Expert Support is probably the fastest and most reliable option, especially if it seems like a bug.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/support\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/support\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/373;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/8/782ff89d3542148adb93bb9b8412f6f62e9af29e_2_690x373.png\" class=\"thumbnail\" data-dominant-color=\"F1F0F0\" width=\"690\" height=\"373\"></div>\n\n<h3><a href=\"https://huggingface.co/support\" target=\"_blank\" rel=\"noopener\">Expert Support – Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>BTW, on my local PC:</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">from sentence_transformers import SentenceTransformer # sentence-transformers 4.0.1\nimport torch\nsentences = [\"This is an example sentence\", \"Each sentence is converted\"]\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(f\"Running on {device}.\") # Running on cuda.\n\nmodel = SentenceTransformer(\"sentence-transformers/LaBSE\").to(device)\nembeddings = model.encode(sentences)\nprint(\"main:\", embeddings)\n#main: [[ 0.02882478 -0.00602382 -0.05947006 ... -0.03002249 -0.029607\n# 0.00067482]\n# [-0.05550233 0.02546483 -0.02157256 ... 0.02932105 0.01150041\n# -0.00848792]]\n\nmodel = SentenceTransformer(\"sentence-transformers/LaBSE\", revision=\"836121a0533e5664b21c7aacc5d22951f2b8b25b\").to(device)\nembeddings = model.encode(sentences)\nprint(\"836121a0533e5664b21c7aacc5d22951f2b8b25b:\", embeddings)\n#836121a0533e5664b21c7aacc5d22951f2b8b25b: [[ 0.02882478 -0.00602382 -0.05947006 ... -0.03002249 -0.029607\n# 0.00067482]\n# [-0.05550233 0.02546483 -0.02157256 ... 0.02932105 0.01150041\n# -0.00848792]]\n\nmodel.to(\"cpu\")\nembeddings = model.encode(sentences)\nprint(\"On CPU:\", embeddings)\n#On CPU: [[ 0.02882476 -0.00602385 -0.05947007 ... -0.03002251 -0.02960699\n# 0.00067482]\n# [-0.05550234 0.02546484 -0.02157255 ... 0.02932107 0.01150037\n# -0.00848786]]\n</code></pre>",
"post_number": 9,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T14:37:01.619Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/support",
"internal": false,
"reflection": false,
"title": "Expert Support – Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229194,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-24T15:03:39.346Z",
"cooked": "<p>At least locally consistent. Thank you !</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-24T15:03:39.346Z",
"reply_count": 0,
"reply_to_post_number": 9,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/10",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229349,
"name": "Erik Kaunismäki",
"username": "erikkaum",
"avatar_template": "/user_avatar/discuss.huggingface.co/erikkaum/{size}/29571_2.png",
"created_at": "2025-06-25T13:34:16.110Z",
"cooked": "<p>Hi rpelissier <img src=\"https://emoji.discourse-cdn.com/apple/waving_hand.png?v=14\" title=\":waving_hand:\" class=\"emoji\" alt=\":waving_hand:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>Sorry about the hassle here. I did a deep dive on issue and I think I know what’s going on: the model deployed in your inference endpoint uses the <a href=\"https://github.com/huggingface/text-embeddings-inference/\">TEI server engine</a>. Whereas the local example uses sentence-transformers. And unfortunately there’s a mismatch between the implementations. This model is one of the few that uses a Dense module, which is supported in sentence transformers but not in TEI.</p>\n<p>So when the model is ran with TEI (and therefore on inference endpoints), it’s equivalent to doing this in sentence transformers:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from sentence_transformers import SentenceTransformer\nimport torch\nsentences = [\"This is an example sentence\", \"Each sentence is converted\"]\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(f\"Running on {device}.\")\n\nmodel = SentenceTransformer(\"sentence-transformers/LaBSE\").to(device)\nembeddings = model.encode(sentences)\nprint(\"default\", embeddings)\n\nedited_model = SentenceTransformer(\"sentence-transformers/LaBSE\").to(device)\ndel edited_model[2]\nembeddings = edited_model.encode(sentences)\nprint(\"del model[2]:\", embeddings)\n</code></pre>\n<p>this gives the output:</p>\n<pre><code class=\"lang-auto\">default [[ 0.02882483 -0.00602379 -0.05947006 ... -0.03002251 -0.029607\n 0.00067482]\n [-0.05550232 0.02546485 -0.02157257 ... 0.02932104 0.0115004\n -0.00848789]]\ndel model[2]: [[-0.00814162 0.01150823 -0.01516913 ... -0.02249936 0.02313923\n -0.02578063]\n [ 0.00584357 0.03796612 0.0039336 ... 0.03305857 0.03542801\n 0.0157448 ]]\n</code></pre>\n<p>where the former corresponds to the same results in the post above, and the latter should be similar to the model deployed on inference endpoints with TEI.</p>\n<p>This is indeed not ideal and I’ve notified the maintainers of TEI so they can work on either supporting the Dense feature or alternatively clearly showing that this model isn’t supported in TEI.</p>\n<p>As a potential solution, when you deploy this model on Inference Endpoints, you can select the “Default” container instead of the TEI one. 
The default container is a simple wrapper around the sentence transformers library, so it’s not as performant as TEI, but it should give you the correct embeddings.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38.jpeg\" data-download-href=\"/uploads/short-url/xy9ZlUYMEuvitj2EqzObamLN61W.jpeg?dl=1\" title=\"Screenshot 2025-06-25 at 15.33.07\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_689x229.jpeg\" alt=\"Screenshot 2025-06-25 at 15.33.07\" data-base62-sha1=\"xy9ZlUYMEuvitj2EqzObamLN61W\" width=\"689\" height=\"229\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_689x229.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_1033x343.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_1378x458.jpeg 2x\" data-dominant-color=\"F1F2F3\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Screenshot 2025-06-25 at 15.33.07</span><span class=\"informations\">2558×852 125 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>Hopefully this helps <img src=\"https://emoji.discourse-cdn.com/apple/raising_hands.png?v=14\" title=\":raising_hands:\" class=\"emoji\" alt=\":raising_hands:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 11,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-25T13:34:16.110Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 6,
"readers_count": 5,
"score": 66.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Erik Kaunismäki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/text-embeddings-inference/",
"internal": false,
"reflection": false,
"title": "GitHub - huggingface/text-embeddings-inference: A blazing fast inference solution for text embeddings models",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 58545,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/11",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229355,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-25T13:59:29.994Z",
"cooked": "<p>Thank you, erikkaum!</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-25T13:59:29.994Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 51.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/12",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229506,
"name": "Renaud Pelissier",
"username": "rpelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png",
"created_at": "2025-06-26T09:08:21.026Z",
"cooked": "<p>Thank tou erikkaum, now I understand.<br>\nSo this feels like a serious bug to have an inference service ignoring some layers of the inference model. A big warning should show, at least.<br>\nI am sorry but to me it is a blocker for adoption of your product. It is a nice idea, but not reliable for production. I will give another try in 6 months. In the mean time I will go terraform and some autoscalable docker container. (No so easy though, I have been working on it for the past couple of day, and autoscaling with caching the model weights and with enough CPU, is not really what it was designed for.</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-26T09:08:21.026Z",
"reply_count": 1,
"reply_to_post_number": 11,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 66.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Renaud Pelissier",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97785,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/13",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 58545,
"username": "erikkaum",
"name": "Erik Kaunismäki",
"avatar_template": "/user_avatar/discuss.huggingface.co/erikkaum/{size}/29571_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229520,
"name": "Erik Kaunismäki",
"username": "erikkaum",
"avatar_template": "/user_avatar/discuss.huggingface.co/erikkaum/{size}/29571_2.png",
"created_at": "2025-06-26T09:54:34.426Z",
"cooked": "<p>Hi rpelissier,</p>\n<p>I totally understand and agree that it’s a serious bug.</p>\n<p>Also just as a heads up: if you deploy this model on your own infra with the <a href=\"https://github.com/huggingface/text-embeddings-inference\">text-embedding-inference server</a>, you’ll have the same bug.</p>\n<p>So when you deploy on your own infra make sure to use the sentence-transformer implementation so that the embeddings are correct <img src=\"https://emoji.discourse-cdn.com/apple/+1.png?v=14\" title=\":+1:\" class=\"emoji\" alt=\":+1:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 14,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-26T09:54:34.426Z",
"reply_count": 0,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Erik Kaunismäki",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/text-embeddings-inference",
"internal": false,
"reflection": false,
"title": "GitHub - huggingface/text-embeddings-inference: A blazing fast inference solution for text embeddings models",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 58545,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/14",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 97785,
"username": "rpelissier",
"name": "Renaud Pelissier",
"avatar_template": "/user_avatar/discuss.huggingface.co/rpelissier/{size}/50013_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229556,
"name": "Alvaro Bartolome",
"username": "alvarobartt",
"avatar_template": "/user_avatar/discuss.huggingface.co/alvarobartt/{size}/35126_2.png",
"created_at": "2025-06-26T16:33:19.049Z",
"cooked": "<p>Hey <a class=\"mention\" href=\"/u/rpelissier\">@rpelissier</a> thanks for reporting! We’ve just pushed the changes to fix that and handle the <code>2_Dense/</code> modules when available on the Hub, it’s still a work in progress at <a href=\"https://github.com/huggingface/text-embeddings-inference/pull/660\" class=\"inline-onebox\">Add `Dense`, `DenseLayer` and `DenseConfig` to handle `2_Dense/` by alvarobartt · Pull Request #660 · huggingface/text-embeddings-inference · GitHub</a> but we hope to release it soon, so stay tuned and we’ll ping you back <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=14\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>Also thanks a lot <a class=\"mention\" href=\"/u/erikkaum\">@erikkaum</a> for handling, <a class=\"mention\" href=\"/u/tomaarsen\">@tomaarsen</a> for the assistance while solving it and <a class=\"mention\" href=\"/u/narsil\">@Narsil</a> for the PR review!</p>",
"post_number": 15,
"post_type": 1,
"posts_count": 16,
"updated_at": "2025-06-26T16:33:19.049Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 76.2,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "Alvaro Bartolome",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/text-embeddings-inference/pull/660",
"internal": false,
"reflection": false,
"title": "Add `Dense`, `DenseLayer` and `DenseConfig` to handle `2_Dense/` by alvarobartt · Pull Request #660 · huggingface/text-embeddings-inference · GitHub",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 4853,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/15",
"reactions": [
{
"id": "clap",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229668,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-27T08:24:30.058Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 16,
"post_type": 3,
"posts_count": 16,
"updated_at": "2025-06-27T08:24:30.058Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 5.8,
"yours": false,
"topic_id": 160514,
"topic_slug": "inference-result-not-aligned-with-local-version-of-same-model-and-revision",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inference-result-not-aligned-with-local-version-of-same-model-and-revision/160514/16",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello,<br>
I am trying to run the embedding model “sentence-transformers/LaBSE” with revision=“836121a0533e5664b21c7aacc5d22951f2b8b25b” on Inference Endpoints.</p>
<p>I get a result, but the embedding numbers are different from the local execution, and not even correlated using cosine similarity.</p>
<p>Any idea what’s going on ?</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/6/8/684837f333df2812ea88220197145eda516e3bb5.png" data-download-href="/uploads/short-url/eSwnPT9NL9PZtXrTRXUfN2bWNPT.png?dl=1" title="Screen Shot 2025-06-24 at 12.45.53 PM" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_642x500.png" alt="Screen Shot 2025-06-24 at 12.45.53 PM" data-base62-sha1="eSwnPT9NL9PZtXrTRXUfN2bWNPT" width="642" height="500" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_642x500.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/8/684837f333df2812ea88220197145eda516e3bb5_2_963x750.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/6/8/684837f333df2812ea88220197145eda516e3bb5.png 2x" data-dominant-color="FAFAFA"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">Screen Shot 2025-06-24 at 12.45.53 PM</span><span class="informations">1089×847 78.8 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<pre data-code-wrap="python"><code class="lang-python">from abc import ABC, abstractmethod
import numpy as np
import requests
from sentence_transformers import SentenceTransformer
from sbw_fiabilis.logger import get_logger, set_level
import os
from dotenv import load_dotenv
logger = get_logger()
class EmbeddingInterface(ABC):
"""Interface abstraite pour les services d'embedding."""
@abstractmethod
def encode(self, texts, batch_size=None, show_progress_bar=False):
pass
class LocalEmbeddingService(EmbeddingInterface):
"""Implémentation locale utilisant SentenceTransformer."""
def __init__(self):
WORKING_DIR = os.getenv("WORKING_DIR", os.path.join(os.path.dirname(__file__), "../../data/working_dir"))
HF_HOME = os.path.join(WORKING_DIR, ".hf")
os.environ["HF_HOME"] = HF_HOME
self.model = SentenceTransformer("sentence-transformers/LaBSE", revision="836121a0533e5664b21c7aacc5d22951f2b8b25b", cache_folder=HF_HOME)
logger.info(f"LocalEmbeddingService configuré")
def encode(self, texts, batch_size=32, show_progress_bar=False):
return self.model.encode(texts, batch_size=batch_size, show_progress_bar=show_progress_bar)
class APIEmbeddingService(EmbeddingInterface):
"""Implémentation utilisant l'API Hugging Face."""
def __init__(self):
self.api_url = os.getenv("EMBEDDING_API_URL")
self.api_key = os.getenv("EMBEDDING_API_KEY")
if not self.api_url or not self.api_key:
raise ValueError("EMBEDDING_API_URL et EMBEDDING_API_KEY doivent être définis")
self.headers = {
"Accept": "application/json",
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json"
}
logger.info(f"ApiEmbeddingService configuré")
def _query_api(self, payload):
try:
response = requests.post(self.api_url, headers=self.headers, json=payload, timeout=30)
response.raise_for_status()
return response.json()
except requests.exceptions.RequestException as e:
logger.error(f"Erreur lors de la requête API: {e}")
raise
def encode(self, texts, batch_size=32, show_progress_bar=False):
if not texts:
return np.array([])
all_embeddings = []
total_texts = len(texts)
logger.info(f"Encodage via API: {total_texts} textes en lots de {batch_size}")
for i in range(0, total_texts, batch_size):
batch = texts[i:i + batch_size]
payload = {
"inputs": batch,
"parameters": {}
}
response = self._query_api(payload)
            # Handle the different API response formats
if isinstance(response, list):
batch_embeddings = response
elif isinstance(response, dict) and "embeddings" in response:
batch_embeddings = response["embeddings"]
else:
raise ValueError(f"Format de réponse API inattendu: {type(response)}")
all_embeddings.extend(batch_embeddings)
logger.info(f" Lot traité: {min(i + batch_size, total_texts)}/{total_texts}")
return all_embeddings
def test():
logger = get_logger()
set_level("DEBUG")
load_dotenv()
texts = ["toto", "tata"]
service = LocalEmbeddingService()
embeddings = service.encode(texts)
logger.info(embeddings[0][:5])
logger.info(embeddings[1][:5])
service = APIEmbeddingService()
embeddings = service.encode(texts)
logger.info(embeddings[0][:5])
logger.info(embeddings[1][:5])
if __name__ == "__main__":
test()
</code></pre>
|
<p>Hi rpelissier <img src="https://emoji.discourse-cdn.com/apple/waving_hand.png?v=14" title=":waving_hand:" class="emoji" alt=":waving_hand:" loading="lazy" width="20" height="20"></p>
<p>Sorry about the hassle here. I did a deep dive on the issue and I think I know what’s going on: the model deployed in your inference endpoint uses the <a href="https://github.com/huggingface/text-embeddings-inference/">TEI server engine</a>. Whereas the local example uses sentence-transformers. And unfortunately there’s a mismatch between the implementations. This model is one of the few that uses a Dense module, which is supported in sentence transformers but not in TEI.</p>
<p>So when the model is run with TEI (and therefore on inference endpoints), it’s equivalent to doing this in sentence transformers:</p>
<pre data-code-wrap="python"><code class="lang-python">from sentence_transformers import SentenceTransformer
import torch
sentences = ["This is an example sentence", "Each sentence is converted"]
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on {device}.")
model = SentenceTransformer("sentence-transformers/LaBSE").to(device)
embeddings = model.encode(sentences)
print("default", embeddings)
edited_model = SentenceTransformer("sentence-transformers/LaBSE").to(device)
del edited_model[2]
embeddings = edited_model.encode(sentences)
print("del model[2]:", embeddings)
</code></pre>
<p>this gives the output:</p>
<pre><code class="lang-auto">default [[ 0.02882483 -0.00602379 -0.05947006 ... -0.03002251 -0.029607
0.00067482]
[-0.05550232 0.02546485 -0.02157257 ... 0.02932104 0.0115004
-0.00848789]]
del model[2]: [[-0.00814162 0.01150823 -0.01516913 ... -0.02249936 0.02313923
-0.02578063]
[ 0.00584357 0.03796612 0.0039336 ... 0.03305857 0.03542801
0.0157448 ]]
</code></pre>
<p>where the former matches the results in the post above, and the latter should be similar to the model deployed on inference endpoints with TEI.</p>
<p>This is indeed not ideal and I’ve notified the maintainers of TEI so they can work on either supporting the Dense feature or alternatively clearly showing that this model isn’t supported in TEI.</p>
<p>As a potential solution, when you deploy this model on Inference Endpoints, you can select the “Default” container instead of the TEI one. The default container is a simple wrapper around the sentence transformers library, so it’s not as performant as TEI, but it should give you the correct embeddings.</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38.jpeg" data-download-href="/uploads/short-url/xy9ZlUYMEuvitj2EqzObamLN61W.jpeg?dl=1" title="Screenshot 2025-06-25 at 15.33.07"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_689x229.jpeg" alt="Screenshot 2025-06-25 at 15.33.07" data-base62-sha1="xy9ZlUYMEuvitj2EqzObamLN61W" width="689" height="229" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_689x229.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_1033x343.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/eb244e306eb3c5701a04f6566ced5e82ff430d38_2_1378x458.jpeg 2x" data-dominant-color="F1F2F3"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">Screenshot 2025-06-25 at 15.33.07</span><span class="informations">2558×852 125 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<p>Hopefully this helps <img src="https://emoji.discourse-cdn.com/apple/raising_hands.png?v=14" title=":raising_hands:" class="emoji" alt=":raising_hands:" loading="lazy" width="20" height="20"></p>
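<p>For completeness, one way to check whether a deployed endpoint applies the full sentence-transformers pipeline (including the Dense layer that TEI skipped at the time of this thread) is to compare its vectors against a local run with cosine similarity. The sketch below is illustrative only: <code>EMBEDDING_API_URL</code> and <code>EMBEDDING_API_KEY</code> are placeholder environment variables for a TEI-style endpoint that returns a list of vectors.</p>
<pre data-code-wrap="python"><code class="lang-python">import os
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

# Local reference embeddings: sentence-transformers applies every module,
# including the Dense layer.
local = SentenceTransformer("sentence-transformers/LaBSE").encode(sentences)

# Remote embeddings from the deployed endpoint (placeholder URL and token).
resp = requests.post(
    os.environ["EMBEDDING_API_URL"],
    headers={"Authorization": f"Bearer {os.environ['EMBEDDING_API_KEY']}"},
    json={"inputs": sentences},
    timeout=30,
)
resp.raise_for_status()
remote = np.array(resp.json())

# Cosine similarity per sentence: values near 1.0 mean the endpoint matches
# the local pipeline; low values suggest a layer is being dropped.
for l, r in zip(local, remote):
    cos = float(np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r)))
    print(f"cosine similarity: {cos:.4f}")
</code></pre>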
|
What are the latest Open Source Speech To Text Models with a focus on real-time
|
https://discuss.huggingface.co/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530
| 160,530
| 13
|
2025-06-24T13:20:05.312000Z
|
[
{
"id": 229163,
"name": "Dizzy",
"username": "Dizzy22",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/9fc29f/{size}.png",
"created_at": "2025-06-24T13:20:05.395Z",
"cooked": "<p>Hey, do you know current models that can also be executed locally, i.e. not in the cloud</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-24T13:24:11.780Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 242,
"reads": 10,
"readers_count": 9,
"score": 1157,
"yours": false,
"topic_id": 160530,
"topic_slug": "what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time",
"display_username": "Dizzy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97797,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229166,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-24T13:33:46.771Z",
"cooked": "<p>When it comes to locally executable models, the Whisper series seems to have a lot of know-how. However, there are other options as well.</p>\n<p>In terms of speed, FastRTC excels in real-time performance, but it’s quite specialized. Or rather, it’s cloud-based?</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/hf-audio/open_asr_leaderboard\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/hf-audio/open_asr_leaderboard\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/9/c949282c0a62ad23d194416d8c1380be4fd90f6f_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"985D98\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/hf-audio/open_asr_leaderboard\" target=\"_blank\" rel=\"noopener\">Open ASR Leaderboard - a Hugging Face Space by hf-audio</a></h3>\n\n <p>Request evaluation of a new speech model by selecting the model name and datasets. Get a confirmation message once your request is submitted.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces?sort=trending&search=asr\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces?sort=trending&search=asr\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/f/3f219d23b16d4a243a12070474512a6d6730c841.png\" class=\"thumbnail\" data-dominant-color=\"F1F1F1\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces?sort=trending&search=asr\" target=\"_blank\" rel=\"noopener\">Spaces - Hugging Face</a></h3>\n\n <p>Discover amazing ML apps made by the community</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/gradio-app/fastrtc\">\n <header class=\"source\">\n\n <a href=\"https://github.com/gradio-app/fastrtc\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/ebc99f1e681ae5b00e0ee4253ba86a22794aaa63_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"F8F5EF\">\n\n <h3><a href=\"https://github.com/gradio-app/fastrtc\" target=\"_blank\" rel=\"noopener\">GitHub - gradio-app/fastrtc: The python library for real-time communication</a></h3>\n\n <p><span class=\"github-repo-description\">The python library for real-time communication</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-24T13:34:00.248Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 10,
"readers_count": 9,
"score": 62,
"yours": false,
"topic_id": 160530,
"topic_slug": "what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/hf-audio/open_asr_leaderboard",
"internal": false,
"reflection": false,
"title": "Open ASR Leaderboard - a Hugging Face Space by hf-audio",
"clicks": 50
},
{
"url": "https://github.com/gradio-app/fastrtc",
"internal": false,
"reflection": false,
"title": "GitHub - gradio-app/fastrtc: The python library for real-time communication",
"clicks": 8
},
{
"url": "https://huggingface.co/spaces?sort=trending&search=asr",
"internal": false,
"reflection": false,
"title": "Spaces - Hugging Face",
"clicks": 5
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229304,
"name": "Dizzy",
"username": "Dizzy22",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/9fc29f/{size}.png",
"created_at": "2025-06-25T06:49:23.774Z",
"cooked": "<p>Yes, I already have Whisper on my shortlist and it seems to be the best option. I’ve also heard about</p>\n<ul>\n<li>Kaldi</li>\n<li>DeepSpeech</li>\n<li>Vosk</li>\n<li>SpeechBrain</li>\n</ul>\n<p>Do you have any experience with these?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-25T06:51:10.213Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 160530,
"topic_slug": "what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time",
"display_username": "Dizzy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97797,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 229326,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-25T10:24:00.941Z",
"cooked": "<blockquote>\n<p>Do you have any experience with these?</p>\n</blockquote>\n<p>No.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-25T10:24:00.941Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 26,
"yours": false,
"topic_id": 160530,
"topic_slug": "what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229479,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-26T07:20:22.681Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-26T07:20:22.681Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 5.8,
"yours": false,
"topic_id": 160530,
"topic_slug": "what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-latest-open-source-speech-to-text-models-with-a-focus-on-real-time/160530/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hey, do you know of any current models that can also be executed locally, i.e. not in the cloud?</p>
|
<p>When it comes to locally executable models, the Whisper series seems to have a lot of know-how. However, there are other options as well.</p>
<p>In terms of speed, FastRTC excels in real-time performance, but it’s quite specialized. Or rather, it’s cloud-based?</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces/hf-audio/open_asr_leaderboard">
<header class="source">
<a href="https://huggingface.co/spaces/hf-audio/open_asr_leaderboard" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/9/c949282c0a62ad23d194416d8c1380be4fd90f6f_2_690x372.png" class="thumbnail" data-dominant-color="985D98" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces/hf-audio/open_asr_leaderboard" target="_blank" rel="noopener">Open ASR Leaderboard - a Hugging Face Space by hf-audio</a></h3>
<p>Request evaluation of a new speech model by selecting the model name and datasets. Get a confirmation message once your request is submitted.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces?sort=trending&search=asr">
<header class="source">
<a href="https://huggingface.co/spaces?sort=trending&search=asr" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/3/f/3f219d23b16d4a243a12070474512a6d6730c841.png" class="thumbnail" data-dominant-color="F1F1F1" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces?sort=trending&search=asr" target="_blank" rel="noopener">Spaces - Hugging Face</a></h3>
<p>Discover amazing ML apps made by the community</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox githubrepo" data-onebox-src="https://github.com/gradio-app/fastrtc">
<header class="source">
<a href="https://github.com/gradio-app/fastrtc" target="_blank" rel="noopener">github.com</a>
</header>
<article class="onebox-body">
<div class="github-row" data-github-private-repo="false">
<img width="690" height="344" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/b/ebc99f1e681ae5b00e0ee4253ba86a22794aaa63_2_690x344.png" class="thumbnail" data-dominant-color="F8F5EF">
<h3><a href="https://github.com/gradio-app/fastrtc" target="_blank" rel="noopener">GitHub - gradio-app/fastrtc: The python library for real-time communication</a></h3>
<p><span class="github-repo-description">The python library for real-time communication</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
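<p>As a concrete starting point for a local setup, here is a minimal sketch of running a Whisper model entirely on your own machine with the transformers pipeline. The model size (<code>openai/whisper-small</code>) and the <code>audio.wav</code> filename are illustrative assumptions, not something from this thread.</p>
<pre data-code-wrap="python"><code class="lang-python"># Requires: pip install transformers torch  (plus ffmpeg for audio decoding)
from transformers import pipeline

# The model is downloaded once, then inference runs fully offline.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# "audio.wav" is a placeholder for any local audio file.
result = asr("audio.wav", return_timestamps=True)
print(result["text"])
</code></pre>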
|
Unauthorized Access Token
|
https://discuss.huggingface.co/t/unauthorized-access-token/160609
| 160,609
| 5
|
2025-06-25T09:01:15.843000Z
|
[
{
"id": 229317,
"name": "Philip Mockridge",
"username": "FreeRoss",
"avatar_template": "/user_avatar/discuss.huggingface.co/freeross/{size}/50057_2.png",
"created_at": "2025-06-25T09:01:15.929Z",
"cooked": "<p>Hi,</p>\n<p>Thanks in advance if you’re able to help out.</p>\n<ul>\n<li><strong>All</strong> the code that leads to the problem:</li>\n</ul>\n<pre><code class=\"lang-auto\">curl -H \"Authorization: Bearer hf_<...>bfQ\" https://huggingface.co/api/whoami\n</code></pre>\n<ul>\n<li>The <strong>full error message</strong>:</li>\n</ul>\n<pre><code class=\"lang-auto\">{\"error\":\"Invalid credentials in Authorization header\"}\n</code></pre>\n<ul>\n<li>\n<p>Provide the version of the library you are using:<br>\nI’m not using a library for this</p>\n</li>\n<li>\n<p>If you have tried something in particular to solve your problem, don’t hesitate to mention it as well:<br>\nI tried to use the credentials initially in an n8n workflow → http request node. The curl is the simplest way to express this problem.<br>\nPlease find attached shot of the tokens I setup:<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35.png\" data-download-href=\"/uploads/short-url/64QPWVeG7fB9CL7BuCvBerWNViZ.png?dl=1\" title=\"Huggingface access tokens - Screenshot from 2025-06-25 16-55-52\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_690x211.png\" alt=\"Huggingface access tokens - Screenshot from 2025-06-25 16-55-52\" data-base62-sha1=\"64QPWVeG7fB9CL7BuCvBerWNViZ\" width=\"690\" height=\"211\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_690x211.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_1035x316.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_1380x422.png 2x\" data-dominant-color=\"F9F9FA\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Huggingface access tokens - Screenshot from 2025-06-25 16-55-52</span><span class=\"informations\">1496×459 61.3 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n</li>\n</ul>\n<p>The error message is clear as to what the problem is (unauthorized). What I do not know is why and/or why Huggingface server interprets the access token as anauthorized?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-25T09:01:15.929Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 38,
"reads": 11,
"readers_count": 10,
"score": 197.2,
"yours": false,
"topic_id": 160609,
"topic_slug": "unauthorized-access-token",
"display_username": "Philip Mockridge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97862,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unauthorized-access-token/160609/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229325,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-25T10:22:46.004Z",
"cooked": "<p>Try v2.</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">HF_TOKEN = \"hf_foobar\"\nimport subprocess\nsubprocess.run(f'curl -H \"Authorization: Bearer {HF_TOKEN}\" https://huggingface.co/api/whoami', shell=True)\n# {\"error\":\"Invalid credentials in Authorization header\"}\nsubprocess.run(f'curl -H \"Authorization: Bearer {HF_TOKEN}\" https://huggingface.co/api/whoami-v2', shell=True)\n# {\"type\":\"user\", ...\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-25T10:22:46.004Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 10,
"readers_count": 9,
"score": 7,
"yours": false,
"topic_id": 160609,
"topic_slug": "unauthorized-access-token",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unauthorized-access-token/160609/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229469,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-26T05:47:53.399Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-26T05:47:53.399Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 1.4,
"yours": false,
"topic_id": 160609,
"topic_slug": "unauthorized-access-token",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unauthorized-access-token/160609/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,</p>
<p>Thanks in advance if you’re able to help out.</p>
<ul>
<li><strong>All</strong> the code that leads to the problem:</li>
</ul>
<pre><code class="lang-auto">curl -H "Authorization: Bearer hf_<...>bfQ" https://huggingface.co/api/whoami
</code></pre>
<ul>
<li>The <strong>full error message</strong>:</li>
</ul>
<pre><code class="lang-auto">{"error":"Invalid credentials in Authorization header"}
</code></pre>
<ul>
<li>
<p>Provide the version of the library you are using:<br>
I’m not using a library for this</p>
</li>
<li>
<p>If you have tried something in particular to solve your problem, don’t hesitate to mention it as well:<br>
I tried to use the credentials initially in an n8n workflow → http request node. The curl is the simplest way to express this problem.<br>
Please find attached a screenshot of the tokens I set up:<br>
<div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35.png" data-download-href="/uploads/short-url/64QPWVeG7fB9CL7BuCvBerWNViZ.png?dl=1" title="Huggingface access tokens - Screenshot from 2025-06-25 16-55-52" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_690x211.png" alt="Huggingface access tokens - Screenshot from 2025-06-25 16-55-52" data-base62-sha1="64QPWVeG7fB9CL7BuCvBerWNViZ" width="690" height="211" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_690x211.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_1035x316.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/a/2a997579a8c1698a388d7b210ca7108389408e35_2_1380x422.png 2x" data-dominant-color="F9F9FA"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">Huggingface access tokens - Screenshot from 2025-06-25 16-55-52</span><span class="informations">1496×459 61.3 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
</li>
</ul>
<p>The error message is clear as to what the problem is (unauthorized). What I do not know is why the Hugging Face server interprets the access token as unauthorized.</p>
|
<p>Try the v2 endpoint. As the output below shows, the legacy <code>/api/whoami</code> route rejects the token while <code>/api/whoami-v2</code> accepts it:</p>
<pre data-code-wrap="py"><code class="lang-py">HF_TOKEN = "hf_foobar"
import subprocess
subprocess.run(f'curl -H "Authorization: Bearer {HF_TOKEN}" https://huggingface.co/api/whoami', shell=True)
# {"error":"Invalid credentials in Authorization header"}
subprocess.run(f'curl -H "Authorization: Bearer {HF_TOKEN}" https://huggingface.co/api/whoami-v2', shell=True)
# {"type":"user", ...
</code></pre>
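<p>As a cross-check, here is a minimal sketch validating the same token from Python via the <code>huggingface_hub</code> client (assuming the library is installed; the token value is a placeholder, as above):</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import whoami

HF_TOKEN = "hf_foobar"  # placeholder token

# whoami() calls the current whoami endpoint under the hood and
# should raise an HTTP error if the credentials are invalid.
print(whoami(token=HF_TOKEN))
</code></pre>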
|
Why does installing “CPU-only version of Transformers” install multiple GB of CUDA libs?
|
https://discuss.huggingface.co/t/why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs/160110
| 160,110
| 5
|
2025-06-20T17:29:08.026000Z
|
[
{
"id": 228619,
"name": "Faaiz Memon",
"username": "FaaizMemon",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/f/8e7dd6/{size}.png",
"created_at": "2025-06-20T17:29:08.083Z",
"cooked": "<p>The <a href=\"https://huggingface.co/docs/transformers/en/installation?cpu-only=PyTorch#python\">doc</a> suggests that installing with the commands:</p>\n<pre><code class=\"lang-auto\">pip install 'transformers[torch]'\nuv pip install 'transformers[torch]'\n</code></pre>\n<p>will get a CPU-only install (I don’t have a GPU). So why does it have to take >2GB of my disk space for CUDA-specific libraries? especially if I’m going to run this in a docker-type environment, I’m interested to know if it’s possible to install without the GBs of CUDA libraries. If that breaks the transformers functionality, I would be interested in editing the docs accordingly.</p>\n<p>I do realize that it’s getting installed because of the torch, not because of transformers itself, but it would be nice to know if there’s a way to slim this down when it’s not needed.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-20T17:30:57.867Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 114,
"reads": 7,
"readers_count": 6,
"score": 556.4,
"yours": false,
"topic_id": 160110,
"topic_slug": "why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs",
"display_username": "Faaiz Memon",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/en/installation?cpu-only=PyTorch#python",
"internal": false,
"reflection": false,
"title": "Installation",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90281,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs/160110/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228661,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-21T00:58:16.025Z",
"cooked": "<p>The Transoformers library also works with PyTorch for CPUs. However, if you install CUDA and then run <code>pip install torch</code>, the CUDA version will be installed. I think you can make it slimmer by installing PyTorch for CPU first somehow, and then installing Transoformers with <code>pip install transoformers</code>.<br>\n<a href=\"https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia\" class=\"onebox\" target=\"_blank\" rel=\"noopener\">https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia</a><br>\n<a href=\"https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch\" class=\"onebox\" target=\"_blank\" rel=\"noopener\">https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch</a></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-21T01:03:16.698Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 160110,
"topic_slug": "why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia",
"internal": false,
"reflection": false,
"title": null,
"clicks": 15
},
{
"url": "https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch",
"internal": false,
"reflection": false,
"title": null,
"clicks": 11
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs/160110/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229188,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-24T14:31:22.261Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-24T14:31:22.261Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 160110,
"topic_slug": "why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-does-installing-cpu-only-version-of-transformers-install-multiple-gb-of-cuda-libs/160110/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>The <a href="https://huggingface.co/docs/transformers/en/installation?cpu-only=PyTorch#python">doc</a> suggests that installing with the commands:</p>
<pre><code class="lang-auto">pip install 'transformers[torch]'
uv pip install 'transformers[torch]'
</code></pre>
<p>will get a CPU-only install (I don’t have a GPU). So why does it have to take &gt;2GB of my disk space for CUDA-specific libraries? Especially since I’m going to run this in a docker-type environment, I’d like to know whether it’s possible to install without the GBs of CUDA libraries. If that breaks the transformers functionality, I would be interested in editing the docs accordingly.</p>
<p>I do realize that it’s getting installed because of the torch, not because of transformers itself, but it would be nice to know if there’s a way to slim this down when it’s not needed.</p>
|
<p>The Transformers library also works with PyTorch for CPUs. However, if you install CUDA and then run <code>pip install torch</code>, the CUDA version will be installed. I think you can make it slimmer by installing PyTorch for CPU first somehow, and then installing Transformers with <code>pip install transformers</code>.<br>
<a href="https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia" class="onebox" target="_blank" rel="noopener">https://stackoverflow.com/questions/78947332/how-to-install-torch-without-nvidia</a><br>
<a href="https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch" class="onebox" target="_blank" rel="noopener">https://stackoverflow.com/questions/51730880/where-do-i-get-a-cpu-only-version-of-pytorch</a></p>
|
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
|
https://discuss.huggingface.co/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955
| 159,955
| 10
|
2025-06-19T11:58:46.833000Z
|
[
{
"id": 228375,
"name": "Adam BEN KHALIFA",
"username": "Adam-Ben-Khalifa",
"avatar_template": "/user_avatar/discuss.huggingface.co/adam-ben-khalifa/{size}/49687_2.png",
"created_at": "2025-06-19T11:58:46.893Z",
"cooked": "<p>Hi,</p>\n<p>I’m new to HF dataset and I tried to create datasets based on data versioned in lakeFS (MinIO S3 bucket as storage backend)<br>\nHere I’m using ±30000 PIL image from MNIST data however it is taking around 12min to execute, which is a lot!<br>\nFrom what I understand, it is loading the images into cache then building the dataset.<br>\n– Please find bellow the execution screenshot –</p>\n<p>Is there a way to optimize this or am I doing something wrong?</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f.jpeg\" data-download-href=\"/uploads/short-url/rb3cpe8KbicCefVedeejVaoE9yf.jpeg?dl=1\" title=\"Sans-titre-2025-04-03-1529(4)\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_376x500.jpeg\" alt=\"Sans-titre-2025-04-03-1529(4)\" data-base62-sha1=\"rb3cpe8KbicCefVedeejVaoE9yf\" width=\"376\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_376x500.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_564x750.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_752x1000.jpeg 2x\" data-dominant-color=\"191A1B\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Sans-titre-2025-04-03-1529(4)</span><span class=\"informations\">2179×2892 574 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-19T11:58:46.893Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 31,
"reads": 8,
"readers_count": 7,
"score": 171.6,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "Adam BEN KHALIFA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97330,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228381,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-19T12:45:45.961Z",
"cooked": "<p>Hmm… There is not much information available.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/datasets/issues/6478\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/datasets/issues/6478\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/datasets</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/datasets/issues/6478\" target=\"_blank\" rel=\"noopener\">How to load data from lakefs</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2023-12-06\" data-time=\"09:04:11\" data-timezone=\"UTC\">09:04AM - 06 Dec 23 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-07-03\" data-time=\"19:13:56\" data-timezone=\"UTC\">07:13PM - 03 Jul 24 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/d710055071\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/5/d/5d4c722920d134ac1af01cb1b19f8cd71758070b.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"CDCDCA\">\n d710055071\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">My dataset is stored on the company's lakefs server. How can I write code to loa<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">d the dataset? It would be great if I could provide code examples or provide some references</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-19T12:45:45.961Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/issues/6478",
"internal": false,
"reflection": false,
"title": "How to load data from lakefs · Issue #6478 · huggingface/datasets · GitHub",
"clicks": 3
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228459,
"name": "not-lain",
"username": "not-lain",
"avatar_template": "/user_avatar/discuss.huggingface.co/not-lain/{size}/23122_2.png",
"created_at": "2025-06-19T22:53:55.820Z",
"cooked": "<p><a class=\"mention\" href=\"/u/adam-ben-khalifa\">@Adam-Ben-Khalifa</a> you can try to load the data in streaming mode, also after you converted the data into the datasets library consider saving it locally or pushing it to the hub</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-19T22:53:55.820Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 36.4,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "not-lain",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 38692,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228562,
"name": "Adam BEN KHALIFA",
"username": "Adam-Ben-Khalifa",
"avatar_template": "/user_avatar/discuss.huggingface.co/adam-ben-khalifa/{size}/49687_2.png",
"created_at": "2025-06-20T11:04:13.918Z",
"cooked": "<p>I’m saving the dataset locally, the delay is only at the first time creating it.<br>\nAlso I tried streaming and multiprocessing but I’m not seeing a difference, take a look</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/0/f/0fa755999b80e79d7f2d7a0402d7b6e1b8195645.png\" data-download-href=\"/uploads/short-url/2etFO12yzCV9x6CwmFn2rwbcpfL.png?dl=1\" title=\"Capture d’écran du 2025-06-20 13-00-28\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/f/0fa755999b80e79d7f2d7a0402d7b6e1b8195645_2_605x499.png\" alt=\"Capture d’écran du 2025-06-20 13-00-28\" data-base62-sha1=\"2etFO12yzCV9x6CwmFn2rwbcpfL\" width=\"605\" height=\"499\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/f/0fa755999b80e79d7f2d7a0402d7b6e1b8195645_2_605x499.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/f/0fa755999b80e79d7f2d7a0402d7b6e1b8195645_2_907x748.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/0/f/0fa755999b80e79d7f2d7a0402d7b6e1b8195645.png 2x\" data-dominant-color=\"151515\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Capture d’écran du 2025-06-20 13-00-28</span><span class=\"informations\">1048×866 53.4 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-20T11:04:13.918Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "Adam BEN KHALIFA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97330,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 38692,
"username": "not-lain",
"name": "not-lain",
"avatar_template": "/user_avatar/discuss.huggingface.co/not-lain/{size}/23122_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228565,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-20T11:14:16.789Z",
"cooked": "<p><code>imagefolder</code> is mainly for small image datasets, so I don’t think it’s very fast.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/datasets/issues/5317\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/datasets/issues/5317\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/datasets</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/datasets/issues/5317\" target=\"_blank\" rel=\"noopener\">`ImageFolder` performs poorly with large datasets</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2022-12-01\" data-time=\"00:04:21\" data-timezone=\"UTC\">12:04AM - 01 Dec 22 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/salieri\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/0/80a7c91e745418803661ab7a1bdd28d9f123b392.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"AA5341\">\n salieri\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">### Describe the bug\n\nWhile testing image dataset creation, I'm seeing significa<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">nt performance bottlenecks with imagefolders when scanning a directory structure with large number of images.\n\n\n## Setup\n* Nested directories (5 levels deep)\n* 3M+ images\n* 1 `metadata.jsonl` file\n\n\n## Performance Degradation Point 1\n\nDegradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85). \n\nOne scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.\n\nAs a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.\n\n\n## Performance Degradation Point 2\n\nThe second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`. \n\nIt runs for a long time (60min+), consuming significant amounts of RAM – even more than the point 1 above. 
Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code based bottleneck there that could be sorted out.\n\n### Steps to reproduce the bug\n\n```python\nfrom datasets import load_dataset\nimport os\nimport huggingface_hub\n\ndataset = load_dataset(\n 'imagefolder',\n data_dir='/some/path',\n # just to spell it out:\n split=None,\n drop_labels=True,\n keep_in_memory=False\n)\n\ndataset.push_to_hub('account/dataset', private=True)\n```\n\n### Expected behavior\n\nWhile it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets.\n\nOr perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does?\n\n### Environment info\n\n- `datasets` version: 2.7.1\n- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5\n- Python version: 3.7.10\n- PyArrow version: 10.0.1\n- Pandas version: 1.3.5</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"60131\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/s/d9b06d/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/extremely-slow-data-loading-of-imagefolder/60131\">Extremely slow data loading of imagefolder</a> <a class=\"badge-category__wrapper \" href=\"/c/datasets/10\"><span data-category-id=\"10\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the datasets library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Datasets</span></span></a>\n </div>\n <blockquote>\n Hi, I’m new to the Huggingface’s Datasets and I’m trying to train controlnet for stablediffusion on a custom dataset with approximately 300k images, the size of these images is (768, 768). \nNow, I stuck in following lines of code: \ndataset = load_dataset(\"imagefolder\", data_dir=\"path/to/the/dataset\")\nprint(dataset['train'][0])\n\nThen, I have few questions. \n\nDoes imagefolder load images (load and decode) in memory at setup, if it is, can I disable it?\nAre there any implicit process Datasets do wh…\n </blockquote>\n</aside>\n<aside class=\"quote\" data-post=\"1\" data-topic=\"81265\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/amithm3/48/25714_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/standard-way-to-upload-huge-dataset/81265\">Standard way to upload huge dataset</a> <a class=\"badge-category__wrapper \" href=\"/c/datasets/10\"><span data-category-id=\"10\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the datasets library. 
You can also file an issue.\"><span class=\"badge-category__name\">🤗Datasets</span></span></a>\n </div>\n <blockquote>\n I have a huge (100GB+) dataset of audio (.wav files) and its respective metadata I was able to easily load the dataset using load_dataset and uploaded it using push_to_hub which converts it to a parquet file what is the best way to upload such large dataset (particularly images and audio) I want to be able to use streaming with it And update metadata without having to reupload the entire dataset\n </blockquote>\n</aside>\n",
"post_number": 5,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-20T11:14:16.789Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 7,
"readers_count": 6,
"score": 46.4,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/issues/5317",
"internal": false,
"reflection": false,
"title": "`ImageFolder` performs poorly with large datasets · Issue #5317 · huggingface/datasets · GitHub",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/extremely-slow-data-loading-of-imagefolder/60131",
"internal": true,
"reflection": false,
"title": "Extremely slow data loading of imagefolder",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/standard-way-to-upload-huge-dataset/81265",
"internal": true,
"reflection": false,
"title": "Standard way to upload huge dataset",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/5",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228574,
"name": "Adam BEN KHALIFA",
"username": "Adam-Ben-Khalifa",
"avatar_template": "/user_avatar/discuss.huggingface.co/adam-ben-khalifa/{size}/49687_2.png",
"created_at": "2025-06-20T11:47:07.871Z",
"cooked": "<p>This is helpful, I didn’t see these posts since I didn’t consider the data I’m testing with large (around 30k images ~ 9MB total)<br>\nI’ll check them and post an update<br>\nThanks!</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-20T11:47:07.871Z",
"reply_count": 0,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "Adam BEN KHALIFA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97330,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228972,
"name": "Adam BEN KHALIFA",
"username": "Adam-Ben-Khalifa",
"avatar_template": "/user_avatar/discuss.huggingface.co/adam-ben-khalifa/{size}/49687_2.png",
"created_at": "2025-06-23T12:37:39.183Z",
"cooked": "<h3><a name=\"p-228972-update-1\" class=\"anchor\" href=\"#p-228972-update-1\"></a>> Update</h3>\n<p>The bottleneck, from what I understand, was making one network request per file</p>\n<p>For 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance</p>\n<p>Using webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-23T12:37:39.183Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 4,
"readers_count": 3,
"score": 40.8,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "Adam BEN KHALIFA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97330,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/8",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 229046,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-24T00:37:45.162Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 9,
"post_type": 3,
"posts_count": 8,
"updated_at": "2025-06-24T00:37:45.162Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 159955,
"topic_slug": "creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/creating-a-hf-dataset-from-lakefs-with-s3-storage-takes-too-much-time/159955/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,</p>
<p>I’m new to HF datasets and I’m trying to create datasets from data versioned in lakeFS (with a MinIO S3 bucket as the storage backend).<br>
Here I’m using ±30000 PIL images from MNIST, but it takes around 12 min to execute, which is a lot!<br>
From what I understand, it is loading the images into the cache and then building the dataset.<br>
– Please find below the execution screenshot –</p>
<p>Is there a way to optimize this or am I doing something wrong?</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f.jpeg" data-download-href="/uploads/short-url/rb3cpe8KbicCefVedeejVaoE9yf.jpeg?dl=1" title="Sans-titre-2025-04-03-1529(4)" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_376x500.jpeg" alt="Sans-titre-2025-04-03-1529(4)" data-base62-sha1="rb3cpe8KbicCefVedeejVaoE9yf" width="376" height="500" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_376x500.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_564x750.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/e/be7a8311b749d9cd070515567fb14b218d9f192f_2_752x1000.jpeg 2x" data-dominant-color="191A1B"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">Sans-titre-2025-04-03-1529(4)</span><span class="informations">2179×2892 574 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
|
<h3><a name="p-228972-update-1" class="anchor" href="#p-228972-update-1"></a>&gt; Update</h3>
<p>The bottleneck, from what I understand, was making one network request per file.</p>
<p>For 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance.</p>
<p>Using WebDataset to pack the large number of files into a few .tar files and passing “webdataset” instead of “imagefolder” to the <code>load_dataset</code> function worked perfectly (took only ~11s).</p>
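<p>A minimal sketch of that approach, with hypothetical bucket, branch, shard, and credential names (adjust everything to your own lakeFS/MinIO setup):</p>
<pre data-code-wrap="py"><code class="lang-py">from datasets import load_dataset

# Hypothetical MinIO endpoint and keys — replace with your own.
storage_options = {
    "key": "minio-access-key",
    "secret": "minio-secret-key",
    "client_kwargs": {"endpoint_url": "http://localhost:9000"},
}

# One GET per .tar shard instead of one GET per image file.
ds = load_dataset(
    "webdataset",
    data_files={"train": "s3://my-repo/main/shards/mnist-train-*.tar"},
    storage_options=storage_options,
    split="train",
)
</code></pre>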
|
MCP Server Not Starting Despite GRADIO_MCP_SERVER=True in Gradio 5.27.1+
|
https://discuss.huggingface.co/t/mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1/160132
| 160,132
| 21
|
2025-06-20T22:52:02.647000Z
|
[
{
"id": 228653,
"name": "usman fawad",
"username": "usman69",
"avatar_template": "/user_avatar/discuss.huggingface.co/usman69/{size}/49822_2.png",
"created_at": "2025-06-20T22:52:02.733Z",
"cooked": "<p>I’m trying to expose my Gradio interface as an MCP server using the latest <code>gradio[mcp]</code> package (version 5.27.1). I’ve followed all the instructions in the MCP course and docs, including setting the environment variable before execution:</p>\n<pre><code class=\"lang-auto\">$env:GRADIO_MCP_SERVER=\"True\"\npy app.py\n</code></pre>\n<p>However, the server only outputs:</p>\n<pre><code class=\"lang-auto\">Running on local URL: http://127.0.0.1:7860\n</code></pre>\n<p>and I <strong>never see</strong> the expected line:</p>\n<pre><code class=\"lang-auto\">MCP server available at: http://127.0.0.1:7860/gradio_api/mcp/sse\n</code></pre>\n<p>I confirmed:</p>\n<ul>\n<li><code>gradio==5.27.1</code> is installed</li>\n<li><code>gradio-mcp</code> is also installed</li>\n<li>I’m not using <code>mcp_server=True</code> in <code>.launch()</code> (since it’s removed in v5)</li>\n<li>Tried both <code>py</code> and <code>python</code> after setting the environment variable</li>\n<li>Tested on a fresh virtual environment</li>\n</ul>\n<p>Still, the MCP server routes <code>/gradio_api/mcp/sse</code> and <code>/schema</code> never activate.</p>\n<p>Could someone from the Gradio or MCP team help confirm if this is a bug or if something changed in v5 that isn’t reflected in the documentation?</p>\n<p>Reference: <a href=\"https://huggingface.co/learn/mcp-course/unit2/gradio-server\" class=\"inline-onebox\">Building the Gradio MCP Server - Hugging Face MCP Course</a></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-20T22:53:23.192Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 158,
"reads": 12,
"readers_count": 11,
"score": 792.4,
"yours": false,
"topic_id": 160132,
"topic_slug": "mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1",
"display_username": "usman fawad",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/learn/mcp-course/unit2/gradio-server",
"internal": false,
"reflection": false,
"title": "Building the Gradio MCP Server - Hugging Face MCP Course",
"clicks": 6
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97500,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1/160132/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228668,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-21T01:34:23.344Z",
"cooked": "<p>Hmm… Perhaps this case?</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/gradio-app/gradio/issues/11225\">\n <header class=\"source\">\n\n <a href=\"https://github.com/gradio-app/gradio/issues/11225\" target=\"_blank\" rel=\"noopener\">github.com/gradio-app/gradio</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/gradio-app/gradio/issues/11225\" target=\"_blank\" rel=\"noopener\">Erro while Connectin MCP server</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-05-20\" data-time=\"03:47:35\" data-timezone=\"UTC\">03:47AM - 20 May 25 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-05-21\" data-time=\"16:08:23\" data-timezone=\"UTC\">04:08PM - 21 May 25 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/kauabh\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/0/b/0bfdc628537aee44b593654417148478fcd3cc97.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"4E535A\">\n kauabh\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n bug\n </span>\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n Priority\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">### Describe the bug\n\nWhile to trying to connect with Gradio MCP server [code](h<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">ttps://www.gradio.app/guides/building-mcp-server-with-gradio) Getting below error. 
Even though atleast in Gradio UI tool work as it suppose to be.\n\n```\nError in post_writer: Client error '404 Not Found' for url 'http://127.0.0.1:7860/gradio_api/mcp/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404\n```\nPydantic-AI code used\n\n\n```\nfrom pydantic_ai import Agent\nfrom pydantic_ai.mcp import MCPServerHTTP\nimport asyncio \n\nserver = MCPServerHTTP(url='http://127.0.0.1:7860/gradio_api/mcp/sse') \nagent = Agent(model=model, mcp_servers=[server]) \n\n\nasync def main():\n async with agent.run_mcp_servers(): \n result = await agent.run('Count word for Hello')\n print(result.output)\n\nasyncio.run(main())\n```\nThe response on pydantic-ai gihub [issue](https://github.com/pydantic/pydantic-ai/issues/1757#event-17722963409) is following\n\nAny advice on this.\n\n`It looks like the URL is being built incorrectly: http://127.0.0.1:7860/gradio_api/mcp/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b should be http://127.0.0.1:7860/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b\n`\n\n\n\n\n### Have you searched existing issues? 🔎\n\n- [x] I have searched and found no existing issues\n\n### Reproduction\n\n```python\nimport gradio as gr\n\n```\n\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n```shell\nLatest Version of Gradio\n```</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p><a href=\"https://github.com/abidlabs\">abidlabs</a><br>\n<a href=\"https://github.com/gradio-app/gradio/issues/11225#issuecomment-2893381049\">on May 20, 2025</a><br>\nOk I’ve figured out the issue, it’s due to a breaking change introduced by the <code>mcp</code> package going from <code>mcp==1.8.1</code> to <code>mcp==1.9.0</code>. We’re going to be investigating further to figure out if this breaking change in <code>mcp</code> is intentional or a mistake, but in the meantime, I recommend pinning <code>mcp==1.8.1</code> as in this Space: <a href=\"https://huggingface.co/spaces/abidlabs/mcp_tools2\" class=\"inline-onebox\">mcp_tools - a Hugging Face Space by abidlabs</a></p>\n</blockquote>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-21T01:34:23.344Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 10,
"reads": 10,
"readers_count": 9,
"score": 67,
"yours": false,
"topic_id": 160132,
"topic_slug": "mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/gradio-app/gradio/issues/11225",
"internal": false,
"reflection": false,
"title": "Erro while Connectin MCP server · Issue #11225 · gradio-app/gradio · GitHub",
"clicks": 11
},
{
"url": "https://huggingface.co/spaces/abidlabs/mcp_tools2",
"internal": false,
"reflection": false,
"title": "mcp_tools - a Hugging Face Space by abidlabs",
"clicks": 10
},
{
"url": "https://github.com/gradio-app/gradio/issues/11225#issuecomment-2893381049",
"internal": false,
"reflection": false,
"title": "Erro while Connectin MCP server · Issue #11225 · gradio-app/gradio · GitHub",
"clicks": 1
},
{
"url": "https://github.com/abidlabs",
"internal": false,
"reflection": false,
"title": "abidlabs (Abubakar Abid) · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1/160132/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228737,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-21T16:06:35.150Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-21T16:06:35.150Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 160132,
"topic_slug": "mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/mcp-server-not-starting-despite-gradio-mcp-server-true-in-gradio-5-27-1/160132/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m trying to expose my Gradio interface as an MCP server using the latest <code>gradio[mcp]</code> package (version 5.27.1). I’ve followed all the instructions in the MCP course and docs, including setting the environment variable before execution:</p>
<pre><code class="lang-auto">$env:GRADIO_MCP_SERVER="True"
py app.py
</code></pre>
<p>However, the server only outputs:</p>
<pre><code class="lang-auto">Running on local URL: http://127.0.0.1:7860
</code></pre>
<p>and I <strong>never see</strong> the expected line:</p>
<pre><code class="lang-auto">MCP server available at: http://127.0.0.1:7860/gradio_api/mcp/sse
</code></pre>
<p>I confirmed:</p>
<ul>
<li><code>gradio==5.27.1</code> is installed</li>
<li><code>gradio-mcp</code> is also installed</li>
<li>I’m not using <code>mcp_server=True</code> in <code>.launch()</code> (since it’s removed in v5)</li>
<li>Tried both <code>py</code> and <code>python</code> after setting the environment variable</li>
<li>Tested on a fresh virtual environment</li>
</ul>
<p>Still, the MCP server routes <code>/gradio_api/mcp/sse</code> and <code>/schema</code> never activate.</p>
<p>Could someone from the Gradio or MCP team help confirm if this is a bug or if something changed in v5 that isn’t reflected in the documentation?</p>
<p>Reference: <a href="https://huggingface.co/learn/mcp-course/unit2/gradio-server" class="inline-onebox">Building the Gradio MCP Server - Hugging Face MCP Course</a></p>
|
<p>Hmm… Perhaps this case?</p><aside class="onebox githubissue" data-onebox-src="https://github.com/gradio-app/gradio/issues/11225">
<header class="source">
<a href="https://github.com/gradio-app/gradio/issues/11225" target="_blank" rel="noopener">github.com/gradio-app/gradio</a>
</header>
<article class="onebox-body">
<div class="github-row">
<div class="github-icon-container" title="Issue" data-github-private-repo="false">
<svg width="60" height="60" class="github-icon" viewBox="0 0 14 16" aria-hidden="true"><path fill-rule="evenodd" d="M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z"></path></svg>
</div>
<div class="github-info-container">
<h4>
<a href="https://github.com/gradio-app/gradio/issues/11225" target="_blank" rel="noopener">Erro while Connectin MCP server</a>
</h4>
<div class="github-info">
<div class="date">
opened <span class="discourse-local-date" data-format="ll" data-date="2025-05-20" data-time="03:47:35" data-timezone="UTC">03:47AM - 20 May 25 UTC</span>
</div>
<div class="date">
closed <span class="discourse-local-date" data-format="ll" data-date="2025-05-21" data-time="16:08:23" data-timezone="UTC">04:08PM - 21 May 25 UTC</span>
</div>
<div class="user">
<a href="https://github.com/kauabh" target="_blank" rel="noopener">
<img alt="" src="https://us1.discourse-cdn.com/hellohellohello/original/3X/0/b/0bfdc628537aee44b593654417148478fcd3cc97.jpeg" class="onebox-avatar-inline" width="20" height="20" data-dominant-color="4E535A">
kauabh
</a>
</div>
</div>
<div class="labels">
<span style="display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;">
bug
</span>
<span style="display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;">
Priority
</span>
</div>
</div>
</div>
<div class="github-row">
<p class="github-body-container">### Describe the bug
While to trying to connect with Gradio MCP server [code](https://www.gradio.app/guides/building-mcp-server-with-gradio) Getting below error. Even though atleast in Gradio UI tool work as it suppose to be.
```
Error in post_writer: Client error '404 Not Found' for url 'http://127.0.0.1:7860/gradio_api/mcp/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
```
Pydantic-AI code used
```
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP
import asyncio
server = MCPServerHTTP(url='http://127.0.0.1:7860/gradio_api/mcp/sse')
agent = Agent(model=model, mcp_servers=[server])
async def main():
async with agent.run_mcp_servers():
result = await agent.run('Count word for Hello')
print(result.output)
asyncio.run(main())
```
The response on pydantic-ai gihub [issue](https://github.com/pydantic/pydantic-ai/issues/1757#event-17722963409) is following
Any advice on this.
`It looks like the URL is being built incorrectly: http://127.0.0.1:7860/gradio_api/mcp/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b should be http://127.0.0.1:7860/gradio_api/mcp/messages/?session_id=ed478fc640e247fbbb6b171c58de322b
`
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Latest Version of Gradio
```</p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<blockquote>
<p><a href="https://github.com/abidlabs">abidlabs</a><br>
<a href="https://github.com/gradio-app/gradio/issues/11225#issuecomment-2893381049">on May 20, 2025</a><br>
Ok I’ve figured out the issue, it’s due to a breaking change introduced by the <code>mcp</code> package going from <code>mcp==1.8.1</code> to <code>mcp==1.9.0</code>. We’re going to be investigating further to figure out if this breaking change in <code>mcp</code> is intentional or a mistake, but in the meantime, I recommend pinning <code>mcp==1.8.1</code> as in this Space: <a href="https://huggingface.co/spaces/abidlabs/mcp_tools2" class="inline-onebox">mcp_tools - a Hugging Face Space by abidlabs</a></p>
</blockquote>
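<p>To make that workaround concrete, here is a minimal sketch (not the official fix): pin <code>mcp==1.8.1</code> next to <code>gradio[mcp]</code> and relaunch. The <code>letter_counter</code> tool and the file name are placeholders, and <code>mcp_server=True</code> is shown as an alternative to the <code>GRADIO_MCP_SERVER</code> environment variable, where your Gradio version supports it:</p>
<pre data-code-wrap="python"><code class="lang-python"># app.py (assumes: pip install "gradio[mcp]" "mcp==1.8.1")
import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """Placeholder tool: count occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

demo = gr.Interface(fn=letter_counter, inputs=["text", "text"], outputs="number")

if __name__ == "__main__":
    # On success the console should also print the
    # "MCP server available at: http://127.0.0.1:7860/gradio_api/mcp/sse" line.
    demo.launch(mcp_server=True)
</code></pre>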
|
Make “image” column appear first in dataset preview UI
|
https://discuss.huggingface.co/t/make-image-column-appear-first-in-dataset-preview-ui/159787
| 159,787
| 10
|
2025-06-18T09:22:03.753000Z
|
[
{
"id": 228129,
"name": "Cerveto Serrano",
"username": "joancervetoserrano",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/82dd89/{size}.png",
"created_at": "2025-06-18T09:22:03.820Z",
"cooked": "<p>Hi! <img src=\"https://emoji.discourse-cdn.com/apple/waving_hand.png?v=14\" title=\":waving_hand:\" class=\"emoji\" alt=\":waving_hand:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>I’m currently uploading a dataset that includes an <code>\"image\"</code> column (PNG files), along with some metadata columns. The dataset is loaded from a <code>.jsonl</code> file. My goal is to have the <code>\"image\"</code> column appear <strong>as the first column</strong> in the dataset card preview UI on the <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=14\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"> Hub.</p>\n<p>However, at the moment, the <code>\"image\"</code> column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.</p>\n<p>I have a couple of questions:</p>\n<ul>\n<li>Is there a way to force the dataset card to display the <code>\"image\"</code> column first?</li>\n<li>Is there currently any way to control or influence the column order in the dataset preview UI?</li>\n<li>Does the order of keys in the <code>.jsonl</code> file or the <code>features</code> argument affect the display order?</li>\n</ul>\n<p>Thanks again for your time and help! <img src=\"https://emoji.discourse-cdn.com/apple/blush.png?v=14\" title=\":blush:\" class=\"emoji\" alt=\":blush:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-18T09:22:03.820Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 6,
"readers_count": 5,
"score": 51.2,
"yours": false,
"topic_id": 159787,
"topic_slug": "make-image-column-appear-first-in-dataset-preview-ui",
"display_username": "Cerveto Serrano",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97286,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/make-image-column-appear-first-in-dataset-preview-ui/159787/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228134,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-18T10:14:53.723Z",
"cooked": "<blockquote>\n<p>Does the order of keys in the <code>.jsonl</code> file or the <code>features</code> argument affect the display order?</p>\n</blockquote>\n<p>That’s probably true for datasets that have been loaded and saved in the <code>datasets</code> library.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/huggingface/datasets/discussions/4646\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/huggingface/datasets/discussions/4646\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/d/9d546a9cbbc745421d124e771e5e47733313021c_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"F2F3F5\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/huggingface/datasets/discussions/4646\" target=\"_blank\" rel=\"noopener\">Reorder columns · huggingface datasets · Discussion #4646</a></h3>\n\n <p>Is there a way to reorder the columns in a dataset? I notice remove_columns and rename_columns and have even tried the following to no avail: def reorder_cols(sample): sample = {col: sample[col] fo...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>However, if you simply upload image files as-is, I believe the order information will be automatically supplemented, so if you want to maintain the order in the viewer, you may need to manually create a settings file.</p>\n<p>The most reliable method is to convert the data to the parquet format using the <code>datasets</code> library (simply load and save).</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/datasets/image_dataset\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/datasets/image_dataset\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/5/35e852b936c2343e04e14f5d22299d4e04d553d8_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F8F5F0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/datasets/image_dataset\" target=\"_blank\" rel=\"noopener\">Create an image dataset</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/hub/datasets-viewer-configure\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/hub/datasets-viewer-configure\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"FAF8F2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/hub/datasets-viewer-configure\" target=\"_blank\" 
rel=\"noopener\">Configure the Dataset Viewer</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-18T10:14:53.723Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 6.2,
"yours": false,
"topic_id": 159787,
"topic_slug": "make-image-column-appear-first-in-dataset-preview-ui",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/datasets-viewer-configure",
"internal": false,
"reflection": false,
"title": "Configure the Dataset Viewer",
"clicks": 0
},
{
"url": "https://huggingface.co/docs/datasets/image_dataset",
"internal": false,
"reflection": false,
"title": "Create an image dataset",
"clicks": 0
},
{
"url": "https://github.com/huggingface/datasets/discussions/4646",
"internal": false,
"reflection": false,
"title": "Reorder columns · huggingface/datasets · Discussion #4646 · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/make-image-column-appear-first-in-dataset-preview-ui/159787/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228211,
"name": "Cerveto Serrano",
"username": "joancervetoserrano",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/82dd89/{size}.png",
"created_at": "2025-06-18T19:01:32.546Z",
"cooked": "<p>Thank you!! I will check it!<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/flexed_biceps.png?v=14\" title=\":flexed_biceps:\" class=\"emoji only-emoji\" alt=\":flexed_biceps:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-18T19:01:32.546Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 159787,
"topic_slug": "make-image-column-appear-first-in-dataset-preview-ui",
"display_username": "Cerveto Serrano",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97286,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/make-image-column-appear-first-in-dataset-preview-ui/159787/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228289,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-19T07:02:17.819Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-19T07:02:17.819Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 159787,
"topic_slug": "make-image-column-appear-first-in-dataset-preview-ui",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/make-image-column-appear-first-in-dataset-preview-ui/159787/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi! <img src="https://emoji.discourse-cdn.com/apple/waving_hand.png?v=14" title=":waving_hand:" class="emoji" alt=":waving_hand:" loading="lazy" width="20" height="20"></p>
<p>I’m currently uploading a dataset that includes an <code>"image"</code> column (PNG files), along with some metadata columns. The dataset is loaded from a <code>.jsonl</code> file. My goal is to have the <code>"image"</code> column appear <strong>as the first column</strong> in the dataset card preview UI on the <img src="https://emoji.discourse-cdn.com/apple/hugs.png?v=14" title=":hugs:" class="emoji" alt=":hugs:" loading="lazy" width="20" height="20"> Hub.</p>
<p>However, at the moment, the <code>"image"</code> column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.</p>
<p>I have a couple of questions:</p>
<ul>
<li>Is there a way to force the dataset card to display the <code>"image"</code> column first?</li>
<li>Is there currently any way to control or influence the column order in the dataset preview UI?</li>
<li>Does the order of keys in the <code>.jsonl</code> file or the <code>features</code> argument affect the display order?</li>
</ul>
<p>Thanks again for your time and help! <img src="https://emoji.discourse-cdn.com/apple/blush.png?v=14" title=":blush:" class="emoji" alt=":blush:" loading="lazy" width="20" height="20"></p>
|
<blockquote>
<p>Does the order of keys in the <code>.jsonl</code> file or the <code>features</code> argument affect the display order?</p>
</blockquote>
<p>That’s probably true for datasets that have been loaded and saved in the <code>datasets</code> library.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://github.com/huggingface/datasets/discussions/4646">
<header class="source">
<img src="https://github.githubassets.com/favicons/favicon.svg" class="site-icon" width="32" height="32">
<a href="https://github.com/huggingface/datasets/discussions/4646" target="_blank" rel="noopener">GitHub</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/344;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/d/9d546a9cbbc745421d124e771e5e47733313021c_2_690x345.png" class="thumbnail" data-dominant-color="F2F3F5" width="690" height="345"></div>
<h3><a href="https://github.com/huggingface/datasets/discussions/4646" target="_blank" rel="noopener">Reorder columns · huggingface/datasets · Discussion #4646</a></h3>
<p>Is there a way to reorder the columns in a dataset? I notice remove_columns and rename_columns and have even tried the following to no avail: def reorder_cols(sample): sample = {col: sample[col] fo...</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>However, if you simply upload the image files as-is, the schema (and thus the column order) is inferred automatically, so if you want to control the order in the viewer you may need to write the configuration file by hand.</p>
<p>The most reliable method is to convert the data to the parquet format using the <code>datasets</code> library (simply load and save).</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/datasets/image_dataset">
<header class="source">
<a href="https://huggingface.co/docs/datasets/image_dataset" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/5/35e852b936c2343e04e14f5d22299d4e04d553d8_2_690x372.png" class="thumbnail" data-dominant-color="F8F5F0" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/datasets/image_dataset" target="_blank" rel="noopener">Create an image dataset</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/hub/datasets-viewer-configure">
<header class="source">
<a href="https://huggingface.co/docs/hub/datasets-viewer-configure" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png" class="thumbnail" data-dominant-color="FAF8F2" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/hub/datasets-viewer-configure" target="_blank" rel="noopener">Configure the Dataset Viewer</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
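<p>As a minimal sketch of that load-reorder-save route (the repo id and file name below are placeholders, and it assumes your <code>.jsonl</code> stores image paths or bytes under an <code>"image"</code> key):</p>
<pre data-code-wrap="python"><code class="lang-python">from datasets import load_dataset, Image

ds = load_dataset("json", data_files="train.jsonl", split="train")
ds = ds.cast_column("image", Image())  # decode the image column
# Put "image" first and keep the remaining columns in their current order.
ds = ds.select_columns(["image"] + [c for c in ds.column_names if c != "image"])
ds.push_to_hub("your-username/your-dataset")  # stored as Parquet, preserving this column order
</code></pre>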
|
Does attention_mask refer to input_ids or to labels?
|
https://discuss.huggingface.co/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820
| 159,820
| 5
|
2025-06-18T15:29:28.038000Z
|
[
{
"id": 228172,
"name": "Philo Math",
"username": "Philomath868",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png",
"created_at": "2025-06-18T15:29:28.102Z",
"cooked": "<p>Seems like a silly question, but I’m learning and can’t find anything definitive…</p>\n<p>In models where <code>input_ids</code> and <code>labels</code> may be of different length (i.e. denoising, where a span of several tokens in labels may have been replaced by a single token), should the <code>attention_mask</code> correspond to labels (so the original chunk size) or to input_ids (so resized after noising)?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T15:29:28.102Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 93,
"reads": 10,
"readers_count": 9,
"score": 417,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Philo Math",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97307,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228179,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-18T16:22:56.744Z",
"cooked": "<p>The attention_mask tells the model which positions in the input to attend to, i.e., which tokens are real vs padding. It applies only to the forward pass — specifically, how attention is computed over the input_ids.</p>\n<p>The labels are not used during attention computation — they are only used in the loss computation</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T16:22:57.025Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 37,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": "Automatically removed quote of whole previous post.",
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228183,
"name": "Philo Math",
"username": "Philomath868",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png",
"created_at": "2025-06-18T16:41:13.944Z",
"cooked": "<p>Thanks, that’s a clear and succinct explanation!</p>\n<p>But I guess my question can still stand regarding <code>decoder_input_ids</code>, in case it’s based on labels (see <a href=\"https://discuss.huggingface.co/t/what-should-decoder-input-ids-be-when-pre-training-mbart/159819\">my other question</a>, which would mean - if I understand correctly - that labels (shifted right) <strong>are</strong> used during computation, at decoder side, no?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T16:41:13.944Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 10,
"readers_count": 9,
"score": 27,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Philo Math",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/what-should-decoder-input-ids-be-when-pre-training-mbart/159819",
"internal": true,
"reflection": false,
"title": "What should decoder_input_ids be when pre-training mBART?",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97307,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 94214,
"username": "Mdrnfox",
"name": "Riley Fox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228187,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-18T17:06:29.282Z",
"cooked": "<p>My bad, I completely didn’t see that</p>\n<p>Yes, the decoder_attention_mask (or just attention_mask on decoder_input_ids ) should match the decoder input, which is usually labels shifted right.</p>\n<p>decoder_input_ids are either provided manually or auto-generated by shifting labels right.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T17:06:29.282Z",
"reply_count": 1,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 36.6,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 97307,
"username": "Philomath868",
"name": "Philo Math",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228191,
"name": "Philo Math",
"username": "Philomath868",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png",
"created_at": "2025-06-18T17:13:17.484Z",
"cooked": "<p>So in my dataset, I should include both attention_mask and decoder_attention_mask? Will the model know which mask to use at which phase? I’m a bit confused…</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T17:13:17.484Z",
"reply_count": 1,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Philo Math",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97307,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 94214,
"username": "Mdrnfox",
"name": "Riley Fox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228196,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-18T17:33:29.409Z",
"cooked": "<p>With HF Trainer, you only need to pass input_ids, attention_mask, labels</p>\n<p>If you pass labels, the model will:<br>\n1.\tAutomatically shift them to create decoder_input_ids<br>\n2.\tCreate the decoder_attention_mask to match the decoder_input_ids<br>\n3.\tHandle masking and loss computation (ignoring -100 in labels)</p>\n<p>So the full decoder setup is inferred internally — as long as you provide labels.</p>\n<p>You do not need to manually include decoder_input_ids or decoder_attention_mask — they are automatically derived at runtime by the model or tokenizer.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T17:33:29.575Z",
"reply_count": 1,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 6,
"readers_count": 5,
"score": 36.2,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": "Automatically removed quote of whole previous post.",
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 97307,
"username": "Philomath868",
"name": "Philo Math",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228199,
"name": "Philo Math",
"username": "Philomath868",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/b487fb/{size}.png",
"created_at": "2025-06-18T17:40:16.713Z",
"cooked": "<p>Thank you!</p>\n<p>So just to make it absolutely clear (just correct me if I’m wrong; ignore otherwise <img src=\"https://emoji.discourse-cdn.com/apple/wink.png?v=14\" title=\":wink:\" class=\"emoji\" alt=\":wink:\" loading=\"lazy\" width=\"20\" height=\"20\"> ): I must pass attention_mask based on the noised text (input_ids), for the encoder. I can just leave the (possibly longer) decoder_attention_mask for the trainer to handle. Great!</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-06-18T17:40:16.713Z",
"reply_count": 0,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "Philo Math",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97307,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/7",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 94214,
"username": "Mdrnfox",
"name": "Riley Fox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228275,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-19T05:40:33.060Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 8,
"post_type": 3,
"posts_count": 8,
"updated_at": "2025-06-19T05:40:33.060Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 159820,
"topic_slug": "does-attention-mask-refer-to-input-ids-or-to-labels",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/does-attention-mask-refer-to-input-ids-or-to-labels/159820/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Seems like a silly question, but I’m learning and can’t find anything definitive…</p>
<p>In models where <code>input_ids</code> and <code>labels</code> may differ in length (e.g. denoising, where a span of several tokens in the labels is replaced by a single token in the input), should the <code>attention_mask</code> correspond to the labels (the original chunk size) or to the input_ids (resized after noising)?</p>
|
<p>With the HF Trainer, you only need to pass input_ids, attention_mask, and labels.</p>
<p>If you pass labels, the model will:<br>
1. Automatically shift them to create decoder_input_ids<br>
2. Create the decoder_attention_mask to match the decoder_input_ids<br>
3. Handle masking and loss computation (ignoring -100 in labels)</p>
<p>So the full decoder setup is inferred internally — as long as you provide labels.</p>
<p>You do not need to manually include decoder_input_ids or decoder_attention_mask — they are automatically derived at runtime by the model or tokenizer.</p>
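<p>A minimal sketch of that behavior with an encoder-decoder model (BART here is purely illustrative):</p>
<pre data-code-wrap="python"><code class="lang-python">from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

enc = tok("The quick brown <mask> jumped.", return_tensors="pt")            # noised input
labels = tok("The quick brown fox jumped.", return_tensors="pt").input_ids  # clean target

# Only input_ids / attention_mask / labels are passed; with decoder_input_ids
# left as None, the model shifts the labels right internally to build them.
out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
print(out.loss)
</code></pre>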
|
Not seeing memory benefit to accelerate/FSDP2
|
https://discuss.huggingface.co/t/not-seeing-memory-benefit-to-accelerate-fsdp2/158039
| 158,039
| 18
|
2025-06-04T21:34:41.903000Z
|
[
{
"id": 225715,
"name": "hpcpony",
"username": "hpcpony",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/779978/{size}.png",
"created_at": "2025-06-04T21:34:41.982Z",
"cooked": "<p>TL;DR Why doesn’t Acclerate/FSDP seem to be doing much of anything to reduce memory in the following?</p>\n<p>I’m trying to get some hands-on and learn how to run large models across multiple nodes and/or GPUs. I’m starting with Trainer/accelerate/FSDP2 and planning to work up from there but I think I’m missing something.</p>\n<p>python 3.12.9<br>\ntorch 2.7.0<br>\ntransformers 4.52.4<br>\naccelerate 1.7.0</p>\n<p>My “toy” program to train an “empty” model:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from datasets import Dataset, DatasetDict\nfrom transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM\n\nfrom transformers import DefaultDataCollator, DataCollatorForLanguageModeling\nfrom transformers import TrainingArguments, Trainer\nimport os\n\nmodel_dir = 'NousResearch/Llama-3.2-1B'\nTRACE = False\nN = 2048\ncontext_length = 64\nbatch_size = 64\n\ndef load_datasets() :\n train_data_list = [\n {\"text\" : \"The quick brown fox jumped over the lazy dog's back t{:06d}\".format(i)} for i in range(4*N)\n ]\n eval_data_list = [\n {\"text\" : \"The quick brown fox jumped over the lazy dog's back e{:06d}\".format(i)} for i in range(N)\n ]\n datasets = DatasetDict ( # create datasets dict train and eval\n { 'train': Dataset.from_list(train_data_list),\n 'eval' : Dataset.from_list(eval_data_list)}\n )\n return datasets\n\ndef load_tokenizer(model_dir) :\n tokenizer = AutoTokenizer.from_pretrained(model_dir)\n return tokenizer\n\ndef load_model(model_dir) :\n # get just the config from the pretrained directory\n config = AutoConfig.from_pretrained(model_dir)\n model = AutoModelForCausalLM.from_config(config)\n return model\n\ndef mytrain(model_dir) :\n\n def tokenize(dataset) :\n return tokenizer(dataset['text'], padding='max_length', max_length=context_length, return_length=True)\n\n ##\n raw_datasets = load_datasets()\n if TRACE : print(\"dataset\\n\", raw_datasets)\n ##\n tokenizer = load_tokenizer(model_dir)\n if TRACE : print(\"tokenizer\\n\", tokenizer)\n ##\n tokenizer.pad_token = tokenizer.eos_token\n tokenized_datasets = raw_datasets.map(\n tokenize, batched=True, remove_columns=raw_datasets[\"train\"].column_names)\n if TRACE : print(\"tokenized_datasets\\n\", tokenized_datasets)\n ##\n data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)\n if TRACE :\n example_collated = data_collator([tokenized_datasets[\"train\"][i] for i in range(3)])\n print(\"example_collated\\n\", example_collated)\n ##\n training_args = TrainingArguments( # do this before model load for FSDP?\n output_dir=\"outputs/\",\n per_device_train_batch_size=batch_size,\n per_device_eval_batch_size=batch_size,\n num_train_epochs=10,\n logging_strategy=\"epoch\",\n eval_strategy=\"epoch\",\n save_strategy=\"no\",\n push_to_hub=False,\n disable_tqdm=True,\n deepspeed=None,\n )\n ##\n model = load_model(model_dir) # do the after TrainingArguments which sets up some stuff?\n if TRACE : print(\"model\\n\", model)\n ##\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=tokenized_datasets[\"train\"],\n eval_dataset=tokenized_datasets[\"eval\"],\n processing_class=tokenizer,\n data_collator=data_collator,\n )\n trainer.train()\n\nfrom datasets.utils.logging import disable_progress_bar\nimport torch\nif __name__ == \"__main__\" :\n disable_progress_bar()\n mytrain(\n model_dir=model_dir\n )\n torch.distributed.destroy_process_group()\n</code></pre>\n<p>I first run my test progam as simple python/pytorch; single GPU without accelerate.</p>\n<pre 
data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] CUDA_VISIBLE_DEVICES=0 python 05_acctest.py \n{'loss': 0.8924, 'grad_norm': 0.8125, 'learning_rate': 4.50390625e-05, 'epoch': 1.0}\n{'eval_loss': 2.5442957878112793, 'eval_runtime': 2.4496, 'eval_samples_per_second': 836.064, 'eval_steps_per_second': 13.063, 'epoch': 1.0}\n{'loss': 0.6293, 'grad_norm': 0.65234375, 'learning_rate': 4.00390625e-05, 'epoch': 2.0}\n{'eval_loss': 2.6600184440612793, 'eval_runtime': 2.4495, 'eval_samples_per_second': 836.094, 'eval_steps_per_second': 13.064, 'epoch': 2.0}\n .\n .\n .\n{'loss': 0.6061, 'grad_norm': 0.4921875, 'learning_rate': 3.90625e-08, 'epoch': 10.0}\n{'eval_loss': 2.8240463733673096, 'eval_runtime': 2.4496, 'eval_samples_per_second': 836.055, 'eval_steps_per_second': 13.063, 'epoch': 10.0}\n{'train_runtime': 333.183, 'train_samples_per_second': 245.871, 'train_steps_per_second': 3.842, 'train_loss': 0.6405227959156037, 'epoch': 10.0}\n</code></pre>\n<p>While it’s running I use nvidia-smi to look at the memory used</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| 0 N/A N/A 21181 C python 21372MiB |\n+-----------------------------------------------------------------------------------------+\n</code></pre>\n<p>That’s at least in the ball-park for what accelerate estimates:</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] accelerate estimate-memory NousResearch/Llama-3.2-1B\nLoading pretrained config for `NousResearch/Llama-3.2-1B` from `transformers`...\n┌────────────────────────────────────────────────────────┐\n│ Memory Usage for loading `NousResearch/Llama-3.2-1B` │\n├───────┬─────────────┬──────────┬───────────────────────┤\n│ dtype │Largest Layer│Total Size│ Training using Adam │\n├───────┼─────────────┼──────────┼───────────────────────┤\n│float32│ 1002.0 MB │ 4.6 GB │ 18.42 GB │\n│float16│ 501.0 MB │ 2.3 GB │ 9.21 GB │\n│ int8 │ 250.5 MB │ 1.15 GB │ N/A │\n│ int4 │ 125.25 MB │589.28 MB │ N/A │\n└───────┴─────────────┴──────────┴───────────────────────┘\n</code></pre>\n<p>Next I use “accelerate config” to generate a config file for 2 GPUs using FSDP2. 
(mostly with default values)</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] cat 1n2gfsdp_defaults.yaml \ncompute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: FSDP\ndowncast_bf16: 'no'\nenable_cpu_affinity: false\nfsdp_config:\n fsdp_activation_checkpointing: false\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\n fsdp_cpu_ram_efficient_loading: true\n fsdp_offload_params: false\n fsdp_reshard_after_forward: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n fsdp_version: 2\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: 'no'\nnum_machines: 1\nnum_processes: 2\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n</code></pre>\n<p>Using that file an running with accelerate…</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] CUDA_VISIBLE_DEVICES=0,1 accelerate launch --config_file 1n2gfsdp_defaults.yaml 05_acctest.py \n{'loss': 1.0797, 'grad_norm': 0.6328125, 'learning_rate': 4.5078125000000006e-05, 'epoch': 1.0}\n{'eval_loss': 2.5193161964416504, 'eval_runtime': 1.376, 'eval_samples_per_second': 1488.383, 'eval_steps_per_second': 11.628, 'epoch': 1.0}\n{'loss': 0.6584, 'grad_norm': 0.4609375, 'learning_rate': 4.0078125e-05, 'epoch': 2.0}\n{'eval_loss': 2.5891079902648926, 'eval_runtime': 1.3771, 'eval_samples_per_second': 1487.218, 'eval_steps_per_second': 11.619, 'epoch': 2.0}\n .\n .\n .\n{'loss': 0.6096, 'grad_norm': 0.462890625, 'learning_rate': 7.8125e-08, 'epoch': 10.0}\n{'eval_loss': 2.754133462905884, 'eval_runtime': 1.3776, 'eval_samples_per_second': 1486.605, 'eval_steps_per_second': 11.614, 'epoch': 10.0}\n{'train_runtime': 178.9799, 'train_samples_per_second': 457.705, 'train_steps_per_second': 3.576, 'train_loss': 0.6661747217178344, 'epoch': 10.0}\n</code></pre>\n<p>… nvidia-smi memory during the computation…</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| 0 N/A N/A 24421 C ...AI/training-4.52.4/bin/python 21384MiB |\n| 1 N/A N/A 24422 C ...AI/training-4.52.4/bin/python 21388MiB |\n+-----------------------------------------------------------------------------------------+\n</code></pre>\n<p>Next a config file with 4 GPUs…</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] cat 1n4gfsdp_defaults.yaml \ncompute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: FSDP\ndowncast_bf16: 'no'\nenable_cpu_affinity: false\nfsdp_config:\n fsdp_activation_checkpointing: false\n fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\n fsdp_cpu_ram_efficient_loading: true\n fsdp_offload_params: false\n fsdp_reshard_after_forward: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n fsdp_version: 2\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: 'no'\nnum_machines: 1\nnum_processes: 4\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n</code></pre>\n<p>… execute using accelerate…</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">[gpu2:training] CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --config_file 1n4gfsdp_defaults.yaml 05_acctest.py 
\n{'loss': 1.373, 'grad_norm': 0.458984375, 'learning_rate': 4.515625e-05, 'epoch': 1.0}\n{'eval_loss': 2.402463912963867, 'eval_runtime': 0.6972, 'eval_samples_per_second': 2937.372, 'eval_steps_per_second': 11.474, 'epoch': 1.0}\n{'loss': 0.7474, 'grad_norm': 0.435546875, 'learning_rate': 4.0156250000000004e-05, 'epoch': 2.0}\n{'eval_loss': 2.3128156661987305, 'eval_runtime': 0.6946, 'eval_samples_per_second': 2948.607, 'eval_steps_per_second': 11.518, 'epoch': 2.0}\n .\n .\n .\n{'loss': 0.6214, 'grad_norm': 0.30078125, 'learning_rate': 1.5625e-07, 'epoch': 10.0}\n{'eval_loss': 2.432434320449829, 'eval_runtime': 0.694, 'eval_samples_per_second': 2950.801, 'eval_steps_per_second': 11.527, 'epoch': 10.0}\n{'train_runtime': 89.6101, 'train_samples_per_second': 914.182, 'train_steps_per_second': 3.571, 'train_loss': 0.718875628709793, 'epoch': 10.0}\n</code></pre>\n<p>… nvidia-smi while executing…</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| 0 N/A N/A 25570 C ...AI/training-4.52.4/bin/python 20526MiB |\n| 1 N/A N/A 25571 C ...AI/training-4.52.4/bin/python 20146MiB |\n| 2 N/A N/A 25572 C ...AI/training-4.52.4/bin/python 20146MiB |\n| 3 N/A N/A 25573 C ...AI/training-4.52.4/bin/python 20146MiB |\n+-----------------------------------------------------------------------------------------+\n</code></pre>\n<p>Clearly something is happening; I’m getting a performance benefit from using more GPUs (almost linear!). But, I’m not seeing a substantial improvement in memory usage.</p>\n<ol>\n<li>Is my config file missing something? Are there better parameters that facilitate memory savings?</li>\n<li>Can I somehow get accelerate to dump what it thinks it’s doing (vs. 
what I specified in the config file)?</li>\n<li>Can I somehow dump the wrapped model to see what FSDP has done?</li>\n</ol>\n<p>===============================================================</p>\n<p>I did a similar experiment with bloom-3b just to see if it made any difference, and things still seem strange.</p>\n<pre data-code-wrap=\"shell\"><code class=\"lang-shell\">+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| 0 N/A N/A 37058 C python 74748MiB |\n+-----------------------------------------------------------------------------------------+\n\n┌────────────────────────────────────────────────────┐\n│ Memory Usage for loading `bigscience/bloom-3b` │\n├───────┬─────────────┬──────────┬───────────────────┤\n│ dtype │Largest Layer│Total Size│Training using Adam│\n├───────┼─────────────┼──────────┼───────────────────┤\n│float32│ 2.39 GB │ 11.19 GB │ 44.74 GB │\n│float16│ 1.2 GB │ 5.59 GB │ 22.37 GB │\n│ int8 │ 612.5 MB │ 2.8 GB │ N/A │\n│ int4 │ 306.25 MB │ 1.4 GB │ N/A │\n└───────┴─────────────┴──────────┴───────────────────┘\n\n+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| 0 N/A N/A 251138 C ...AI/training-4.52.4/bin/python 53922MiB |\n| 1 N/A N/A 251139 C ...AI/training-4.52.4/bin/python 53538MiB |\n| 2 N/A N/A 251140 C ...AI/training-4.52.4/bin/python 53538MiB |\n| 3 N/A N/A 251141 C ...AI/training-4.52.4/bin/python 53538MiB |\n+-----------------------------------------------------------------------------------------+\n</code></pre>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-04T21:34:41.982Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 146,
"reads": 4,
"readers_count": 3,
"score": 700.8,
"yours": false,
"topic_id": 158039,
"topic_slug": "not-seeing-memory-benefit-to-accelerate-fsdp2",
"display_username": "hpcpony",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96043,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-seeing-memory-benefit-to-accelerate-fsdp2/158039/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225774,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-05T06:24:05.499Z",
"cooked": "<p>I don’t really understand how multi-GPU environments work…</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/pytorch/pytorch/issues/147168\">\n <header class=\"source\">\n\n <a href=\"https://github.com/pytorch/pytorch/issues/147168\" target=\"_blank\" rel=\"noopener\">github.com/pytorch/pytorch</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/pytorch/pytorch/issues/147168\" target=\"_blank\" rel=\"noopener\">[FSDP2] The evil `record_stream` in c10d causes FSDP2 to over-allocate GPU memory</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-02-14\" data-time=\"01:42:21\" data-timezone=\"UTC\">01:42AM - 14 Feb 25 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-03-08\" data-time=\"20:00:15\" data-timezone=\"UTC\">08:00PM - 08 Mar 25 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/leonardo0lyj\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/3/236a0034cd17360fc1da11117ce7c06ec6b3cd73.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"C6B180\">\n leonardo0lyj\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n oncall: distributed\n </span>\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n module: c10d\n </span>\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n module: fsdp\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">Hey Andrew @awgu,\n\nAs a big fan of FSDP2, I find an potential bug 😄\n\n## Demand:\n<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">- No inter-stream memory fragmentation (incurred by copy in streams)\n- Explicit Prefetch\n- CPU runs a head of GPU by a lot\n\n## `_set_unshard_async_op(True)`\n\nTo satisfy these demands, FSDP2 has to turn on [`_set_unshard_async_op(True)`](https://github.com/pytorch/pytorch/blob/20a369aa3abb6083600d5b22fcd8ba6e861c3959/torch/distributed/fsdp/_fully_shard/_fully_shard.py#L413) with explicit prefetch `set_modules_to_forward_prefetch` and `set_modules_to_backward_prefetch`.\n\n## Memory Over-Allocation\n\nThen memory over-allocation happens like this:\n\n\n\nwith memory traces:\n\n\n\n\n\n\n## Root Cause\n\nAs known to all, these memory over-allocations are caused by the evil `tensor.record_stream(ncclStream)`. 
Although FSDP2 tried to avoid this evil originated from FSDP1, such `record_stream` still is [embedded in all c10d collectives](https://github.com/pytorch/pytorch/blob/0acbf8039abccfc17f9c8529d217209db5a7cc85/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L5373) (when `async_op=True`). Therefore, FSDP2 still suffers over-allocation from this evil in c10d.\n\n## Candidate Solution\n\nNot sure how can we avoid the `record_stream` even when `async_op=True`?\n\nIMO, candidate solutions are below:\n1. Make `TORCH_NCCL_AVOID_RECORD_STREAMS=True` as an default value, getting rid of the `record_stream` in c10d. (Safety should be good without `record_stream`, as collective with `async_op=True` usually starts from allocation stream and ends at allocation stream, or users indeed know how to [manually sync streams](https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html).)\n\n2. Make `TORCH_NCCL_AVOID_RECORD_STREAMS=True` an advanced option to each collective, such as `dist.all_gather(..., _avoid_record_stream=True)`. This limits the scope of environmental `TORCH_NCCL_AVOID_RECORD_STREAMS` to each specific collective.\n\n3. Use only `dist.all_gather(async_op=False)` in FSDP2, but [changes the `current_stream`](https://github.com/pytorch/pytorch/blob/20a369aa3abb6083600d5b22fcd8ba6e861c3959/torch/distributed/fsdp/_fully_shard/_fsdp_param_group.py#L92) to the `all_gather_stream` such that all gather still allocates/frees in `current_stream` while runs in `all_gather_stream` and overlaps with `current_stream`, just like `async_op=True`.\n\n```python\ndef get_all_gather_streams(\n self, async_op: bool, training_state: TrainingState\n ) -> tuple[torch.Stream, torch.Stream]:\n if not async_op and training_state in (\n TrainingState.FORWARD,\n TrainingState.PRE_BACKWARD,\n ):\n # Use separate streams for implicit prefetching\n return self.all_gather_copy_in_stream, self.all_gather_stream\n \n # Use separate streams for explicit prefetching!\n current_stream = self.device_handle.current_stream()\n return current_stream, self.all_gather_stream # Change this!\n```\n\nHow do you prefer? \n\n(Let us make FSDP great again 😄)\n\n\n## Code\n\nP.S. 
the code to reproduce over-allocation:\n```python\nclass MLP(nn.Module):\n def __init__(self, hidden_dim: int, bias: bool = False):\n super().__init__()\n self.fc1 = nn.Linear(hidden_dim, hidden_dim, bias=bias)\n self.gelu = nn.GELU()\n self.fc2 = nn.Linear(hidden_dim, hidden_dim, bias=bias)\n\n def forward(self, x):\n x = self.fc1(x)\n x = self.gelu(x)\n x = self.fc2(x)\n return x\n\n\nclass MultiMLP(nn.Module):\n def __init__(self, hidden_dim: int, bias: bool = False, layers: int = 4):\n super().__init__()\n self.pre_norm = nn.LayerNorm(hidden_dim, bias=bias)\n self.mlps = nn.ModuleList([MLP(hidden_dim, bias) for _ in range(layers)])\n self.post_norm = nn.LayerNorm(hidden_dim, bias=bias)\n\n def forward(self, x):\n x = self.pre_norm(x)\n for mlp in self.mlps:\n x = x + mlp(x)\n x = self.post_norm(x)\n return x\n\nclass TestMemory(DTensorTestBase):\n @with_comms\n def test_over_allocation(self):\n mesh = init_device_mesh(\"cuda\", (self.world_size,))\n device = torch.device(\"cuda\")\n hidden_dim = 10240\n total_bsz = 16\n\n # ----- init model --------\n torch.manual_seed(0)\n model = MultiMLP(hidden_dim=hidden_dim).to(device).to(torch.float32)\n\n # -------- fsdp2 wrap --------\n fully_shard_fn = functools.partial(\n fully_shard,\n mesh=mesh,\n reshard_after_forward=True,\n )\n\n last_fsdp_module = None\n for module in model.modules():\n if isinstance(module, MLP):\n fully_shard_fn(module)\n if last_fsdp_module is not None:\n last_fsdp_module.set_modules_to_forward_prefetch([module])\n module.set_modules_to_backward_prefetch([last_fsdp_module])\n last_fsdp_module = module\n fsdp_model = fully_shard_fn(model)\n fsdp_model._set_unshard_async_op(True)\n\n optim = torch.optim.Adam(fsdp_model.parameters())\n\n # ----- init data -----\n torch.manual_seed(self.rank)\n bsz = total_bsz // self.world_size\n\n # -------- training loop --------\n torch.distributed.barrier()\n torch.cuda.synchronize(self.rank)\n \n train_iter = 4\n for iter in range(train_iter):\n # torch.distributed.barrier()\n # torch.cuda.synchronize(self.rank)\n\n if self.rank == 0 and iter == train_iter - 1:\n torch.cuda.memory._record_memory_history(max_entries=int(1E6))\n\n with record_function(\"## zero grad ##\"):\n optim.zero_grad()\n\n input = torch.randn((bsz, hidden_dim), device=\"cuda\")\n\n with record_function(f\"## forward ##\"):\n output = fsdp_model(input)\n loss = output.mean()\n\n with record_function(f\"## backward ##\"):\n loss.backward()\n\n with record_function(\"## optimizer step ##\"):\n optim.step()\n\n if self.rank == 0 and iter == train_iter - 1:\n timestamp = datetime.now().strftime(\"%b_%d_%H_%M_%S\")\n file_name = f\"mem_{timestamp}\"\n torch.cuda.memory._dump_snapshot(f\"{file_name}.pickle\")\n torch.cuda.memory._record_memory_history(enabled=None)\n\n torch.distributed.barrier()\n torch.cuda.synchronize(self.rank)\n\n```\n\n\n\n\n\n\ncc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/pytorch/torchtune/issues/2402\">\n <header class=\"source\">\n\n <a href=\"https://github.com/pytorch/torchtune/issues/2402\" target=\"_blank\" rel=\"noopener\">github.com/pytorch/torchtune</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" 
data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/pytorch/torchtune/issues/2402\" target=\"_blank\" rel=\"noopener\">Does FSDP v2 have the best performance?</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-02-17\" data-time=\"08:47:36\" data-timezone=\"UTC\">08:47AM - 17 Feb 25 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/dz1iang\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/a/cad7ace10f462d60d16c28833bff2c858792f208.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"91808C\">\n dz1iang\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n discussion\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">Hi, when I set fsdp_reshard_after_forward: False, the training speed increased b<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">y approximately 5-7%(tokens_per_second_per_gpu). Are there any other configurations that affect performance? Or where do you recommend referring to for configurations?\n\nIn addition, the setting of gradient_accumulation_steps does not affect the speed. Generally speaking, setting a larger value will reduce the frequency of communication and speed up the training. 
The model used in the experiment is Qwen 2.5 3B.</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/pytorch/torchtitan/issues/735\">\n <header class=\"source\">\n\n <a href=\"https://github.com/pytorch/torchtitan/issues/735\" target=\"_blank\" rel=\"noopener\">github.com/pytorch/torchtitan</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/pytorch/torchtitan/issues/735\" target=\"_blank\" rel=\"noopener\">[question]FSDP2 have more peak active memory/reserved memory than FSDP1</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-12-13\" data-time=\"08:42:49\" data-timezone=\"UTC\">08:42AM - 13 Dec 24 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-12-17\" data-time=\"14:37:34\" data-timezone=\"UTC\">02:37PM - 17 Dec 24 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/FindDefinition\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/2/22935b8589757d9be3d0bb1435990ef886cf3884.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"BCE0E3\">\n FindDefinition\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n question\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">## Environment\nOS: Ubuntu\nGPU: 8x GPU\ntorch: torch-2.6.0.dev20241212+cu124\nD<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">DP: 4-way Tensor Parallel * 2-way FSDP\n\n## Problem\nI'm using FSDP+TP in my model and follow torchtitan code. when I switch fsdp1 to fsdp2, the memory usage showed by `nvidia-smi` increases by 10GB, also the peak active memory is greatly larger than fsdp1. is this expected? Which metric should be cared in `memory_summary` to avoid OOM?\n\nhere is the result from `torch.cuda.memory_summary()`. 
Following tables are generated when **first step is end**.\n\n* fsdp2\n```\n|===========================================================================|\n| PyTorch CUDA memory summary, device ID 0 |\n|---------------------------------------------------------------------------|\n| CUDA OOMs: 0 | cudaMalloc retries: 0 |\n|===========================================================================|\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\n|---------------------------------------------------------------------------|\n| Allocated memory | 13975 MiB | 18803 MiB | 2142 GiB | 2128 GiB |\n| from large pool | 13959 MiB | 18790 MiB | 2140 GiB | 2127 GiB |\n| from small pool | 16 MiB | 17 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Active memory | 13975 MiB | 39454 MiB | 2142 GiB | 2128 GiB |\n| from large pool | 13959 MiB | 39437 MiB | 2140 GiB | 2127 GiB |\n| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Requested memory | 13792 MiB | 39306 MiB | 2138 GiB | 2125 GiB |\n| from large pool | 13775 MiB | 39289 MiB | 2137 GiB | 2124 GiB |\n| from small pool | 16 MiB | 18 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| GPU reserved memory | 45590 MiB | 45590 MiB | 45590 MiB | 0 B |\n| from large pool | 45566 MiB | 45566 MiB | 45566 MiB | 0 B |\n| from small pool | 24 MiB | 24 MiB | 24 MiB | 0 B |\n|---------------------------------------------------------------------------|\n| Non-releasable memory | 377331 KiB | 7818 MiB | 1017 GiB | 1017 GiB |\n| from large pool | 375788 KiB | 7813 MiB | 1016 GiB | 1016 GiB |\n| from small pool | 1543 KiB | 10 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Allocations | 4735 | 4738 | 34212 | 29477 |\n| from large pool | 1504 | 1507 | 15954 | 14450 |\n| from small pool | 3231 | 3348 | 18258 | 15027 |\n|---------------------------------------------------------------------------|\n| Active allocs | 4735 | 4738 | 34212 | 29477 |\n| from large pool | 1504 | 1507 | 15954 | 14450 |\n| from small pool | 3231 | 3348 | 18258 | 15027 |\n|---------------------------------------------------------------------------|\n| GPU reserved segments | 304 | 304 | 304 | 0 |\n| from large pool | 292 | 292 | 292 | 0 |\n| from small pool | 12 | 12 | 12 | 0 |\n|---------------------------------------------------------------------------|\n| Non-releasable allocs | 15 | 135 | 15054 | 15039 |\n| from large pool | 13 | 89 | 9160 | 9147 |\n| from small pool | 2 | 65 | 5894 | 5892 |\n|---------------------------------------------------------------------------|\n| Oversize allocations | 0 | 0 | 0 | 0 |\n|---------------------------------------------------------------------------|\n| Oversize GPU segments | 0 | 0 | 0 | 0 |\n|===========================================================================|\n```\n\n* fsdp1\n```\n|===========================================================================|\n| PyTorch CUDA memory summary, device ID 0 |\n|---------------------------------------------------------------------------|\n| CUDA OOMs: 0 | cudaMalloc retries: 0 |\n|===========================================================================|\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\n|---------------------------------------------------------------------------|\n| Allocated memory | 13947 MiB | 18561 MiB | 2156 
GiB | 2142 GiB |\n| from large pool | 13937 MiB | 18556 MiB | 2155 GiB | 2141 GiB |\n| from small pool | 10 MiB | 11 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Active memory | 13947 MiB | 25765 MiB | 2156 GiB | 2142 GiB |\n| from large pool | 13937 MiB | 25758 MiB | 2155 GiB | 2141 GiB |\n| from small pool | 10 MiB | 11 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Requested memory | 13792 MiB | 25709 MiB | 2154 GiB | 2140 GiB |\n| from large pool | 13782 MiB | 25702 MiB | 2153 GiB | 2139 GiB |\n| from small pool | 9 MiB | 11 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| GPU reserved memory | 36458 MiB | 36458 MiB | 36458 MiB | 0 B |\n| from large pool | 36446 MiB | 36446 MiB | 36446 MiB | 0 B |\n| from small pool | 12 MiB | 12 MiB | 12 MiB | 0 B |\n|---------------------------------------------------------------------------|\n| Non-releasable memory | 402232 KiB | 6360 MiB | 1345 GiB | 1345 GiB |\n| from large pool | 400277 KiB | 6359 MiB | 1344 GiB | 1343 GiB |\n| from small pool | 1955 KiB | 6 MiB | 1 GiB | 1 GiB |\n|---------------------------------------------------------------------------|\n| Allocations | 2460 | 2463 | 26870 | 24410 |\n| from large pool | 832 | 835 | 14354 | 13522 |\n| from small pool | 1628 | 1631 | 12516 | 10888 |\n|---------------------------------------------------------------------------|\n| Active allocs | 2460 | 2463 | 26870 | 24410 |\n| from large pool | 832 | 835 | 14354 | 13522 |\n| from small pool | 1628 | 1631 | 12516 | 10888 |\n|---------------------------------------------------------------------------|\n| GPU reserved segments | 305 | 305 | 305 | 0 |\n| from large pool | 299 | 299 | 299 | 0 |\n| from small pool | 6 | 6 | 6 | 0 |\n|---------------------------------------------------------------------------|\n| Non-releasable allocs | 56 | 86 | 13297 | 13241 |\n| from large pool | 53 | 76 | 8544 | 8491 |\n| from small pool | 3 | 31 | 4753 | 4750 |\n|---------------------------------------------------------------------------|\n| Oversize allocations | 0 | 0 | 0 | 0 |\n|---------------------------------------------------------------------------|\n| Oversize GPU segments | 0 | 0 | 0 | 0 |\n|===========================================================================|\n```\n\nfsdp related code:\n```Python\n compute_dtype = torch.bfloat16 \n full_shard: bool = True\n if use_fsdp2:\n mixed_fsdp2 = MixedPrecisionPolicy(reduce_dtype=torch.float32, param_dtype=compute_dtype)\n for layer_str, block in tp_model.blocks.items():\n # fsdp2 currently don't change buffer dtype in mixed precision policy\n # so we have to set buffer dtype by hand\n block.t_embed.to(torch.bfloat16)\n fully_shard(block, mesh=ddp_cp_mesh, mp_policy=mixed_fsdp2)\n fully_shard(tp_model, mesh=ddp_cp_mesh, mp_policy=mixed_fsdp2)\n tp_model_ddp = tp_model\n else:\n my_auto_wrap_policy = functools.partial(\n transformer_auto_wrap_policy, \n transformer_layer_cls={\n type(tp_model.blocks[\"0\"]),\n },\n )\n st = ShardingStrategy.FULL_SHARD if full_shard else ShardingStrategy.SHARD_GRAD_OP\n mixed = MixedPrecision(param_dtype=compute_dtype, reduce_dtype=torch.float32, buffer_dtype=compute_dtype)\n tp_model_ddp = FSDP(tp_model, auto_wrap_policy=my_auto_wrap_policy, device_mesh=ddp_cp_mesh, mixed_precision=mixed, \n sharding_strategy=st, device_id=torch.cuda.current_device(), use_orig_params=True)\n\n```\n* fsdp2 memory 
timeline\n\n\n* fsdp1 memory timeline\n</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-05T06:24:05.499Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 158039,
"topic_slug": "not-seeing-memory-benefit-to-accelerate-fsdp2",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/pytorch/torchtitan/issues/735",
"internal": false,
"reflection": false,
"title": "[question]FSDP2 have more peak active memory/reserved memory than FSDP1 · Issue #735 · pytorch/torchtitan · GitHub",
"clicks": 6
},
{
"url": "https://github.com/pytorch/torchtune/issues/2402",
"internal": false,
"reflection": false,
"title": "Does FSDP v2 have the best performance? · Issue #2402 · pytorch/torchtune · GitHub",
"clicks": 5
},
{
"url": "https://github.com/pytorch/pytorch/issues/147168",
"internal": false,
"reflection": false,
"title": "[FSDP2] The evil `record_stream` in c10d causes FSDP2 to over-allocate GPU memory · Issue #147168 · pytorch/pytorch · GitHub",
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-seeing-memory-benefit-to-accelerate-fsdp2/158039/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228173,
"name": "hpcpony",
"username": "hpcpony",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/h/779978/{size}.png",
"created_at": "2025-06-18T15:49:22.924Z",
"cooked": "<p>So after much futzing around and doing FSDP from pytorch I discovered that the answer to this question is that the memory usage reported by nvidia-smi is not an accurate reflection of memory required/used by pytorch. Apparently pytorch maintains a cache which is greater than that needed/used and that is primarily what the nvidia number reflects.</p>\n<p>pytorch.cuda has a number of ways to get memory information that seems to be more relevant (though not always clear of the implications).</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-18T15:49:22.924Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 10,
"reads": 3,
"readers_count": 2,
"score": 65.6,
"yours": false,
"topic_id": 158039,
"topic_slug": "not-seeing-memory-benefit-to-accelerate-fsdp2",
"display_username": "hpcpony",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96043,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-seeing-memory-benefit-to-accelerate-fsdp2/158039/3",
"reactions": [
{
"id": "clap",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 228257,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-19T03:50:18.068Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-19T03:50:18.068Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 5.4,
"yours": false,
"topic_id": 158039,
"topic_slug": "not-seeing-memory-benefit-to-accelerate-fsdp2",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-seeing-memory-benefit-to-accelerate-fsdp2/158039/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>TL;DR Why doesn’t Accelerate/FSDP seem to be doing much of anything to reduce memory in the following?</p>
<p>I’m trying to get some hands-on and learn how to run large models across multiple nodes and/or GPUs. I’m starting with Trainer/accelerate/FSDP2 and planning to work up from there but I think I’m missing something.</p>
<p>python 3.12.9<br>
torch 2.7.0<br>
transformers 4.52.4<br>
accelerate 1.7.0</p>
<p>My “toy” program to train an “empty” model:</p>
<pre data-code-wrap="python"><code class="lang-python">from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM
from transformers import DefaultDataCollator, DataCollatorForLanguageModeling
from transformers import TrainingArguments, Trainer
import os
model_dir = 'NousResearch/Llama-3.2-1B'
TRACE = False
N = 2048
context_length = 64
batch_size = 64
def load_datasets() :
train_data_list = [
{"text" : "The quick brown fox jumped over the lazy dog's back t{:06d}".format(i)} for i in range(4*N)
]
eval_data_list = [
{"text" : "The quick brown fox jumped over the lazy dog's back e{:06d}".format(i)} for i in range(N)
]
datasets = DatasetDict ( # create datasets dict train and eval
{ 'train': Dataset.from_list(train_data_list),
'eval' : Dataset.from_list(eval_data_list)}
)
return datasets
def load_tokenizer(model_dir) :
tokenizer = AutoTokenizer.from_pretrained(model_dir)
return tokenizer
def load_model(model_dir) :
# get just the config from the pretrained directory
config = AutoConfig.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_config(config)
return model
def mytrain(model_dir) :
def tokenize(dataset) :
return tokenizer(dataset['text'], padding='max_length', max_length=context_length, return_length=True)
##
raw_datasets = load_datasets()
if TRACE : print("dataset\n", raw_datasets)
##
tokenizer = load_tokenizer(model_dir)
if TRACE : print("tokenizer\n", tokenizer)
##
tokenizer.pad_token = tokenizer.eos_token
tokenized_datasets = raw_datasets.map(
tokenize, batched=True, remove_columns=raw_datasets["train"].column_names)
if TRACE : print("tokenized_datasets\n", tokenized_datasets)
##
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
if TRACE :
example_collated = data_collator([tokenized_datasets["train"][i] for i in range(3)])
print("example_collated\n", example_collated)
##
training_args = TrainingArguments( # do this before model load for FSDP?
output_dir="outputs/",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=10,
logging_strategy="epoch",
eval_strategy="epoch",
save_strategy="no",
push_to_hub=False,
disable_tqdm=True,
deepspeed=None,
)
##
model = load_model(model_dir) # do the after TrainingArguments which sets up some stuff?
if TRACE : print("model\n", model)
##
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["eval"],
processing_class=tokenizer,
data_collator=data_collator,
)
trainer.train()
from datasets.utils.logging import disable_progress_bar
import torch
if __name__ == "__main__" :
disable_progress_bar()
mytrain(
model_dir=model_dir
)
torch.distributed.destroy_process_group()
</code></pre>
<p>I first run my test program as simple python/pytorch; single GPU without accelerate.</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] CUDA_VISIBLE_DEVICES=0 python 05_acctest.py
{'loss': 0.8924, 'grad_norm': 0.8125, 'learning_rate': 4.50390625e-05, 'epoch': 1.0}
{'eval_loss': 2.5442957878112793, 'eval_runtime': 2.4496, 'eval_samples_per_second': 836.064, 'eval_steps_per_second': 13.063, 'epoch': 1.0}
{'loss': 0.6293, 'grad_norm': 0.65234375, 'learning_rate': 4.00390625e-05, 'epoch': 2.0}
{'eval_loss': 2.6600184440612793, 'eval_runtime': 2.4495, 'eval_samples_per_second': 836.094, 'eval_steps_per_second': 13.064, 'epoch': 2.0}
.
.
.
{'loss': 0.6061, 'grad_norm': 0.4921875, 'learning_rate': 3.90625e-08, 'epoch': 10.0}
{'eval_loss': 2.8240463733673096, 'eval_runtime': 2.4496, 'eval_samples_per_second': 836.055, 'eval_steps_per_second': 13.063, 'epoch': 10.0}
{'train_runtime': 333.183, 'train_samples_per_second': 245.871, 'train_steps_per_second': 3.842, 'train_loss': 0.6405227959156037, 'epoch': 10.0}
</code></pre>
<p>While it’s running I use nvidia-smi to look at the memory used</p>
<pre data-code-wrap="shell"><code class="lang-shell">+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 21181 C python 21372MiB |
+-----------------------------------------------------------------------------------------+
</code></pre>
<p>That’s at least in the ball-park for what accelerate estimates:</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] accelerate estimate-memory NousResearch/Llama-3.2-1B
Loading pretrained config for `NousResearch/Llama-3.2-1B` from `transformers`...
┌────────────────────────────────────────────────────────┐
│ Memory Usage for loading `NousResearch/Llama-3.2-1B` │
├───────┬─────────────┬──────────┬───────────────────────┤
│ dtype │Largest Layer│Total Size│ Training using Adam │
├───────┼─────────────┼──────────┼───────────────────────┤
│float32│ 1002.0 MB │ 4.6 GB │ 18.42 GB │
│float16│ 501.0 MB │ 2.3 GB │ 9.21 GB │
│ int8 │ 250.5 MB │ 1.15 GB │ N/A │
│ int4 │ 125.25 MB │589.28 MB │ N/A │
└───────┴─────────────┴──────────┴───────────────────────┘
</code></pre>
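<p>For what it’s worth, the float32 “Training using Adam” row decomposes the way you’d expect; a back-of-envelope check (the parameter count below is inferred from the 4.6 GB fp32 total, not taken from the tool):</p>
<pre data-code-wrap="python"><code class="lang-python"># fp32 weights (4 B) + fp32 grads (4 B) + Adam moments m and v (8 B) per parameter
params = 4.6 * 2**30 / 4              # ~1.23e9 params implied by the 4.6 GB fp32 size
print(params * (4 + 4 + 8) / 2**30)   # ~18.4 GiB, matching the 18.42 GB estimate
</code></pre>
<p>Under FSDP those three per-parameter terms shard across ranks, which is why one would expect per-GPU usage well below the single-GPU figure.</p>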
<p>Next I use “accelerate config” to generate a config file for 2 GPUs using FSDP2. (mostly with default values)</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] cat 1n2gfsdp_defaults.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
  fsdp_activation_checkpointing: false
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_cpu_ram_efficient_loading: true
  fsdp_offload_params: false
  fsdp_reshard_after_forward: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_version: 2
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
</code></pre>
<p>Using that file and running with accelerate…</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] CUDA_VISIBLE_DEVICES=0,1 accelerate launch --config_file 1n2gfsdp_defaults.yaml 05_acctest.py
{'loss': 1.0797, 'grad_norm': 0.6328125, 'learning_rate': 4.5078125000000006e-05, 'epoch': 1.0}
{'eval_loss': 2.5193161964416504, 'eval_runtime': 1.376, 'eval_samples_per_second': 1488.383, 'eval_steps_per_second': 11.628, 'epoch': 1.0}
{'loss': 0.6584, 'grad_norm': 0.4609375, 'learning_rate': 4.0078125e-05, 'epoch': 2.0}
{'eval_loss': 2.5891079902648926, 'eval_runtime': 1.3771, 'eval_samples_per_second': 1487.218, 'eval_steps_per_second': 11.619, 'epoch': 2.0}
.
.
.
{'loss': 0.6096, 'grad_norm': 0.462890625, 'learning_rate': 7.8125e-08, 'epoch': 10.0}
{'eval_loss': 2.754133462905884, 'eval_runtime': 1.3776, 'eval_samples_per_second': 1486.605, 'eval_steps_per_second': 11.614, 'epoch': 10.0}
{'train_runtime': 178.9799, 'train_samples_per_second': 457.705, 'train_steps_per_second': 3.576, 'train_loss': 0.6661747217178344, 'epoch': 10.0}
</code></pre>
<p>… nvidia-smi memory during the computation…</p>
<pre data-code-wrap="shell"><code class="lang-shell">+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 24421 C ...AI/training-4.52.4/bin/python 21384MiB |
| 1 N/A N/A 24422 C ...AI/training-4.52.4/bin/python 21388MiB |
+-----------------------------------------------------------------------------------------+
</code></pre>
<p>Next a config file with 4 GPUs…</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] cat 1n4gfsdp_defaults.yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
  fsdp_activation_checkpointing: false
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_cpu_ram_efficient_loading: true
  fsdp_offload_params: false
  fsdp_reshard_after_forward: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
  fsdp_version: 2
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
</code></pre>
<p>… execute using accelerate…</p>
<pre data-code-wrap="shell"><code class="lang-shell">[gpu2:training] CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --config_file 1n4gfsdp_defaults.yaml 05_acctest.py
{'loss': 1.373, 'grad_norm': 0.458984375, 'learning_rate': 4.515625e-05, 'epoch': 1.0}
{'eval_loss': 2.402463912963867, 'eval_runtime': 0.6972, 'eval_samples_per_second': 2937.372, 'eval_steps_per_second': 11.474, 'epoch': 1.0}
{'loss': 0.7474, 'grad_norm': 0.435546875, 'learning_rate': 4.0156250000000004e-05, 'epoch': 2.0}
{'eval_loss': 2.3128156661987305, 'eval_runtime': 0.6946, 'eval_samples_per_second': 2948.607, 'eval_steps_per_second': 11.518, 'epoch': 2.0}
.
.
.
{'loss': 0.6214, 'grad_norm': 0.30078125, 'learning_rate': 1.5625e-07, 'epoch': 10.0}
{'eval_loss': 2.432434320449829, 'eval_runtime': 0.694, 'eval_samples_per_second': 2950.801, 'eval_steps_per_second': 11.527, 'epoch': 10.0}
{'train_runtime': 89.6101, 'train_samples_per_second': 914.182, 'train_steps_per_second': 3.571, 'train_loss': 0.718875628709793, 'epoch': 10.0}
</code></pre>
<p>… nvidia-smi while executing…</p>
<pre data-code-wrap="shell"><code class="lang-shell">+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 25570 C ...AI/training-4.52.4/bin/python 20526MiB |
| 1 N/A N/A 25571 C ...AI/training-4.52.4/bin/python 20146MiB |
| 2 N/A N/A 25572 C ...AI/training-4.52.4/bin/python 20146MiB |
| 3 N/A N/A 25573 C ...AI/training-4.52.4/bin/python 20146MiB |
+-----------------------------------------------------------------------------------------+
</code></pre>
<p>Clearly something is happening; I’m getting a performance benefit from using more GPUs (almost linear!). But, I’m not seeing a substantial improvement in memory usage.</p>
<ol>
<li>Is my config file missing something? Are there better parameters that facilitate memory savings?</li>
<li>Can I somehow get accelerate to dump what it thinks it’s doing (vs. what I specified in the config file)?</li>
<li>Can I somehow dump the wrapped model to see what FSDP has done?</li>
</ol>
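<p>(Not part of the original script, but relevant to question 1: nvidia-smi reports the caching allocator’s reserved pool, so a hypothetical per-rank check of what PyTorch itself is using could look like the following sketch; <code>report_memory</code> is an illustrative helper, not an existing API.)</p>
<pre data-code-wrap="python"><code class="lang-python">import os
import torch

def report_memory(tag):
    # Hypothetical helper: peak tensor memory vs. the allocator's reserved pool
    alloc_mib = torch.cuda.max_memory_allocated() / 2**20
    reserved_mib = torch.cuda.max_memory_reserved() / 2**20
    print(f"{tag}: peak allocated {alloc_mib:.0f} MiB, reserved {reserved_mib:.0f} MiB")

# e.g. after trainer.train():
# report_memory(f"rank {os.environ.get('RANK', '0')}")
</code></pre>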
<p>===============================================================</p>
<p>I did a similar experiment with bloom-3b just to see if it made any difference, and things still seem strange.</p>
<pre data-code-wrap="shell"><code class="lang-shell">+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 37058 C python 74748MiB |
+-----------------------------------------------------------------------------------------+
┌────────────────────────────────────────────────────┐
│ Memory Usage for loading `bigscience/bloom-3b` │
├───────┬─────────────┬──────────┬───────────────────┤
│ dtype │Largest Layer│Total Size│Training using Adam│
├───────┼─────────────┼──────────┼───────────────────┤
│float32│ 2.39 GB │ 11.19 GB │ 44.74 GB │
│float16│ 1.2 GB │ 5.59 GB │ 22.37 GB │
│ int8 │ 612.5 MB │ 2.8 GB │ N/A │
│ int4 │ 306.25 MB │ 1.4 GB │ N/A │
└───────┴─────────────┴──────────┴───────────────────┘
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 251138 C ...AI/training-4.52.4/bin/python 53922MiB |
| 1 N/A N/A 251139 C ...AI/training-4.52.4/bin/python 53538MiB |
| 2 N/A N/A 251140 C ...AI/training-4.52.4/bin/python 53538MiB |
| 3 N/A N/A 251141 C ...AI/training-4.52.4/bin/python 53538MiB |
+-----------------------------------------------------------------------------------------+
</code></pre>
|
<p>So after much futzing around and doing FSDP from PyTorch directly, I discovered that the answer to this question is that the memory usage reported by nvidia-smi is not an accurate reflection of the memory required/used by PyTorch. PyTorch maintains a caching allocator that reserves more memory than it actually needs, and that reserved pool is primarily what the nvidia-smi number reflects.</p>
<p>torch.cuda has a number of ways to get memory information that seem more relevant (though the implications are not always clear).</p>
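<p>As a minimal sketch of the torch.cuda calls referred to above (the function names are the real PyTorch API; the printout framing is illustrative):</p>
<pre data-code-wrap="python"><code class="lang-python">import torch

# What PyTorch has actually handed out to live tensors right now:
print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
# What the caching allocator has reserved from the driver; nvidia-smi
# roughly tracks this figure plus the CUDA context overhead:
print(torch.cuda.memory_reserved() / 2**20, "MiB reserved")
# Peak values since startup (or since reset_peak_memory_stats()):
print(torch.cuda.max_memory_allocated() / 2**20, "MiB peak allocated")
# Release unused cached blocks back to the driver (lowers nvidia-smi's number):
torch.cuda.empty_cache()
</code></pre>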
|
Pytorch-Image models
|
https://discuss.huggingface.co/t/pytorch-image-models/154385
| 154,385
| 13
|
2025-05-10T04:41:31.114000Z
|
[
{
"id": 220959,
"name": "Mohit Kumar",
"username": "mohitb1i",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dbc845/{size}.png",
"created_at": "2025-05-10T04:41:31.171Z",
"cooked": "<p>In the <code>VisionTransformer</code> class, the default <code>act_layer</code> is <code>None</code> . If we do not provide it - this will lead to a <code>TypeError</code> in <code>MLP</code> because none of the classes (<code>Block</code> , <code>MLP</code> , or <code>VisionTransformer</code> ) handle this case. Obvious error message:<br>\nTypeError: ‘NoneType’ object is not callable</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-05-10T04:41:31.171Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 13,
"readers_count": 12,
"score": 87.6,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Mohit Kumar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93474,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226827,
"name": "Andrew Scott",
"username": "Pimpcat-AU",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png",
"created_at": "2025-06-10T20:24:42.368Z",
"cooked": "<p>Fix:<br>\nAlways set act_layer to a valid activation function (e.g., nn.GELU, nn.ReLU) when instantiating VisionTransformer.<br>\nExample:</p>\n<p>import torch.nn as nn<br>\nmodel = VisionTransformer(act_layer=nn.GELU)</p>\n<p>If not set, you’ll get TypeError: ‘NoneType’ object is not callable.</p>\n<p>Solution provided by Triskel Data Deterministic AI.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-10T20:24:42.368Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 22,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Andrew Scott",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96276,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226852,
"name": "Daniela Brenes",
"username": "dbrenes",
"avatar_template": "/user_avatar/discuss.huggingface.co/dbrenes/{size}/47087_2.png",
"created_at": "2025-06-11T00:05:50.417Z",
"cooked": "<p>Hello <a class=\"mention\" href=\"/u/mohitb1i\">@mohitb1i</a> ,</p>\n<p>In which PyTorch version are you experiencing this error?</p>\n<hr>\n<p><em>Machine Learning Engineer at <a href=\"https://www.ridgerun.ai/\" rel=\"noopener nofollow ugc\">RidgeRun.ai</a></em><br>\n<em>Contact us: <a href=\"mailto:[email protected]\">[email protected]</a></em></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-11T00:05:50.417Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 10,
"readers_count": 9,
"score": 37,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Daniela Brenes",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.ridgerun.ai/",
"internal": false,
"reflection": false,
"title": null,
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93201,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226906,
"name": "Mohit Kumar",
"username": "mohitb1i",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dbc845/{size}.png",
"created_at": "2025-06-11T08:19:02.529Z",
"cooked": "<p>I understand, but I am saying the default value of act_layer should be nn.GELU or just set it in instantiation, like:</p>\n<pre><code class=\"lang-auto\">block_fn(\n...\nact_layer = act_layer or nn.GELU,\n...\n)\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-11T08:19:02.529Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Mohit Kumar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93474,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/4",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 96276,
"username": "Pimpcat-AU",
"name": "Andrew Scott",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 226907,
"name": "Mohit Kumar",
"username": "mohitb1i",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dbc845/{size}.png",
"created_at": "2025-06-11T08:20:58.238Z",
"cooked": "<p>No it is a vision-transformer code from hugging face,<br>\n<a href=\"https://github.com/huggingface/pytorch-image-models/\" rel=\"noopener nofollow ugc\">original repo</a></p>\n<p><a href=\"https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py\" rel=\"noopener nofollow ugc\">code of Vision Transformer</a></p>",
"post_number": 5,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-11T08:20:58.238Z",
"reply_count": 1,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 26.2,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Mohit Kumar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py",
"internal": false,
"reflection": false,
"title": "pytorch-image-models/timm/models/vision_transformer.py at main · huggingface/pytorch-image-models · GitHub",
"clicks": 2
},
{
"url": "https://github.com/huggingface/pytorch-image-models/",
"internal": false,
"reflection": false,
"title": "GitHub - huggingface/pytorch-image-models: The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93474,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/5",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 93201,
"username": "dbrenes",
"name": "Daniela Brenes",
"avatar_template": "/user_avatar/discuss.huggingface.co/dbrenes/{size}/47087_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227793,
"name": "Daniela Brenes",
"username": "dbrenes",
"avatar_template": "/user_avatar/discuss.huggingface.co/dbrenes/{size}/47087_2.png",
"created_at": "2025-06-16T18:20:51.943Z",
"cooked": "<p>Upon reviewing the code, it appears that this behavior likely stems from the fact that the <code>VisionTransformer</code> class is not meant to be instantiated directly. Instead, the recommended approach is to use the <code>timm.create_model</code> function, which handles proper initialization of the available Vision Transformer variants. For example, calling models like <code>vit_small_patch16_224</code> or <code>vit_large_patch32_384</code> through <code>timm.create_model</code> returns a fully configured <code>VisionTransformer</code> instance.</p>\n<p>However, if you choose to instantiate the <code>VisionTransformer</code> class directly, you are probably responsible for explicitly providing certain arguments—such as the <code>act_layer</code>—as you noted earlier.</p>\n<hr>\n<p><em>Machine Learning Engineer at <a href=\"https://www.ridgerun.ai/\" rel=\"noopener nofollow ugc\">RidgeRun.ai</a></em><br>\n<em>Contact us: <a href=\"mailto:[email protected]\">[email protected]</a></em></p>",
"post_number": 6,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-16T18:20:51.943Z",
"reply_count": 1,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Daniela Brenes",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.ridgerun.ai/",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93201,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 93474,
"username": "mohitb1i",
"name": "Mohit Kumar",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dbc845/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227888,
"name": "Andrew Scott",
"username": "Pimpcat-AU",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png",
"created_at": "2025-06-17T06:03:42.316Z",
"cooked": "<p>import torch<br>\nimport torch.nn as nn</p>\n<p>class VisionTransformer(nn.Module):<br>\ndef <strong>init</strong>(self, act_layer=None, **kwargs):<br>\nsuper().<strong>init</strong>()<br>\n# Default to GELU if none provided<br>\nif act_layer is None:<br>\nact_layer = nn.GELU</p>\n<pre><code> # Support both nn.ReLU and nn.ReLU() styles\n self.act = act_layer() if isinstance(act_layer, type) else act_layer\n\n # Example MLP block using activation\n self.mlp = nn.Sequential(\n nn.Linear(768, 3072),\n self.act,\n nn.Linear(3072, 768)\n )\n\ndef forward(self, x):\n return self.mlp(x)\n</code></pre>\n<h1><a name=\"p-227888-example-usage-1\" class=\"anchor\" href=\"#p-227888-example-usage-1\"></a>Example usage:</h1>\n<p>if <strong>name</strong> == “<strong>main</strong>”:<br>\nmodel = VisionTransformer()<br>\nx = torch.randn(1, 768)<br>\nout = model(x)<br>\nprint(out.shape)</p>\n<p>Solution provided by Triskel Data Deterministic AI.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-17T06:03:42.316Z",
"reply_count": 1,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 30.8,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Andrew Scott",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96276,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 93201,
"username": "dbrenes",
"name": "Daniela Brenes",
"avatar_template": "/user_avatar/discuss.huggingface.co/dbrenes/{size}/47087_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228015,
"name": "Mohit Kumar",
"username": "mohitb1i",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dbc845/{size}.png",
"created_at": "2025-06-17T19:12:21.511Z",
"cooked": "<p>Thanks, it was an oversight.</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 9,
"updated_at": "2025-06-17T19:12:21.511Z",
"reply_count": 0,
"reply_to_post_number": 7,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "Mohit Kumar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93474,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/8",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 96276,
"username": "Pimpcat-AU",
"name": "Andrew Scott",
"avatar_template": "/user_avatar/discuss.huggingface.co/pimpcat-au/{size}/48989_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 228108,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-18T07:12:51.633Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 9,
"post_type": 3,
"posts_count": 9,
"updated_at": "2025-06-18T07:12:51.633Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 154385,
"topic_slug": "pytorch-image-models",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/pytorch-image-models/154385/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>In the <code>VisionTransformer</code> class, the default <code>act_layer</code> is <code>None</code>. If we do not provide one, this leads to a <code>TypeError</code> in <code>MLP</code>, because none of the classes (<code>Block</code>, <code>MLP</code>, or <code>VisionTransformer</code>) handle this case. The resulting error message:<br>
<code>TypeError: 'NoneType' object is not callable</code></p>
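<p>A minimal sketch of how the failure arises (the <code>MLP</code> stand-in below is hypothetical, not the library's actual class):</p>
<pre><code class="lang-auto">import torch.nn as nn

# Hypothetical stand-in: an MLP block that instantiates its activation layer
class MLP(nn.Module):
    def __init__(self, act_layer=None):
        super().__init__()
        self.act = act_layer()  # act_layer is None here, so calling it fails

MLP()  # TypeError: 'NoneType' object is not callable
</code></pre>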
|
<pre><code class="lang-auto">import torch
import torch.nn as nn

class VisionTransformer(nn.Module):
    def __init__(self, act_layer=None, **kwargs):
        super().__init__()
        # Default to GELU if none provided
        if act_layer is None:
            act_layer = nn.GELU

        # Support both nn.ReLU and nn.ReLU() styles
        self.act = act_layer() if isinstance(act_layer, type) else act_layer

        # Example MLP block using the activation
        self.mlp = nn.Sequential(
            nn.Linear(768, 3072),
            self.act,
            nn.Linear(3072, 768)
        )

    def forward(self, x):
        return self.mlp(x)

# Example usage:
if __name__ == "__main__":
    model = VisionTransformer()
    x = torch.randn(1, 768)
    out = model(x)
    print(out.shape)
</code></pre>
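<p>With the fix above in place, a caller can pass the activation in either style (a short usage sketch using the class defined above):</p>
<pre><code class="lang-auto">m1 = VisionTransformer(act_layer=nn.ReLU)    # a class: instantiated internally
m2 = VisionTransformer(act_layer=nn.ReLU())  # an instance: used as-is
</code></pre>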
<p>Solution provided by Triskel Data Deterministic AI.</p>
|
Cannot get tools to work: InferenceClient + hf-inference + Qwen/Qwen3-235B-A22B – Internal Server Error
|
https://discuss.huggingface.co/t/cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error/159469
| 159,469
| 6
|
2025-06-16T08:34:20.199000Z
|
[
{
"id": 227679,
"name": "Björn Buchhold",
"username": "bbuchhold",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/c2a13f/{size}.png",
"created_at": "2025-06-16T08:34:20.253Z",
"cooked": "<p>I’m trying to get an existing app (OpenAI or Gemini both work well ) to run on open-weight models and keep failing. I have now distilled a minimal example that works on gpt-4.1-mini but doesn’t on Qwen3.</p>\n<pre><code class=\"lang-auto\">client = openai.Client()\nMODEL = \"gpt-4.1-mini\"\n\nmessages = [\n {\"role\": \"user\", \"content\": \"You are a shopping assistant for a store. You can help pick the right products for the user.\"},\n {\"role\": \"user\", \"content\": \"I'm looking for a T-shirt\"}\n]\n\ndummy_tools = [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_products\",\n \"description\": (\n \"Search for products. Useful if someone needs clothing.\"\n ),\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"query\": {\n \"type\": \"string\",\n \"description\": \"The query to look up products for.\"\n }\n },\n \"required\": [\n \"query\"\n ],\n \"additionalProperties\": False\n },\n \"strict\": True\n }\n }]\nr = client.chat.completions.create(model=MODEL, tools=dummy_tools, messages=messages)\ntcs = []\nfor tc in r.choices[0].message.tool_calls:\n tcs.append({\n \"id\": tc.id,\n \"type\": tc.type,\n \"function\": {\n \"name\": tc.function.name,\n \"arguments\": tc.function.arguments,\n }\n })\nmessages.append({\"role\": \"assistant\", \"tool_calls\": tcs})\n# fake it for brevity\nmessages.append({\"role\": \"tool\", \"tool_call_id\": tcs[0][\"id\"], \"content\": \"Product 1: Blue T-Shirt\\nProduct 2: Red Hoody.\"})\nfor m in messages:\n print(m)\nprint(\"-----------\")\nr = client.chat.completions.create(model=MODEL, messages=messages)\nprint(r.choices[0])\n</code></pre>\n<p>works and prints:</p>\n<pre><code class=\"lang-auto\">{'role': 'user', 'content': 'You are a shopping assistant for a store. You can help pick the right products for the user.'}\n{'role': 'user', 'content': \"I'm looking for a T-shirt\"}\n{'role': 'assistant', 'tool_calls': [{'id': 'call_b7Gp98ZGcdv6TSbAlgrZC8Sq', 'type': 'function', 'function': {'name': 'get_products', 'arguments': '{\"query\":\"T-shirt\"}'}}]}\n{'role': 'tool', 'tool_call_id': 'call_b7Gp98ZGcdv6TSbAlgrZC8Sq', 'content': 'Product 1: Blue T-Shirt\\nProduct 2: Red Hoody.'}\n -----------\nChoice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='I found a Blue T-Shirt for you. Would you like more options or details about this one?', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))\n</code></pre>\n<p>Meanwhile:</p>\n<pre><code class=\"lang-auto\">client = InferenceClient(\n provider=\"hf-inference\",\n api_key=os.environ[\"HF_TOKEN\"],\n )\nMODEL = \"Qwen/Qwen3-235B-A22B\"\n\nmessages = [\n {\"role\": \"user\", \"content\": \"You are a shopping assistant for a store. You can help pick the right products for the user.\"},\n {\"role\": \"user\", \"content\": \"I'm looking for a T-shirt\"}\n]\n\ndummy_tools = [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_products\",\n \"description\": (\n \"Search for products. 
Useful if someone needs clothing.\"\n ),\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"query\": {\n \"type\": \"string\",\n \"description\": \"The query to look up products for.\"\n }\n },\n \"required\": [\n \"query\"\n ],\n \"additionalProperties\": False\n },\n \"strict\": True\n }\n }]\nr = client.chat.completions.create(model=MODEL, tools=dummy_tools, messages=messages)\ntcs = []\nfor tc in r.choices[0].message.tool_calls:\n tcs.append({\n \"id\": tc.id,\n \"type\": tc.type,\n \"function\": {\n \"name\": tc.function.name,\n \"arguments\": tc.function.arguments,\n }\n })\nmessages.append({\"role\": \"assistant\", \"tool_calls\": tcs})\n# fake it for brevity\nmessages.append({\"role\": \"tool\", \"tool_call_id\": tcs[0][\"id\"], \"content\": \"Product 1: Blue T-Shirt\\nProduct 2: Red Hoody.\"})\nfor m in messages:\n print(m)\nprint(\"-----------\")\nr = client.chat.completions.create(model=MODEL, messages=messages)\nprint(r.choices[0])\n</code></pre>\n<p>fails with</p>\n<pre><code class=\"lang-auto\">---------------------------------------------------------------------------\nHTTPError Traceback (most recent call last)\nFile ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/utils/_http.py:409, in hf_raise_for_status(response, endpoint_name)\n 408 try:\n--> 409 response.raise_for_status()\n 410 except HTTPError as e:\n\nFile ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/requests/models.py:1024, in Response.raise_for_status(self)\n 1023 if http_error_msg:\n-> 1024 raise HTTPError(http_error_msg, response=self)\n\nHTTPError: 500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/Qwen/Qwen3-235B-A22B/v1/chat/completions\n\nThe above exception was the direct cause of the following exception:\n\nHfHubHTTPError Traceback (most recent call last)\nCell In[107], line 52\n 50 print(m)\n 51 print(\"-----------\")\n---> 52 r = client.chat.completions.create(model=MODEL, messages=messages)\n 53 print(r.choices[0])\n\nFile ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/inference/_client.py:924, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream_options, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p, extra_body)\n 896 parameters = {\n 897 \"model\": payload_model,\n 898 \"frequency_penalty\": frequency_penalty,\n (...) 915 **(extra_body or {}),\n 916 }\n 917 request_parameters = provider_helper.prepare_request(\n 918 inputs=messages,\n 919 parameters=parameters,\n (...) 
922 api_key=self.token,\n 923 )\n--> 924 data = self._inner_post(request_parameters, stream=stream)\n 926 if stream:\n 927 return _stream_chat_completion_response(data) # type: ignore[arg-type]\n\nFile ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/inference/_client.py:280, in InferenceClient._inner_post(self, request_parameters, stream)\n 277 raise InferenceTimeoutError(f\"Inference call timed out: {request_parameters.url}\") from error # type: ignore\n 279 try:\n--> 280 hf_raise_for_status(response)\n 281 return response.iter_lines() if stream else response.content\n 282 except HTTPError as error:\n\nFile ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/utils/_http.py:482, in hf_raise_for_status(response, endpoint_name)\n 478 raise _format(HfHubHTTPError, message, response) from e\n 480 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\n 481 # as well (request id and/or server error message)\n--> 482 raise _format(HfHubHTTPError, str(e), response) from e\n\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/Qwen/Qwen3-235B-A22B/v1/chat/completions (Request ID: Root=1-684c0e94-1b2fcc1112ce97d968f42b89;4a0857fe-92d3-4b59-977c-2c58fee78502)\n</code></pre>\n<p>Unfortunately, I fail to get a better reason than the 500 return code, and I’m not sure if I am misusing the API somehow</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-16T08:34:20.253Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 42,
"reads": 10,
"readers_count": 9,
"score": 217,
"yours": false,
"topic_id": 159469,
"topic_slug": "cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error",
"display_username": "Björn Buchhold",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/bad-request-your-endpoint-is-in-error-check-its-status-on-endpoints-huggingface-co/159439/5",
"internal": true,
"reflection": true,
"title": "\"Bad Request: Your endpoint is in error, check its status on endpoints.huggingface.co",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96853,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error/159469/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227702,
"name": "Björn Buchhold",
"username": "bbuchhold",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/c2a13f/{size}.png",
"created_at": "2025-06-16T08:56:17.694Z",
"cooked": "<p>3 days later, this works. I assume the “internal server error” actually was an internal error after all <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-16T08:56:17.694Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 10,
"readers_count": 9,
"score": 97,
"yours": false,
"topic_id": 159469,
"topic_slug": "cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error",
"display_username": "Björn Buchhold",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96853,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error/159469/2",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227745,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-16T13:55:02.786Z",
"cooked": "<p>Great. Links that may be useful in case of trouble. However, ongoing problems may not always be apparent.<br>\nServer status: <a href=\"https://status.huggingface.co/\">https://status.huggingface.co/</a><br>\nChangeLog: <a href=\"https://huggingface.co/changelog\" class=\"inline-onebox\">Changelog - Hugging Face</a></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-16T13:55:02.786Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 8,
"readers_count": 7,
"score": 6.6,
"yours": false,
"topic_id": 159469,
"topic_slug": "cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://status.huggingface.co/",
"internal": false,
"reflection": false,
"title": "Hugging Face status",
"clicks": 4
},
{
"url": "https://huggingface.co/changelog",
"internal": false,
"reflection": false,
"title": "Changelog - Hugging Face",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error/159469/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227851,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-17T01:55:03.232Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-17T01:55:03.232Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 1,
"yours": false,
"topic_id": 159469,
"topic_slug": "cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cannot-get-tools-to-work-inferenceclient-hf-inference-qwen-qwen3-235b-a22b-internal-server-error/159469/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m trying to get an existing app (both OpenAI and Gemini work well) to run on open-weight models and keep failing. I have now distilled a minimal example that works on gpt-4.1-mini but doesn’t on Qwen3.</p>
<pre><code class="lang-auto">client = openai.Client()
MODEL = "gpt-4.1-mini"
messages = [
{"role": "user", "content": "You are a shopping assistant for a store. You can help pick the right products for the user."},
{"role": "user", "content": "I'm looking for a T-shirt"}
]
dummy_tools = [{
"type": "function",
"function": {
"name": "get_products",
"description": (
"Search for products. Useful if someone needs clothing."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The query to look up products for."
}
},
"required": [
"query"
],
"additionalProperties": False
},
"strict": True
}
}]
r = client.chat.completions.create(model=MODEL, tools=dummy_tools, messages=messages)
tcs = []
for tc in r.choices[0].message.tool_calls:
tcs.append({
"id": tc.id,
"type": tc.type,
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments,
}
})
messages.append({"role": "assistant", "tool_calls": tcs})
# fake it for brevity
messages.append({"role": "tool", "tool_call_id": tcs[0]["id"], "content": "Product 1: Blue T-Shirt\nProduct 2: Red Hoody."})
for m in messages:
print(m)
print("-----------")
r = client.chat.completions.create(model=MODEL, messages=messages)
print(r.choices[0])
</code></pre>
<p>works and prints:</p>
<pre><code class="lang-auto">{'role': 'user', 'content': 'You are a shopping assistant for a store. You can help pick the right products for the user.'}
{'role': 'user', 'content': "I'm looking for a T-shirt"}
{'role': 'assistant', 'tool_calls': [{'id': 'call_b7Gp98ZGcdv6TSbAlgrZC8Sq', 'type': 'function', 'function': {'name': 'get_products', 'arguments': '{"query":"T-shirt"}'}}]}
{'role': 'tool', 'tool_call_id': 'call_b7Gp98ZGcdv6TSbAlgrZC8Sq', 'content': 'Product 1: Blue T-Shirt\nProduct 2: Red Hoody.'}
-----------
Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='I found a Blue T-Shirt for you. Would you like more options or details about this one?', refusal=None, role='assistant', annotations=[], audio=None, function_call=None, tool_calls=None))
</code></pre>
<p>Meanwhile:</p>
<pre><code class="lang-auto">client = InferenceClient(
provider="hf-inference",
api_key=os.environ["HF_TOKEN"],
)
MODEL = "Qwen/Qwen3-235B-A22B"
messages = [
{"role": "user", "content": "You are a shopping assistant for a store. You can help pick the right products for the user."},
{"role": "user", "content": "I'm looking for a T-shirt"}
]
dummy_tools = [{
"type": "function",
"function": {
"name": "get_products",
"description": (
"Search for products. Useful if someone needs clothing."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The query to look up products for."
}
},
"required": [
"query"
],
"additionalProperties": False
},
"strict": True
}
}]
r = client.chat.completions.create(model=MODEL, tools=dummy_tools, messages=messages)
tcs = []
for tc in r.choices[0].message.tool_calls:
tcs.append({
"id": tc.id,
"type": tc.type,
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments,
}
})
messages.append({"role": "assistant", "tool_calls": tcs})
# fake it for brevity
messages.append({"role": "tool", "tool_call_id": tcs[0]["id"], "content": "Product 1: Blue T-Shirt\nProduct 2: Red Hoody."})
for m in messages:
print(m)
print("-----------")
r = client.chat.completions.create(model=MODEL, messages=messages)
print(r.choices[0])
</code></pre>
<p>fails with</p>
<pre><code class="lang-auto">---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/utils/_http.py:409, in hf_raise_for_status(response, endpoint_name)
408 try:
--> 409 response.raise_for_status()
410 except HTTPError as e:
File ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/requests/models.py:1024, in Response.raise_for_status(self)
1023 if http_error_msg:
-> 1024 raise HTTPError(http_error_msg, response=self)
HTTPError: 500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/Qwen/Qwen3-235B-A22B/v1/chat/completions
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[107], line 52
50 print(m)
51 print("-----------")
---> 52 r = client.chat.completions.create(model=MODEL, messages=messages)
53 print(r.choices[0])
File ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/inference/_client.py:924, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream_options, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p, extra_body)
896 parameters = {
897 "model": payload_model,
898 "frequency_penalty": frequency_penalty,
(...) 915 **(extra_body or {}),
916 }
917 request_parameters = provider_helper.prepare_request(
918 inputs=messages,
919 parameters=parameters,
(...) 922 api_key=self.token,
923 )
--> 924 data = self._inner_post(request_parameters, stream=stream)
926 if stream:
927 return _stream_chat_completion_response(data) # type: ignore[arg-type]
File ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/inference/_client.py:280, in InferenceClient._inner_post(self, request_parameters, stream)
277 raise InferenceTimeoutError(f"Inference call timed out: {request_parameters.url}") from error # type: ignore
279 try:
--> 280 hf_raise_for_status(response)
281 return response.iter_lines() if stream else response.content
282 except HTTPError as error:
File ~/micromamba/envs/strauss_rag_202505/lib/python3.13/site-packages/huggingface_hub/utils/_http.py:482, in hf_raise_for_status(response, endpoint_name)
478 raise _format(HfHubHTTPError, message, response) from e
480 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
481 # as well (request id and/or server error message)
--> 482 raise _format(HfHubHTTPError, str(e), response) from e
HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://router.huggingface.co/hf-inference/models/Qwen/Qwen3-235B-A22B/v1/chat/completions (Request ID: Root=1-684c0e94-1b2fcc1112ce97d968f42b89;4a0857fe-92d3-4b59-977c-2c58fee78502)
</code></pre>
<p>Unfortunately, I can’t get a better reason than the 500 status code, and I’m not sure whether I am misusing the API somehow.</p>
|
<p>3 days later, this works. I assume the “internal server error” actually was an internal error after all <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
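<p>For anyone hitting the same transient 500s, a pragmatic mitigation is to retry the chat call with exponential backoff. A minimal sketch, assuming the client setup from the question (the retry policy itself is an assumption, not an official recommendation):</p>
<pre><code class="lang-auto">import os
import time

from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

def chat_with_retry(messages, model="Qwen/Qwen3-235B-A22B", retries=3, **kwargs):
    """Retry chat completions on transient 5xx errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model=model, messages=messages, **kwargs)
        except HfHubHTTPError as e:
            status = e.response.status_code if e.response is not None else None
            # Only retry server-side errors; re-raise 4xx and the final failure
            if status is not None and 500 <= status < 600 and attempt < retries - 1:
                time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
                continue
            raise
</code></pre>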
|
LoRA Finetuning RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
|
https://discuss.huggingface.co/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445
| 159,445
| 9
|
2025-06-16T06:41:50.936000Z
|
[
{
"id": 227646,
"name": "Benjamin Koch",
"username": "by-benj-k",
"avatar_template": "/user_avatar/discuss.huggingface.co/by-benj-k/{size}/49508_2.png",
"created_at": "2025-06-16T06:41:51.002Z",
"cooked": "<p>Hello everyone,<br>\nI am trying to fine-tune a Llama 3.1 8B Instruct Model using LoRA. I would like to use multiple GPUs, but I am getting the following error.</p>\n<pre><code class=\"lang-auto\">Traceback (most recent call last): \n File \"/home/user/s25/finetune_model_LoRA.py\", line 68, in <module> \n trainer.train() \n ~~~~~~~~~~~~~^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py\", line 2240, in train \n return inner_training_loop( \n args=args, \n ...<2 lines>... \n ignore_keys_for_eval=ignore_keys_for_eval, \n ) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py\", line 2555, in _inner_training_loop \n tr_loss_step = self.training_step(model, inputs, num_items_in_batch) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/trl/trainer/sft_trainer.py\", line 733, in training_step \n return super().training_step(*args, **kwargs) \n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py\", line 3745, in training_step \n loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/trl/trainer/sft_trainer.py\", line 687, in compute_loss \n (loss, outputs) = super().compute_loss( \n ~~~~~~~~~~~~~~~~~~~~^ \n model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \n ) \n ^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py\", line 3810, in compute_loss \n outputs = model(**inputs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl \n return self._call_impl(*args, **kwargs) \n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl \n return forward_call(*args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/utils/operations.py\", line 818, in forward \n return model_forward(*args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/utils/operations.py\", line 806, in __call__ \n return convert_to_fp32(self.model_forward(*args, **kwargs)) \n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/amp/autocast_mode.py\", line 44, in decorate_autocast \n return func(*args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/peft/peft_model.py\", line 1757, in forward \n return self.base_model( \n ~~~~~~~~~~~~~~~^ \n input_ids=input_ids, \n ^^^^^^^^^^^^^^^^^^^^ \n ...<6 lines>... 
\n **kwargs, \n ^^^^^^^^^ \n ) \n ^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl \n return self._call_impl(*args, **kwargs) \n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl \n return forward_call(*args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/peft/tuners/tuners_utils.py\", line 193, in forward \n return self.model.forward(*args, **kwargs) \n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/hooks.py\", line 175, in new_forward \n output = module._old_forward(*args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/utils/generic.py\", line 969, in wrapper\n output = func(self, *args, **kwargs) \n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/models/llama/modeling_llama.py\", line 708, in forward\n loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)\n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/loss/loss_utils.py\", line 64, in ForCausalLMLoss\n loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)\n File \"/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/loss/loss_utils.py\", line 38, in fixed_cross_entropy\n loss = loss / num_items_in_batch \n ~~~~~^~~~~~~~~~~~~~~~~~~~ \nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!\n</code></pre>\n<p>I use the following script.</p>\n<pre><code class=\"lang-auto\"># Imports\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, DataCollatorForLanguageModeling, BitsAndBytesConfig\nfrom peft import LoraConfig\nfrom huggingface_hub import login\nfrom datasets import load_dataset\nfrom dotenv import load_dotenv\nfrom trl import SFTTrainer, SFTConfig\nfrom os import getenv\nimport torch\n\n# Load environment variables\nload_dotenv()\n\n# Login to huggingface\nlogin(token=getenv(\"HUGGINGFACE_ACCESS_TOKEN\"))\n\n# Load bitsandbytes config\nbnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=\"float16\", bnb_4bit_use_double_quant=False)\n\n# Load the model and tokenizer corresponding to the model\nmodel_name = \"meta-llama/Llama-3.1-8B-Instruct\"\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name, quantization_config=bnb_config, device_map=\"auto\")\ntokenizer = AutoTokenizer.from_pretrained(model_name)\ntokenizer.pad_token = tokenizer.eos_token\n\n# Load the dataset\ndataset = load_dataset(\n \"json\", data_files=\"/home/user/s25/documents.jsonl\", split=\"train\")\n\n# Define tokenization function and tokenize the dataset\n\n\ndef tokenize(examples):\n inputs = tokenizer(examples[\"document\"])\n return inputs\n\n\ntokenized_dataset = dataset.map(\n tokenize, batched=True, remove_columns=[\"document\"])\n\n# Instantiate data collator\ndata_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)\n\n# Load LoRA configuration\npeft_config = LoraConfig(\n r=64, lora_alpha=16, lora_dropout=0, task_type=\"CAUSAL_LM\", target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"gate_proj\", 
\"up_proj\", \"down_proj\"])\n\n# Specify the training arguments\ntrainings_arguments = SFTConfig(output_dir=\"/data/projects/s25/Llama-3.1-8B-Instruct-lora-v6-1epochs\", save_strategy=\"steps\", save_steps=500, save_total_limit=1,\n per_device_train_batch_size=2, num_train_epochs=1, learning_rate=5e-4, weight_decay=0.01, logging_dir=\"/data/projects/s25/Llama-3.1-8B-Instruct-lora-v6-1epochs-log\", logging_steps=50, report_to=\"none\", fp16=True, bf16=False, dataset_text_field=None)\n\n# Set up trainer\ntrainer = SFTTrainer(model=model, args=trainings_arguments,\n train_dataset=tokenized_dataset, processing_class=tokenizer, data_collator=data_collator, peft_config=peft_config)\n\n# Train the model\ntrainer.train()\n</code></pre>\n<p>This issue is very similar to the following already existing posts:</p>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"147337\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/rohitdiwane/48/44042_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337\">RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!</a> <a class=\"badge-category__wrapper \" href=\"/c/transformers/9\"><span data-category-id=\"9\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Transformers library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Transformers</span></span></a>\n </div>\n <blockquote>\n RuntimeError Traceback (most recent call last) \nCell In[29], line 2 \n1 # Train model \n----> 2 trainer.train() \n4 # # Start training from the last checkpoint \n5 # trainer.train(resume_from_checkpoint=checkpoint) \nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/transformers/trainer.py:2245, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) \n2243 hf_hub_utils.enable_progress_bars() \n2244 else: \n → 2245 return i…\n </blockquote>\n</aside>\n\n<p>However, the solutions provided there did not help me solve the problem.</p>\n<p>Lastly, the versions of the most relevant packages (not necessarily enough to run the script, but I was character-limited for this post).</p>\n<pre><code class=\"lang-auto\">accelerate 1.7.0 pyhe01879c_0 conda-forge \nbitsandbytes 0.46.0 cuda126_py313hde49398_0 conda-forge \ndatasets 3.6.0 pyhd8ed1ab_0 conda-forge\nhuggingface_hub 0.33.0 pyhd8ed1ab_0 conda-forge \nnumpy 2.3.0 py313h17eae1a_0 conda-forge \npandas 2.3.0 py313ha87cce1_0 conda-forge \npip 25.1.1 pyh145f28c_0 conda-forge \npython 3.13.2 hf636f53_101_cp313 conda-forge \npython-dateutil 2.9.0.post0 pyhff2d567_1 conda-forge \npython-dotenv 1.1.0 pyh29332c3_1 conda-forge \npython-gil 3.13.5 h4df99d1_101 conda-forge \npython-tzdata 2025.2 pyhd8ed1ab_0 conda-forge \npython-xxhash 3.5.0 py313h536fd9c_2 conda-forge \npython_abi 3.13 7_cp313 conda-forge \npytorch 2.7.0 cuda126_generic_py313_h14c909a_200 conda-forge \ntokenizers 0.21.1 py313h1191936_0 conda-forge\ntorch 2.6.0+cu126 pypi_0 pypi\ntorchaudio 2.6.0+cu126 pypi_0 pypi\ntorchvision 0.21.0+cu126 pypi_0 pypi\ntransformers 4.52.4 pyhd8ed1ab_0 conda-forge\ntrl 0.18.2 pyhd8ed1ab_0 
conda-forge\n</code></pre>\n<p>I am very grateful for any support! Thank you very much!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-16T06:41:51.002Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 128,
"reads": 7,
"readers_count": 6,
"score": 586.4,
"yours": false,
"topic_id": 159445,
"topic_slug": "lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0",
"display_username": "Benjamin Koch",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337",
"internal": true,
"reflection": false,
"title": "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97059,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227649,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-16T07:00:48.906Z",
"cooked": "<p>If so, it may be an unresolved compatibility issue between accelerate and bitsandbytes?</p><aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"150275\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/t/3da27b/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/bitsandbytes-conflict-with-accelerate/150275\">BitsandBytes conflict with Accelerate</a> <a class=\"badge-category__wrapper \" href=\"/c/accelerate/18\"><span data-category-id=\"18\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Accelerate library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Accelerate</span></span></a>\n </div>\n <blockquote>\n I’m running inference on a <a href=\"https://huggingface.co/openvla/openvla-7b\">custom VLM derived model</a>. Inference works fine when using the weights in their bfloat16 precision. However, when I try defining a BitsandBytes config, I receive errors that I suspect is due to conflicts between BitsandBytes and Accelerate, where Accelerate and BitsandBytes are both trying to set the compute device and hence generating the following stack trace. \nTraceback (most recent call last):\n File \"/home/tyr/RobotAI/openvla/scripts/extern/verify_prismatic.py\", l…\n </blockquote>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"150685\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/s/b2d939/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/multi-gpu-inference-llama-3-2-vision-with-qlora/150685\">Multi-gpu inference llama-3.2 vision with QLoRA</a> <a class=\"badge-category__wrapper \" href=\"/c/accelerate/18\"><span data-category-id=\"18\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Accelerate library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Accelerate</span></span></a>\n </div>\n <blockquote>\n Hello <img width=\"20\" height=\"20\" src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\"slight_smile\" alt=\"slight_smile\" class=\"emoji\"> \nAfter fine-tuning meta-llama/Llama-3.2-11B-Vision-Instruct I run into a weird error while running inference with multi-gpu. \nThis is how I loads the model: \nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=\"bfloat16\",\n bnb_4bit_use_double_quant=True,\n bnb_4bit_quant_storage='bfloat16'\n)\n\nmodel = MllamaForConditionalGeneration.from_pretrained(\n model_path_or_name,\n quantization_config=bnb_config,\n …\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-16T07:00:48.906Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 7,
"readers_count": 6,
"score": 66.4,
"yours": false,
"topic_id": 159445,
"topic_slug": "lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/bitsandbytes-conflict-with-accelerate/150275",
"internal": true,
"reflection": false,
"title": "BitsandBytes conflict with Accelerate",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/multi-gpu-inference-llama-3-2-vision-with-qlora/150685",
"internal": true,
"reflection": false,
"title": "Multi-gpu inference llama-3.2 vision with QLoRA",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227650,
"name": "Benjamin Koch",
"username": "by-benj-k",
"avatar_template": "/user_avatar/discuss.huggingface.co/by-benj-k/{size}/49508_2.png",
"created_at": "2025-06-16T07:22:17.905Z",
"cooked": "<p>Thanks for the information, however, I have tried running the script without the bitsandbytes configuration (and also with the bitsandbytes package removed) by just utilizing more GPUs, however the error seems to persist.</p>\n<p>So essentially by simply loading the model as follows:</p>\n<pre><code class=\"lang-auto\">model_name = \"meta-llama/Llama-3.1-8B-Instruct\"\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name, device_map=\"auto\")\ntokenizer = AutoTokenizer.from_pretrained(model_name)\ntokenizer.pad_token = tokenizer.eos_token\n</code></pre>\n<p>(And by the way launching the script with: CUDA_VISIBLE_DEVICES=0,1 python finetune_model_LoRA.py)</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-16T07:26:23.606Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 6,
"readers_count": 5,
"score": 26.2,
"yours": false,
"topic_id": 159445,
"topic_slug": "lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0",
"display_username": "Benjamin Koch",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97059,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227711,
"name": "Benjamin Koch",
"username": "by-benj-k",
"avatar_template": "/user_avatar/discuss.huggingface.co/by-benj-k/{size}/49508_2.png",
"created_at": "2025-06-16T09:44:18.325Z",
"cooked": "<p>UPDATE: at least for now the problem seems to be fixed. I downgraded the transformers library to version 4.49.0, used the transfomers.Trainer instead of the SFTTrainer and modified the loading of the model to the following.</p>\n<pre><code class=\"lang-auto\"># Load bitsandbytes config\nbnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=\"float16\", bnb_4bit_use_double_quant=False)\n\n# Load LoRA configuration\npeft_config = LoraConfig(\n r=64, lora_alpha=16, lora_dropout=0, task_type=\"CAUSAL_LM\", target_modules=[\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\"])\n\n# Load the model and prepare it for peft finetuning\nmodel_name = \"meta-llama/Llama-3.1-8B-Instruct\"\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name, quantization_config=bnb_config, device_map=\"auto\")\n\nmodel = prepare_model_for_kbit_training(model)\nmodel = get_peft_model(model, peft_config)\n</code></pre>\n<p>Maybe this will help someone in the future!</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-16T09:44:18.325Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 5,
"readers_count": 4,
"score": 41,
"yours": false,
"topic_id": 159445,
"topic_slug": "lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0",
"display_username": "Benjamin Koch",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 97059,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445/4",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227832,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-16T21:45:04.711Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-16T21:45:04.711Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 5.8,
"yours": false,
"topic_id": 159445,
"topic_slug": "lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/lora-finetuning-runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-1-and-cuda-0/159445/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello everyone,<br>
I am trying to fine-tune a Llama 3.1 8B Instruct model using LoRA. I would like to use multiple GPUs, but I am getting the following error.</p>
<pre><code class="lang-auto">Traceback (most recent call last):
File "/home/user/s25/finetune_model_LoRA.py", line 68, in <module>
trainer.train()
~~~~~~~~~~~~~^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py", line 2240, in train
return inner_training_loop(
args=args,
...<2 lines>...
ignore_keys_for_eval=ignore_keys_for_eval,
)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 733, in training_step
return super().training_step(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py", line 3745, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/trl/trainer/sft_trainer.py", line 687, in compute_loss
(loss, outputs) = super().compute_loss(
~~~~~~~~~~~~~~~~~~~~^
model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/trainer.py", line 3810, in compute_loss
outputs = model(**inputs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/utils/operations.py", line 818, in forward
return model_forward(*args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/utils/operations.py", line 806, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/peft/peft_model.py", line 1757, in forward
return self.base_model(
~~~~~~~~~~~~~~~^
input_ids=input_ids,
^^^^^^^^^^^^^^^^^^^^
...<6 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/peft/tuners/tuners_utils.py", line 193, in forward
return self.model.forward(*args, **kwargs)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/accelerate/hooks.py", line 175, in new_forward
output = module._old_forward(*args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/models/llama/modeling_llama.py", line 708, in forward
loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/loss/loss_utils.py", line 64, in ForCausalLMLoss
loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
File "/local/home/user/miniforge3/envs/project/lib/python3.13/site-packages/transformers/loss/loss_utils.py", line 38, in fixed_cross_entropy
loss = loss / num_items_in_batch
~~~~~^~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
</code></pre>
<p>I use the following script.</p>
<pre><code class="lang-auto"># Imports
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, DataCollatorForLanguageModeling, BitsAndBytesConfig
from peft import LoraConfig
from huggingface_hub import login
from datasets import load_dataset
from dotenv import load_dotenv
from trl import SFTTrainer, SFTConfig
from os import getenv
import torch
# Load environment variables
load_dotenv()
# Login to huggingface
login(token=getenv("HUGGINGFACE_ACCESS_TOKEN"))
# Load bitsandbytes config
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16", bnb_4bit_use_double_quant=False)
# Load the model and tokenizer corresponding to the model
model_name = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# Load the dataset
dataset = load_dataset(
"json", data_files="/home/user/s25/documents.jsonl", split="train")
# Define tokenization function and tokenize the dataset
def tokenize(examples):
inputs = tokenizer(examples["document"])
return inputs
tokenized_dataset = dataset.map(
tokenize, batched=True, remove_columns=["document"])
# Instantiate data collator
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
# Load LoRA configuration
peft_config = LoraConfig(
r=64, lora_alpha=16, lora_dropout=0, task_type="CAUSAL_LM", target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"])
# Specify the training arguments
trainings_arguments = SFTConfig(output_dir="/data/projects/s25/Llama-3.1-8B-Instruct-lora-v6-1epochs", save_strategy="steps", save_steps=500, save_total_limit=1,
per_device_train_batch_size=2, num_train_epochs=1, learning_rate=5e-4, weight_decay=0.01, logging_dir="/data/projects/s25/Llama-3.1-8B-Instruct-lora-v6-1epochs-log", logging_steps=50, report_to="none", fp16=True, bf16=False, dataset_text_field=None)
# Set up trainer
trainer = SFTTrainer(model=model, args=trainings_arguments,
train_dataset=tokenized_dataset, processing_class=tokenizer, data_collator=data_collator, peft_config=peft_config)
# Train the model
trainer.train()
</code></pre>
<p>This issue is very similar to the following existing post: <a href="https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337">RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!</a></p>
<p>However, the solutions provided there did not help me solve the problem.</p>
<p>Lastly, here are the versions of the most relevant packages (not necessarily everything needed to run the script; I was character-limited for this post).</p>
<pre><code class="lang-auto">accelerate 1.7.0 pyhe01879c_0 conda-forge
bitsandbytes 0.46.0 cuda126_py313hde49398_0 conda-forge
datasets 3.6.0 pyhd8ed1ab_0 conda-forge
huggingface_hub 0.33.0 pyhd8ed1ab_0 conda-forge
numpy 2.3.0 py313h17eae1a_0 conda-forge
pandas 2.3.0 py313ha87cce1_0 conda-forge
pip 25.1.1 pyh145f28c_0 conda-forge
python 3.13.2 hf636f53_101_cp313 conda-forge
python-dateutil 2.9.0.post0 pyhff2d567_1 conda-forge
python-dotenv 1.1.0 pyh29332c3_1 conda-forge
python-gil 3.13.5 h4df99d1_101 conda-forge
python-tzdata 2025.2 pyhd8ed1ab_0 conda-forge
python-xxhash 3.5.0 py313h536fd9c_2 conda-forge
python_abi 3.13 7_cp313 conda-forge
pytorch 2.7.0 cuda126_generic_py313_h14c909a_200 conda-forge
tokenizers 0.21.1 py313h1191936_0 conda-forge
torch 2.6.0+cu126 pypi_0 pypi
torchaudio 2.6.0+cu126 pypi_0 pypi
torchvision 0.21.0+cu126 pypi_0 pypi
transformers 4.52.4 pyhd8ed1ab_0 conda-forge
trl 0.18.2 pyhd8ed1ab_0 conda-forge
</code></pre>
<p>I am very grateful for any support! Thank you very much!</p>
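<p>(A mitigation that is often suggested for this particular error, though not verified in this thread, is to keep the whole quantized model on a single GPU instead of letting <code>device_map="auto"</code> shard it across devices, so that the loss tensors all end up on one device:)</p>
<pre><code class="lang-auto"># Often-suggested variant (assumption: a single GPU can hold the 4-bit model):
# pin every module to GPU 0 instead of sharding with device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map={"": 0})
</code></pre>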
|
<p>UPDATE: at least for now, the problem seems to be fixed. I downgraded the transformers library to version 4.49.0, used <code>transformers.Trainer</code> instead of <code>SFTTrainer</code>, and modified the model loading as follows.</p>
<pre><code class="lang-auto"># Load bitsandbytes config
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="float16", bnb_4bit_use_double_quant=False)
# Load LoRA configuration
peft_config = LoraConfig(
r=64, lora_alpha=16, lora_dropout=0, task_type="CAUSAL_LM", target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"])
# Load the model and prepare it for peft finetuning
model_name = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
</code></pre>
<p>Maybe this will help someone in the future!</p>
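<p>Putting it together, the trainer setup then changes from <code>SFTTrainer</code> to the plain <code>transformers.Trainer</code> (a minimal sketch, reusing the argument values from the original script; the PEFT wrapping is already done by <code>get_peft_model</code>, so no <code>peft_config</code> is passed):</p>
<pre><code class="lang-auto">from transformers import Trainer, TrainingArguments

training_arguments = TrainingArguments(
    output_dir="/data/projects/s25/Llama-3.1-8B-Instruct-lora-v6-1epochs",
    save_strategy="steps", save_steps=500, save_total_limit=1,
    per_device_train_batch_size=2, num_train_epochs=1,
    learning_rate=5e-4, weight_decay=0.01,
    logging_steps=50, report_to="none", fp16=True)
trainer = Trainer(model=model, args=training_arguments,
                  train_dataset=tokenized_dataset, data_collator=data_collator)
trainer.train()
</code></pre>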
|
ValueError: Incompatible safetensors file. File metadata is not [‘pt’, ‘tf’, ‘flax’, ‘mlx’] but None
|
https://discuss.huggingface.co/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226
| 159,226
| 13
|
2025-06-14T05:06:59.907000Z
|
[
{
"id": 227369,
"name": "Angkul",
"username": "angkul07",
"avatar_template": "/user_avatar/discuss.huggingface.co/angkul07/{size}/49392_2.png",
"created_at": "2025-06-14T05:06:59.977Z",
"cooked": "<p>Hi experts,</p>\n<p>I have trained a custom LLMs from scratch using pytorch and saved the model checkpoint. According to documentation, for custom pytorch models, I used the <code>PyTorchModelHubMixin</code> in my model class, to make it compatible. Now when I push it to hub using the following code:</p>\n<pre><code class=\"lang-auto\">GPT_CONFIG = {\n \"model_type\": \"gpt\",\n \"vocab_size\": 26000,\n \"context_length\": 256,\n \"emb_dim\": 768,\n \"n_heads\": 16,\n \"n_layers\": 12,\n \"drop_rate\": 0.2,\n \"qkv_bias\": False,\n \"flash\": True,\n}\n\nfrom model import GPTModel\nimport torch\n\nmodel = GPTModel(GPT_CONFIG)\n\ncheckpoint = torch.load(\"/teamspace/studios/this_studio/model/gpt_model_checkpoint.pth\", map_location=\"cpu\")\nmodel.load_state_dict(checkpoint['model_state_dict'])\n\nmodel.save_pretrained(\n save_directory=\"local-save-dir2\",\n config=GPT_CONFIG,\n)\n\nrepo_id = \"angkul07/llm_100M\"\n\nmodel.push_to_hub(\n repo_id=repo_id,\n commit_message=\"Initial commit of GPTModel checkpoint\",\n private=False\n)\n</code></pre>\n<p>When I try to load it using the <code>AutoModel</code>:</p>\n<pre><code class=\"lang-auto\">model = AutoModel.from_pretrained(\"angkul07/my-awesome-model\")\n</code></pre>\n<p>I get the following Value error:</p>\n<pre><code class=\"lang-auto\">ValueError: Incompatible safetensors file. File metadata is not ['pt', 'tf', 'flax', 'mlx'] but None\n```.\n\n\nI have tried looking for it on the internet but its no help. So, how can I fix it? How can I add a metadata?</code></pre>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-14T05:15:41.235Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 109,
"reads": 9,
"readers_count": 8,
"score": 541.8,
"yours": false,
"topic_id": 159226,
"topic_slug": "valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none",
"display_username": "Angkul",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96913,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227374,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-14T07:13:18.284Z",
"cooked": "<p>This is a very rare error, but it may just be that there is no metadata.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/5/85223a48e16db3ec22952bf78b2616967ed5f074_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"EAEDEF\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2\" target=\"_blank\" rel=\"noopener\">SeaLLMs/SeaLLM-7B-Hybrid · Seems like metadata is not in the safetensors files</a></h3>\n\n <p>Running AutoModel.from_pretrained(\"SeaLLMs/SeaLLM-7B-Hybrid\") gets the following error messages:</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/ml-explore/mlx/issues/743\">\n <header class=\"source\">\n\n <a href=\"https://github.com/ml-explore/mlx/issues/743\" target=\"_blank\" rel=\"noopener\">github.com/ml-explore/mlx</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/ml-explore/mlx/issues/743\" target=\"_blank\" rel=\"noopener\">[BUG] Saved safetensors are missing metadata format pt and cannot be loaded through `transformers` library</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-02-26\" data-time=\"13:37:02\" data-timezone=\"UTC\">01:37PM - 26 Feb 24 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-02-26\" data-time=\"23:18:23\" data-timezone=\"UTC\">11:18PM - 26 Feb 24 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/alexweberk\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/7/87eaccdcdbf2fe2a3e7ddaa052fa38d55321ae91.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"674E46\">\n alexweberk\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n enhancement\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">**Issue description**\nWhen uploading safetensors files as part of the `mlx_lm.f<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">use` step, all the weights files with `.safetensors` extensions are missing the optional metadata for 
format attribute. As a result, the uploaded weights cannot be loaded when used by `transformers` library users. (`mlx` loads them without a problem.)\n\n**To Reproduce**\n\nRun LoRA fine-tuning, then run fusing script:\n\n```bash\n!python -m mlx_lm.fuse \\\n --model google/gemma-7b-it \\\n --adapter-file checkpoints/600_adapters.npz \\\n --upload-repo alexweberk/gemma-7b-it-trismegistus \\\n --hf-path google/gemma-7b-it\n```\n\nAfter the upload, I tried running:\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nrepo_id = \"alexweberk/gemma-7b-it-trismegistus\"\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nmodel = AutoModelForCausalLM.from_pretrained(repo_id)\nmodel.to(\"mps\")\n\ninput_text = format_prompt(system_prompt, question)\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").to(\"mps\")\n\noutputs = model.generate(\n **input_ids,\n max_new_tokens=256,\n)\nprint(tokenizer.decode(outputs[0]))\n```\n\nWhich gives the full error message below:\n\n```\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\nCell In[14], [line 7](vscode-notebook-cell:?execution_count=14&line=7)\n [4](vscode-notebook-cell:?execution_count=14&line=4) repo_id = \"alexweberk/gemma-7b-it-trismegistus\"\n [6](vscode-notebook-cell:?execution_count=14&line=6) tokenizer = AutoTokenizer.from_pretrained(repo_id)\n----> [7](vscode-notebook-cell:?execution_count=14&line=7) model = AutoModelForCausalLM.from_pretrained(repo_id)\n [8](vscode-notebook-cell:?execution_count=14&line=8) model.to('mps')\n [10](vscode-notebook-cell:?execution_count=14&line=10) input_text = format_prompt(system_prompt, question)\n\nFile [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561), in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\n [559](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:559) elif type(config) in cls._model_mapping.keys():\n [560](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:560) model_class = _get_model_class(config, cls._model_mapping)\n--> [561](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561) return model_class.from_pretrained(\n [562](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:562) pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs\n [563](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:563) )\n 
[564](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:564) raise ValueError(\n [565](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:565) f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\n [566](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:566) f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"\n [567](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:567) )\n\nFile [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3502](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3502), in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)\n [3493](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3493) if dtype_orig is not None:\n [3494](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3494) torch.set_default_dtype(dtype_orig)\n [3495](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3495) (\n [3496](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3496) model,\n [3497](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3497) missing_keys,\n [3498](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3498) unexpected_keys,\n [3499](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3499) mismatched_keys,\n [3500](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3500) offload_index,\n 
[3501](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3501) error_msgs,\n-> [3502](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3502) ) = cls._load_pretrained_model(\n [3503](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3503) model,\n [3504](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3504) state_dict,\n [3505](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3505) loaded_state_dict_keys, # XXX: rename?\n [3506](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3506) resolved_archive_file,\n [3507](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3507) pretrained_model_name_or_path,\n [3508](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3508) ignore_mismatched_sizes=ignore_mismatched_sizes,\n [3509](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3509) sharded_metadata=sharded_metadata,\n [3510](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3510) _fast_init=_fast_init,\n [3511](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3511) low_cpu_mem_usage=low_cpu_mem_usage,\n [3512](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3512) device_map=device_map,\n [3513](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3513) offload_folder=offload_folder,\n [3514](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3514) offload_state_dict=offload_state_dict,\n 
[3515](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3515) dtype=torch_dtype,\n [3516](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3516) hf_quantizer=hf_quantizer,\n [3517](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3517) keep_in_fp32_modules=keep_in_fp32_modules,\n [3518](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3518) )\n [3520](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3520) # make sure token embedding weights are still tied if needed\n [3521](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3521) model.tie_weights()\n\nFile [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3903](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3903), in PreTrainedModel._load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, hf_quantizer, keep_in_fp32_modules)\n [3901](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3901) if shard_file in disk_only_shard_files:\n [3902](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3902) continue\n-> [3903](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3903) state_dict = load_state_dict(shard_file)\n [3905](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3905) # Mistmatched keys contains tuples key/shape1/shape2 of weights in the checkpoint that have a shape not\n [3906](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3906) # matching the weights in the model.\n 
[3907](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3907) mismatched_keys += _find_mismatched_keys(\n [3908](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3908) state_dict,\n [3909](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3909) model_state_dict,\n (...)\n [3913](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3913) ignore_mismatched_sizes,\n [3914](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3914) )\n\nFile [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:507](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:507), in load_state_dict(checkpoint_file)\n [505](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:505) with safe_open(checkpoint_file, framework=\"pt\") as f:\n [506](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:506) metadata = f.metadata()\n--> [507](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:507) if metadata.get(\"format\") not in [\"pt\", \"tf\", \"flax\"]:\n [508](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:508) raise OSError(\n [509](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:509) f\"The safetensors archive passed at {checkpoint_file} does not contain the valid metadata. 
Make sure \"\n [510](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:510) \"you save your model with the `save_pretrained` method.\"\n [511](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:511) )\n [512](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:512) return safe_load_file(checkpoint_file)\n\nAttributeError: 'NoneType' object has no attribute 'get'\n```\n\nThe error seems to stem from the safetensors files missing the metadata for {\"format\": \"pt\"} when they are loaded by `AutoModelForCausalLM.from_pretrained()`.\n\nA quick work around was to separately resave the files one by one using the below script for each of the safetensors files, and then uploading them to Huggingface.\n\n```\nfrom safetensors import safe_open\nfrom safetensors.torch import save_file\n\nsafetensor_path = \"lora_fused_model/model-00001-of-00004.safetensors\"\n# ...\nfname, ext = safetensor_path.split(\"/\")[-1].split(\".\")\ntensors = dict()\nwith safe_open(safetensor_path, framework=\"pt\", device=\"cpu\") as f:\n for key in f.keys():\n tensors[key] = f.get_tensor(key)\n\nsave_file(tensors, f\"lora_fused_model/{fname}-with-format.{ext}\", metadata={\"format\": \"pt\"})\n```\n\nHowever, it would be nice to be able to quickly upload and have the model available for a wider audience more easily.\n\nThe source code led me to `mx.save_safetensors()` which led me to file the issue on this repo.\nhttps://github.com/ml-explore/mlx-examples/blob/47dd6bd17f3cc7ef95672ea16e443e58ce5eb1bf/llms/mlx_lm/utils.py#L479\n\n\n**Expected behavior**\nSince there are many `transformers` users in the ecosystem, it would be beneficial to be able to seamlessly train and upload model weights to Huggingface and have other users use them through `transformers`.\n\n**Desktop (please complete the following information):**\n - OS Version: [e.g. MacOS 14.3]\n - MacBook Pro M3 Max 128GB\n - mlx==0.4.0\n - mlx-lm==0.0.13\n - transformers==4.38.1</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-14T07:13:18.284Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 8,
"readers_count": 7,
"score": 26.6,
"yours": false,
"topic_id": 159226,
"topic_slug": "valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/ml-explore/mlx/issues/743",
"internal": false,
"reflection": false,
"title": "[BUG] Saved safetensors are missing metadata format pt and cannot be loaded through `transformers` library · Issue #743 · ml-explore/mlx · GitHub",
"clicks": 15
},
{
"url": "https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2",
"internal": false,
"reflection": false,
"title": "SeaLLMs/SeaLLM-7B-Hybrid · Seems like metadata is not in the safetensors files",
"clicks": 9
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227383,
"name": "Angkul",
"username": "angkul07",
"avatar_template": "/user_avatar/discuss.huggingface.co/angkul07/{size}/49392_2.png",
"created_at": "2025-06-14T08:09:24.679Z",
"cooked": "<p>hey <a class=\"mention\" href=\"/u/john6666\">@John6666</a>, thanks this works like a charm. Thank you so much.</p>\n<p>Btw, I am facing one more issue, I have a custom trained sentencepiece tokenizer. So, two files <code>tokenizer.model</code> and <code>tokenizer.vocab</code>. Now, I want to convert them into the AutoTokenizer format to match the compatibility. I used the following code to convert:</p>\n<pre><code class=\"lang-auto\">from transformers import PreTrainedTokenizerFast\n\ntokenizer = PreTrainedTokenizerFast(\n tokenizer_file=\"/teamspace/studios/this_studio/model/tokenizer.model\",\n model_max_length=256, \n bos_token=\"<s>\",\n eos_token=\"</s>\",\n unk_token=\"<unk>\",\n pad_token=\"<pad>\",\n mask_token=\"<mask>\" \n)\n\ntokenizer.save_pretrained(\"my-tokenizer\")\n</code></pre>\n<p>But I get the following error:</p>\n<pre><code class=\"lang-auto\">Exception: stream did not contain valid UTF-8\n</code></pre>\n<p>Do you have any idea how to convert this sentencepiece tokenizer to AutoTokenizer format? Thanks.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-14T08:09:24.679Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 8,
"readers_count": 7,
"score": 21.6,
"yours": false,
"topic_id": 159226,
"topic_slug": "valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none",
"display_username": "Angkul",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96913,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227386,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-14T08:23:45.928Z",
"cooked": "<p>Maybe it’s a character encoding issue?</p>\n<p>For example, Windows 10 Notepad saves files in UTF-16, so comments that aren’t in English may cause errors…<br>\nThis probably won’t happen if you’re using VSCode, and if you’re using a Colab environment, the cause is likely something else.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/tokenizers/issues/282\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/tokenizers/issues/282\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/tokenizers</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/tokenizers/issues/282\" target=\"_blank\" rel=\"noopener\">Exception: stream did not contain valid UTF-8</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2020-05-28\" data-time=\"08:54:32\" data-timezone=\"UTC\">08:54AM - 28 May 20 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2020-06-29\" data-time=\"16:29:13\" data-timezone=\"UTC\">04:29PM - 29 Jun 20 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/phamdinhkhanh\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/4/a/4ad6043da8583a2ff69e4a3e17813a350e3bd551.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"928B74\">\n phamdinhkhanh\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">I get bug when tokenize ByteLevelBPETokenizer() for diacritic language in utf-16<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\"> such as 'Viet Nam' language. Bellow are my code initialize tokenizer.\n\n\n```\n%%time \nfrom pathlib import Path\n\nfrom tokenizers import ByteLevelBPETokenizer\n\npaths = ['file1.txt', 'file2.txt']\nprint(paths)\n# Initialize a tokenizer\ntokenizer = ByteLevelBPETokenizer()\n# Customize training\ntokenizer.train(files=paths, vocab_size=52000, min_frequency=2, special_tokens=[\n \"<s>\",\n \"<pad>\",\n \"</s>\",\n \"<unk>\",\n \"<mask>\",\n])\n```\n\nAnd bug log:\n\n> <ipython-input-78-66e6ec31bd7b> in train(self, files, vocab_size, min_frequency, show_progress, special_tokens)\n> 90 files = [files]\n> 91 print('files list: \\n', files)\n> ---> 92 self._tokenizer.train(trainer, files)\n> \n> Exception: stream did not contain valid UTF-8\n\nmy `file1.txt` and `file2.txt` contain words like:\n\n`xin chào tôi đến từ Việt Nam, tôi gặp vấn đề với tokenizer.`\n\n\nI try to find what self._tokenizer.train() does to fix it myself but project code are complicated. 
Can you explain what i was wrong?</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-14T08:23:45.928Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 159226,
"topic_slug": "valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/tokenizers/issues/282",
"internal": false,
"reflection": false,
"title": "Exception: stream did not contain valid UTF-8 · Issue #282 · huggingface/tokenizers · GitHub",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227449,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-14T20:24:08.080Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-14T20:24:08.080Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 3,
"readers_count": 2,
"score": 10.6,
"yours": false,
"topic_id": 159226,
"topic_slug": "valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/valueerror-incompatible-safetensors-file-file-metadata-is-not-pt-tf-flax-mlx-but-none/159226/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi experts,</p>
<p>I have trained a custom LLM from scratch using PyTorch and saved the model checkpoint. Following the documentation for custom PyTorch models, I used the <code>PyTorchModelHubMixin</code> in my model class to make it Hub-compatible. I then push it to the Hub with the following code:</p>
<pre><code class="lang-auto">GPT_CONFIG = {
"model_type": "gpt",
"vocab_size": 26000,
"context_length": 256,
"emb_dim": 768,
"n_heads": 16,
"n_layers": 12,
"drop_rate": 0.2,
"qkv_bias": False,
"flash": True,
}
from model import GPTModel
import torch
model = GPTModel(GPT_CONFIG)
checkpoint = torch.load("/teamspace/studios/this_studio/model/gpt_model_checkpoint.pth", map_location="cpu")
model.load_state_dict(checkpoint['model_state_dict'])
model.save_pretrained(
save_directory="local-save-dir2",
config=GPT_CONFIG,
)
repo_id = "angkul07/llm_100M"
model.push_to_hub(
repo_id=repo_id,
commit_message="Initial commit of GPTModel checkpoint",
private=False
)
</code></pre>
<p>When I try to load it with <code>AutoModel</code>:</p>
<pre><code class="lang-auto">model = AutoModel.from_pretrained("angkul07/my-awesome-model")
</code></pre>
<p>I get the following <code>ValueError</code>:</p>
<pre><code class="lang-auto">ValueError: Incompatible safetensors file. File metadata is not ['pt', 'tf', 'flax', 'mlx'] but None
</code></pre>
<p>I have tried looking this up on the internet, but it was no help. So, how can I fix it? How can I add the metadata?</p>
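<p>(As an aside: models saved via <code>PyTorchModelHubMixin</code> are normally reloaded through the mixin's own <code>from_pretrained</code> on the custom class, rather than through <code>AutoModel</code>. A minimal sketch, assuming the class definition is importable locally:)</p>
<pre><code class="lang-auto">from model import GPTModel  # the same custom class used for training

# PyTorchModelHubMixin adds from_pretrained to the subclass itself
model = GPTModel.from_pretrained("angkul07/llm_100M")
</code></pre>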
|
<p>This is a very rare error, but it may just be that there is no metadata.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2">
<header class="source">
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/5/85223a48e16db3ec22952bf78b2616967ed5f074_2_690x372.png" class="thumbnail" data-dominant-color="EAEDEF" width="690" height="372"></div>
<h3><a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2" target="_blank" rel="noopener">SeaLLMs/SeaLLM-7B-Hybrid · Seems like metadata is not in the safetensors files</a></h3>
<p>Running AutoModel.from_pretrained("SeaLLMs/SeaLLM-7B-Hybrid") gets the following error messages:</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
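<p>For reference, the workaround described in the issue linked below boils down to re-saving each <code>.safetensors</code> file with the <code>format</code> key set in its metadata. A minimal sketch (the file path is a placeholder):</p>
<pre><code class="lang-auto">from safetensors import safe_open
from safetensors.torch import save_file

path = "model.safetensors"  # placeholder: one shard of the uploaded checkpoint
tensors = {}
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        tensors[key] = f.get_tensor(key)

# Re-save with the metadata transformers expects
save_file(tensors, path, metadata={"format": "pt"})
</code></pre>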
<aside class="onebox githubissue" data-onebox-src="https://github.com/ml-explore/mlx/issues/743">
<header class="source">
<a href="https://github.com/ml-explore/mlx/issues/743" target="_blank" rel="noopener">github.com/ml-explore/mlx</a>
</header>
<article class="onebox-body">
<div class="github-row">
<div class="github-icon-container" title="Issue" data-github-private-repo="false">
<svg width="60" height="60" class="github-icon" viewBox="0 0 14 16" aria-hidden="true"><path fill-rule="evenodd" d="M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z"></path></svg>
</div>
<div class="github-info-container">
<h4>
<a href="https://github.com/ml-explore/mlx/issues/743" target="_blank" rel="noopener">[BUG] Saved safetensors are missing metadata format pt and cannot be loaded through `transformers` library</a>
</h4>
<div class="github-info">
<div class="date">
opened <span class="discourse-local-date" data-format="ll" data-date="2024-02-26" data-time="13:37:02" data-timezone="UTC">01:37PM - 26 Feb 24 UTC</span>
</div>
<div class="date">
closed <span class="discourse-local-date" data-format="ll" data-date="2024-02-26" data-time="23:18:23" data-timezone="UTC">11:18PM - 26 Feb 24 UTC</span>
</div>
<div class="user">
<a href="https://github.com/alexweberk" target="_blank" rel="noopener">
<img alt="" src="https://us1.discourse-cdn.com/hellohellohello/original/3X/8/7/87eaccdcdbf2fe2a3e7ddaa052fa38d55321ae91.jpeg" class="onebox-avatar-inline" width="20" height="20" data-dominant-color="674E46">
alexweberk
</a>
</div>
</div>
<div class="labels">
<span style="display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;">
enhancement
</span>
</div>
</div>
</div>
<div class="github-row">
<p class="github-body-container">**Issue description**
When uploading safetensors files as part of the `mlx_lm.f<span class="show-more-container"><a href="" rel="noopener" class="show-more">…</a></span><span class="excerpt hidden">use` step, all the weights files with `.safetensors` extensions are missing the optional metadata for format attribute. As a result, the uploaded weights cannot be loaded when used by `transformers` library users. (`mlx` loads them without a problem.)
**To Reproduce**
Run LoRA fine-tuning, then run fusing script:
```bash
!python -m mlx_lm.fuse \
--model google/gemma-7b-it \
--adapter-file checkpoints/600_adapters.npz \
--upload-repo alexweberk/gemma-7b-it-trismegistus \
--hf-path google/gemma-7b-it
```
After the upload, I tried running:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
repo_id = "alexweberk/gemma-7b-it-trismegistus"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.to("mps")
input_text = format_prompt(system_prompt, question)
input_ids = tokenizer(input_text, return_tensors="pt").to("mps")
outputs = model.generate(
**input_ids,
max_new_tokens=256,
)
print(tokenizer.decode(outputs[0]))
```
Which gives the full error message below:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[14], [line 7](vscode-notebook-cell:?execution_count=14&line=7)
[4](vscode-notebook-cell:?execution_count=14&line=4) repo_id = "alexweberk/gemma-7b-it-trismegistus"
[6](vscode-notebook-cell:?execution_count=14&line=6) tokenizer = AutoTokenizer.from_pretrained(repo_id)
----> [7](vscode-notebook-cell:?execution_count=14&line=7) model = AutoModelForCausalLM.from_pretrained(repo_id)
[8](vscode-notebook-cell:?execution_count=14&line=8) model.to('mps')
[10](vscode-notebook-cell:?execution_count=14&line=10) input_text = format_prompt(system_prompt, question)
File [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561), in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
[559](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:559) elif type(config) in cls._model_mapping.keys():
[560](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:560) model_class = _get_model_class(config, cls._model_mapping)
--> [561](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:561) return model_class.from_pretrained(
[562](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:562) pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
[563](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:563) )
[564](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:564) raise ValueError(
[565](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:565) f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
[566](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:566) f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
[567](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:567) )
File [~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3502](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3502), in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
[3493](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3493) if dtype_orig is not None:
[3494](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3494) torch.set_default_dtype(dtype_orig)
[3495](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3495) (
[3496](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3496) model,
[3497](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3497) missing_keys,
[3498](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3498) unexpected_keys,
[3499](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3499) mismatched_keys,
[3500](https://file+.vscode-resource.vscode-cdn.net/Users/alexishida/Projects/07_libraries/playing-with-llms/notebooks/mlx_gemma/~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3500) offload_index,
   3501         error_msgs,
-> 3502     ) = cls._load_pretrained_model(
   3503         model,
   3504         state_dict,
   3505         loaded_state_dict_keys,  # XXX: rename?
   3506         resolved_archive_file,
   3507         pretrained_model_name_or_path,
   3508         ignore_mismatched_sizes=ignore_mismatched_sizes,
   3509         sharded_metadata=sharded_metadata,
   3510         _fast_init=_fast_init,
   3511         low_cpu_mem_usage=low_cpu_mem_usage,
   3512         device_map=device_map,
   3513         offload_folder=offload_folder,
   3514         offload_state_dict=offload_state_dict,
   3515         dtype=torch_dtype,
   3516         hf_quantizer=hf_quantizer,
   3517         keep_in_fp32_modules=keep_in_fp32_modules,
   3518     )
   3520     # make sure token embedding weights are still tied if needed
   3521     model.tie_weights()

File ~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:3903, in PreTrainedModel._load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, hf_quantizer, keep_in_fp32_modules)
   3901 if shard_file in disk_only_shard_files:
   3902     continue
-> 3903 state_dict = load_state_dict(shard_file)
   3905 # Mistmatched keys contains tuples key/shape1/shape2 of weights in the checkpoint that have a shape not
   3906 # matching the weights in the model.
   3907 mismatched_keys += _find_mismatched_keys(
   3908     state_dict,
   3909     model_state_dict,
 (...)
   3913     ignore_mismatched_sizes,
   3914 )

File ~/miniforge3/envs/py311/lib/python3.11/site-packages/transformers/modeling_utils.py:507, in load_state_dict(checkpoint_file)
    505 with safe_open(checkpoint_file, framework="pt") as f:
    506     metadata = f.metadata()
--> 507 if metadata.get("format") not in ["pt", "tf", "flax"]:
    508     raise OSError(
    509         f"The safetensors archive passed at {checkpoint_file} does not contain the valid metadata. Make sure "
    510         "you save your model with the `save_pretrained` method."
    511     )
    512 return safe_load_file(checkpoint_file)

AttributeError: 'NoneType' object has no attribute 'get'
```
The error seems to stem from the safetensors files missing the `{"format": "pt"}` metadata when they are loaded by `AutoModelForCausalLM.from_pretrained()`.
A quick workaround was to resave the files one by one with the script below, once per safetensors file, and then upload them to Huggingface.
```
from safetensors import safe_open
from safetensors.torch import save_file

safetensor_path = "lora_fused_model/model-00001-of-00004.safetensors"
# ...
# Split e.g. "model-00001-of-00004.safetensors" into name and extension.
fname, ext = safetensor_path.split("/")[-1].split(".")

# Copy every tensor out of the original shard...
tensors = dict()
with safe_open(safetensor_path, framework="pt", device="cpu") as f:
    for key in f.keys():
        tensors[key] = f.get_tensor(key)

# ...and rewrite it with the metadata that transformers checks for.
save_file(tensors, f"lora_fused_model/{fname}-with-format.{ext}", metadata={"format": "pt"})
```
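To apply the same fix to every shard at once, the script can be looped over the directory. A minimal sketch, assuming the shards live in `lora_fused_model/` and that overwriting them in place is acceptable (back up the originals first):
```
import glob
from safetensors import safe_open
from safetensors.torch import save_file

for path in sorted(glob.glob("lora_fused_model/*.safetensors")):
    tensors = {}
    with safe_open(path, framework="pt", device="cpu") as f:
        for key in f.keys():
            tensors[key] = f.get_tensor(key)
    # Rewrite the shard in place with the metadata transformers checks for.
    save_file(tensors, path, metadata={"format": "pt"})
```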
However, it would be nice to be able to quickly upload a model and have it available to a wider audience without this extra step.
The source code led me to `mx.save_safetensors()`, which is why I am filing the issue on this repo.
https://github.com/ml-explore/mlx-examples/blob/47dd6bd17f3cc7ef95672ea16e443e58ce5eb1bf/llms/mlx_lm/utils.py#L479
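For reference, a hedged sketch of what writing the metadata up front could look like, assuming a version of mlx whose `save_safetensors` accepts a `metadata` argument (it may not in mlx==0.4.0):
```
import mlx.core as mx

# Toy weights standing in for a real shard.
weights = {"layers.0.weight": mx.zeros((4, 4))}

# Writing the {"format": "pt"} key at save time would avoid the resave step.
mx.save_safetensors("shard.safetensors", weights, metadata={"format": "pt"})
```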
**Expected behavior**
Since there are many `transformers` users in the ecosystem, it would be beneficial to be able to seamlessly train model weights, upload them to Huggingface, and have other users load them through `transformers`.
**Desktop (please complete the following information):**
- OS Version: [e.g. MacOS 14.3]
- MacBook Pro M3 Max 128GB
- mlx==0.4.0
- mlx-lm==0.0.13
- transformers==4.38.1</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
|
Dataset.map Ignore failed batches
|
https://discuss.huggingface.co/t/dataset-map-ignore-failed-batches/158906
| 158,906
| 10
|
2025-06-11T11:16:01.198000Z
|
[
{
"id": 226940,
"name": "wuwenhao",
"username": "whh",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/958977/{size}.png",
"created_at": "2025-06-11T11:16:01.267Z",
"cooked": "<p>I often use the batch mode of dataset.map to process large amounts of data. Since there may be some format problems in the data, some batches may fail in the map (while most batches are OK).</p>\n<p>Is there some way to ignore the failed batches and return the successful batches?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-11T11:16:01.267Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 5,
"readers_count": 4,
"score": 96,
"yours": false,
"topic_id": 158906,
"topic_slug": "dataset-map-ignore-failed-batches",
"display_username": "wuwenhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 81967,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/dataset-map-ignore-failed-batches/158906/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226948,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-11T11:39:10.983Z",
"cooked": "<p>For example, how about just use Python Exception?</p><aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"31614\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/h/57b2e6/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/saving-outcomes-if-error-while-applying-map-function-on-dataset/31614\">Saving outcomes if Error while applying map function on dataset</a> <a class=\"badge-category__wrapper \" href=\"/c/datasets/10\"><span data-category-id=\"10\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the datasets library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Datasets</span></span></a>\n </div>\n <blockquote>\n I use an API (like huggingface_hub) to let a language model answer questions from my dataset. \nSince I want to send every single example to the language model, I wrote a function that does that and then use the map function to map this API call to every example of my dataset. \nMy issue is: If there is an Error at any point (e.g. the API throws an Error after one hour, because sth happend) I loose all the information. Lets say the map worked for the first 100 examples and then the API throws an E…\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-11T11:39:10.983Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 6,
"yours": false,
"topic_id": 158906,
"topic_slug": "dataset-map-ignore-failed-batches",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/saving-outcomes-if-error-while-applying-map-function-on-dataset/31614",
"internal": true,
"reflection": false,
"title": "Saving outcomes if Error while applying map function on dataset",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/dataset-map-ignore-failed-batches/158906/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227235,
"name": "wuwenhao",
"username": "whh",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/958977/{size}.png",
"created_at": "2025-06-13T06:26:22.970Z",
"cooked": "<p>Thanks, It’s helpful !</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-13T06:26:22.970Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 158906,
"topic_slug": "dataset-map-ignore-failed-batches",
"display_username": "wuwenhao",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 81967,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/dataset-map-ignore-failed-batches/158906/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227320,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-13T18:27:07.581Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-13T18:27:07.581Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 158906,
"topic_slug": "dataset-map-ignore-failed-batches",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/dataset-map-ignore-failed-batches/158906/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I often use the batched mode of dataset.map to process large amounts of data. Since there may be some format problems in the data, some batches may fail during the map (while most batches are OK).</p>
<p>Is there some way to ignore the failed batches and return the successful batches?</p>
|
<p>For example, how about just using a Python exception handler?</p><aside class="quote quote-modified" data-post="1" data-topic="31614">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/h/57b2e6/48.png" class="avatar">
<a href="https://discuss.huggingface.co/t/saving-outcomes-if-error-while-applying-map-function-on-dataset/31614">Saving outcomes if Error while applying map function on dataset</a> <a class="badge-category__wrapper " href="/c/datasets/10"><span data-category-id="10" style="--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="This category is for any question related to the datasets library. You can also file an issue."><span class="badge-category__name">🤗Datasets</span></span></a>
</div>
<blockquote>
I use an API (like huggingface_hub) to let a language model answer questions from my dataset. 
Since I want to send every single example to the language model, I wrote a function that does that and then used the map function to map this API call to every example of my dataset. 
My issue is: If there is an error at any point (e.g. the API throws an error after one hour, because something happened) I lose all the information. Let's say the map worked for the first 100 examples and then the API throws an E…
</blockquote>
</aside>
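<p>A minimal sketch of that exception-based approach, assuming it is acceptable to simply drop the rows of a failed batch (the names <code>process</code> and <code>safe_process</code> and the toy data are illustrative, not from the thread): wrap the mapped function in try/except and return empty columns on failure.</p>
<pre data-code-wrap="python"><code class="lang-python">from datasets import Dataset

def process(batch):
    # The real per-batch work; raises on malformed rows (None here).
    return {"length": [len(t) for t in batch["text"]]}

def safe_process(batch):
    try:
        return process(batch)
    except Exception:
        # Returning empty columns makes map() emit zero rows for this batch.
        return {"length": []}

ds = Dataset.from_dict({"text": ["ok", "also ok", None]})
# remove_columns is needed so the shrunken output stays consistent.
out = ds.map(safe_process, batched=True, batch_size=2,
             remove_columns=ds.column_names)
print(len(out))  # 2: the failing batch was skipped
</code></pre>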
|
Unable to Upload arXiv Paper to HuggingFace Daily Papers
|
https://discuss.huggingface.co/t/unable-to-upload-arxiv-paper-to-huggingface-daily-papers/159000
| 159,000
| 23
|
2025-06-12T02:21:34.885000Z
|
[
{
"id": 227049,
"name": "Kevin Galim",
"username": "kev95",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/k/df788c/{size}.png",
"created_at": "2025-06-12T02:21:34.941Z",
"cooked": "<p>Hello,</p>\n<p>I am trying to upload my recent arXiv paper (<a href=\"https://arxiv.org/abs/2506.08373\" rel=\"noopener nofollow ugc\">arXiv:2506.08373</a>) to the HuggingFace Daily Papers platform, but I am encountering the following error:</p>\n<pre><code class=\"lang-auto\">{\"error\":\"Arxiv paper not found\"}\n</code></pre>\n<p>The paper is publicly available on arXiv, so I’m not sure why it isn’t being recognized by the platform. Could you please help me resolve this issue?</p>\n<p>Thank you!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-12T02:21:34.941Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 76,
"reads": 7,
"readers_count": 6,
"score": 386.4,
"yours": false,
"topic_id": 159000,
"topic_slug": "unable-to-upload-arxiv-paper-to-huggingface-daily-papers",
"display_username": "Kevin Galim",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://arxiv.org/abs/2506.08373",
"internal": false,
"reflection": false,
"title": "[2506.08373] Draft-based Approximate Inference for LLMs",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96744,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-upload-arxiv-paper-to-huggingface-daily-papers/159000/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227053,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-12T02:48:41.745Z",
"cooked": "<p>I wonder if the Endpoint for submitting papers is malfunctioning… <a class=\"mention\" href=\"/u/pierric\">@pierric</a></p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/huggingface_hub/issues/2745\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/huggingface_hub/issues/2745\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/huggingface_hub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/huggingface_hub/issues/2745\" target=\"_blank\" rel=\"noopener\">[HfApi] Add `submit_paper` endpoint</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-01-13\" data-time=\"09:39:28\" data-timezone=\"UTC\">09:39AM - 13 Jan 25 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/hanouticelina\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/7/37ae73356a558a9815c89bf11cef8bdf4449f473.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"725D42\">\n hanouticelina\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n enhancement\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">Feature request from @NielsRogge and @AK391, slack thread [here](https://hugging<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">face.slack.com/archives/C06QV3LNWRJ/p1736446591900449) (private).\n\n### Description\n\nAdd a `submit_paper()` method to the `HfApi` class to allow authors to submit papers to Daily Papers on Hugging Face Hub. This endpoint is currently available via `/api/papers/submit`.\n\n### Endpoint Specs\n\n**API Endpoint:** `POST /api/papers/submit`\n\n**inputs:**\n- `paper_id` (required): ArXiv ID of the paper to submit.\n- `comment` (optional): Text comment about the paper.\n- `media_urls` (optional): List of media URLs associated with the paper.\n\n**limitations:**\n- User must have at least one paper on HF to submit. \n- cannot submit papers on weekends. \n- Regular users limited to X submissions per day. \n- Papers older than 7 days cannot be submitted. \n- Same paper cannot be submitted twice. 
\n\nThe server throws HTTP errors in these cases.\n\n### Implementation Details\n\n```python\n@validate_hf_hub_args\ndef submit_paper(\n self,\n paper_id: str,\n *,\n comment: Optional[str] = None, \n media_urls: Optional[List[str]] = None,\n token: Union[bool, str, None] = None,\n) -> None:\n \"\"\"Submit a paper to the Daily Papers feed.\n\n Note:\n - You must have at least one paper on HF to submit.\n - You cannot submit papers on weekends.\n - The number of submissions per day is limited.\n - Papers older than 7 days cannot be submitted.\n - Same paper cannot be submitted twice.\n \n Args:\n paper_id (`str`):\n The ArXiv ID of the paper to submit (e.g. \"2401.12345\") \n comment (`str`, *optional*):\n An optional comment about the paper\n media_urls (`List[str]`, *optional*): \n Optional list of media URLs to attach to the submission\n token (`Union[bool, str, None]`, *optional*):\n Authentication token. Required.\n \n Returns:\n None\n \n Raises:\n - ValueError if submission criteria not met\n - HTTPError for various failure cases\n \"\"\"\n```\n\n### Tests\nWe still need to figure out how to run tests properly in the staging environment [hub-ci](https://hub-ci.huggingface.co). We need to have a dummy user with at least one paper submitted and find how to mock the paper submission date. \n⚠️ Not sure if it's worth investing too much time on the tests here given the limited usage.</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-12T02:48:41.745Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 159000,
"topic_slug": "unable-to-upload-arxiv-paper-to-huggingface-daily-papers",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/huggingface_hub/issues/2745",
"internal": false,
"reflection": false,
"title": "[HfApi] Add `submit_paper` endpoint · Issue #2745 · huggingface/huggingface_hub · GitHub",
"clicks": 8
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-upload-arxiv-paper-to-huggingface-daily-papers/159000/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 227209,
"name": "Kevin Galim",
"username": "kev95",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/k/df788c/{size}.png",
"created_at": "2025-06-13T02:07:09.420Z",
"cooked": "<p>It is working now. Thank you for your support!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-13T02:07:09.420Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 6,
"readers_count": 5,
"score": 26.2,
"yours": false,
"topic_id": 159000,
"topic_slug": "unable-to-upload-arxiv-paper-to-huggingface-daily-papers",
"display_username": "Kevin Galim",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96744,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-upload-arxiv-paper-to-huggingface-daily-papers/159000/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227281,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-13T14:08:06.126Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-13T14:08:06.126Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 1,
"yours": false,
"topic_id": 159000,
"topic_slug": "unable-to-upload-arxiv-paper-to-huggingface-daily-papers",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-upload-arxiv-paper-to-huggingface-daily-papers/159000/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello,</p>
<p>I am trying to upload my recent arXiv paper (<a href="https://arxiv.org/abs/2506.08373" rel="noopener nofollow ugc">arXiv:2506.08373</a>) to the HuggingFace Daily Papers platform, but I am encountering the following error:</p>
<pre><code class="lang-auto">{"error":"Arxiv paper not found"}
</code></pre>
<p>The paper is publicly available on arXiv, so I’m not sure why it isn’t being recognized by the platform. Could you please help me resolve this issue?</p>
<p>Thank you!</p>
|
<p>It is working now. Thank you for your support!</p>
|
Correct way to load multiple LoRA adapters for inference
|
https://discuss.huggingface.co/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863
| 158,863
| 9
|
2025-06-11T05:16:17.424000Z
|
[
{
"id": 226879,
"name": "Shruti Priya",
"username": "sapphicart",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/90db22/{size}.png",
"created_at": "2025-06-11T05:16:17.482Z",
"cooked": "<p>I have trained two LoRA Adapters on top of the same base model. I saved the adapters with <code>model.save_pretrained()</code> Right now, I am trying to load both adapters for inference. My current approach is this:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)\nmodel = PeftModelFromSequenceClassification.from_pretrained(base_model, adapter_1, adapter_name=\"adapter_1\", num_labels=2)\n\nweighted_adapter_name=\"two-lora\"\nmodel.load_adapter(adapter_2, adapter_name=\"adapter_2\")\n\nmodel.add_weighted_adapter(\n adapters=[\"adapter_1\", \"adapter_2\"],\n weights=[0.7, 0.3],\n adapter_name=weighted_adapter_name,\n combination_type=\"linear\",\n)\n</code></pre>\n<p>But this gives me the error <code>Cannot add weighted adapters if they target the same module with modules_to_save, but found 1 such instance(s).</code></p>\n<p>Then, I tried another method from this <a href=\"https://huggingface.co/docs/peft/main/en/developer_guides/mixed_models\">documentation</a></p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)\nmodel = PeftMixedModel.from_pretrained(base_model, adapter_1, adapter_name=\"adapter_1\")\n\nmodel.load_adapter(adapter_2, adapter_name=\"adapter_2\")\nmodel.set_adapter([\"adapter_1\", \"adapter_2\"])\n</code></pre>\n<p>But this too throws an error <code>ValueError: Only one adapter can be set at a time for modules_to_save</code>.</p>\n<p>I don’t understand what I am doing wrong. Should I try this:</p>\n<ul>\n<li><code>get_peft_model</code> with <code>base_model</code> and <code>adapter_1</code></li>\n<li>train this adapter</li>\n<li><code>add_adapter</code> with <code>adapter_2</code> to this model</li>\n<li>train second adapter</li>\n</ul>\n<p>But with this approach how would I load both adapters for inference?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-11T05:34:27.706Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 388,
"reads": 14,
"readers_count": 13,
"score": 1867.8,
"yours": false,
"topic_id": 158863,
"topic_slug": "correct-way-to-load-multiple-lora-adapters-for-inference",
"display_username": "Shruti Priya",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/peft/main/en/developer_guides/mixed_models",
"internal": false,
"reflection": false,
"title": "Mixed adapter types",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95123,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226880,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-11T05:35:43.348Z",
"cooked": "<p>Like this?</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/huggingface/peft/discussions/1315\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/huggingface/peft/discussions/1315\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/1/7/1703ff384c3f4b08dda75b6b811543b3618b628b_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"E9EBEE\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/huggingface/peft/discussions/1315\" target=\"_blank\" rel=\"noopener\">How to train multiple LoRA adapters on the same base model concurrently. ·...</a></h3>\n\n <p>I want to train 2 LoRA models in conjunction on my dataset. I don't want gradients from one model impact the other. However, since the base model is the same I am confused if just setting adapter_n...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-11T05:35:43.348Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 12,
"readers_count": 11,
"score": 32.4,
"yours": false,
"topic_id": 158863,
"topic_slug": "correct-way-to-load-multiple-lora-adapters-for-inference",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/peft/discussions/1315",
"internal": false,
"reflection": false,
"title": "How to train multiple LoRA adapters on the same base model concurrently. · huggingface/peft · Discussion #1315 · GitHub",
"clicks": 46
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863/2",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226912,
"name": "Shruti Priya",
"username": "sapphicart",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/90db22/{size}.png",
"created_at": "2025-06-11T08:57:26.154Z",
"cooked": "<p>Thanks for the reply! I tried this and it works perfectly. But, when I try to save the model and load it from local directory, I get the error <code>ValueError: Can't find 'adapter_config.json' at '/path/to/model'</code>. I have tried pushing the model to hub and then loading it, still the same error. I can see there is no <code>adapter_config.json</code> at the path. The json files are actually inside new directories for the adapters.</p>\n<p>The file structure is like this:</p>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">model\n|____adapter_1\n| |_____adapter_config.json\n| |_____adapter_model.safetensors\n|____adapter_2\n| |_____adapter_config.json\n| |_____adapter_model.safetensors\n|____special_tokens_map.json\n|____tokenizer.json\n|____tokenizer.config.json\n|____vocab.txt\n|____README.md\n</code></pre>\n<p>I am trying to load the model with adapters like this (the code is from <a href=\"https://discuss.huggingface.co/t/correct-way-to-save-load-adapters-and-checkpoints-in-peft/77836/8\">this</a> discussion):</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">outputs = \"/path/to/model\"\nadapter_1 = \"/path/to/model/adapter_1\"\nadapter_2 = \"/path/to/model/adapter_2\"\n\nadapter_1_config = PeftConfig.from_pretrained(adapter_1)\nadapter_2_config = PeftConfig.from_pretrained(adapter_2)\n\nbase_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)\n\npeft_model = PeftModelForSequenceClassification.from_pretrained(base_model, outputs, num_labels=2)\npeft_model.load_adapter(adapter_1)\npeft_model.load_adapter(adapter_2)\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-11T08:57:26.154Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 11,
"readers_count": 10,
"score": 62.2,
"yours": false,
"topic_id": 158863,
"topic_slug": "correct-way-to-load-multiple-lora-adapters-for-inference",
"display_username": "Shruti Priya",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/correct-way-to-save-load-adapters-and-checkpoints-in-peft/77836/8",
"internal": true,
"reflection": false,
"title": "Correct way to save/load adapters and checkpoints in PEFT",
"clicks": 6
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95123,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863/3",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 226915,
"name": "Shruti Priya",
"username": "sapphicart",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/90db22/{size}.png",
"created_at": "2025-06-11T09:20:17.903Z",
"cooked": "<p>Found a solution!</p>\n<p>Instead of loading <code>PeftModel</code> from base directory, I instead loaded it from <code>adapter_1</code> then I loaded <code>adapter_2</code> and used both for inference.</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">adapter_1 = \"/path/to/model/adapter_1\"\nadapter_2 = \"/path/to/model/adapter_2\"\n\nbase_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)\n\npeft_model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, num_labels=2)\npeft_model.load_adapter(adapter_1, adapter_name=\"adapter_1\")\npeft_model.load_adapter(adapter_2, adapter_name=\"adapter_2\")\npeft_model.base_model.set_adapter([\"adapter_1\", \"adapter_2\"])\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-11T09:20:17.903Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 11,
"readers_count": 10,
"score": 87.2,
"yours": false,
"topic_id": 158863,
"topic_slug": "correct-way-to-load-multiple-lora-adapters-for-inference",
"display_username": "Shruti Priya",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95123,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95123,
"username": "sapphicart",
"name": "Shruti Priya",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/90db22/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 227011,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-11T21:20:26.083Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-11T21:20:26.083Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 1.4,
"yours": false,
"topic_id": 158863,
"topic_slug": "correct-way-to-load-multiple-lora-adapters-for-inference",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/correct-way-to-load-multiple-lora-adapters-for-inference/158863/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have trained two LoRA adapters on top of the same base model and saved them with <code>model.save_pretrained()</code>. Right now, I am trying to load both adapters for inference. My current approach is this:</p>
<pre data-code-wrap="python"><code class="lang-python">base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)
model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, adapter_name="adapter_1", num_labels=2)
weighted_adapter_name="two-lora"
model.load_adapter(adapter_2, adapter_name="adapter_2")
model.add_weighted_adapter(
adapters=["adapter_1", "adapter_2"],
weights=[0.7, 0.3],
adapter_name=weighted_adapter_name,
combination_type="linear",
)
</code></pre>
<p>But this gives me the error <code>Cannot add weighted adapters if they target the same module with modules_to_save, but found 1 such instance(s).</code></p>
<p>Then, I tried another method from this <a href="https://huggingface.co/docs/peft/main/en/developer_guides/mixed_models">documentation</a></p>
<pre data-code-wrap="python"><code class="lang-python">base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)
model = PeftMixedModel.from_pretrained(base_model, adapter_1, adapter_name="adapter_1")
model.load_adapter(adapter_2, adapter_name="adapter_2")
model.set_adapter(["adapter_1", "adapter_2"])
</code></pre>
<p>But this too throws an error <code>ValueError: Only one adapter can be set at a time for modules_to_save</code>.</p>
<p>I don’t understand what I am doing wrong. Should I try this:</p>
<ul>
<li><code>get_peft_model</code> with <code>base_model</code> and <code>adapter_1</code></li>
<li>train this adapter</li>
<li><code>add_adapter</code> with <code>adapter_2</code> to this model</li>
<li>train second adapter</li>
</ul>
<p>But with this approach, how would I load both adapters for inference?</p>
|
<p>Found a solution!</p>
<p>Instead of loading <code>PeftModel</code> from the base directory, I loaded it from <code>adapter_1</code>, then loaded <code>adapter_2</code>, and used both for inference.</p>
<pre data-code-wrap="python"><code class="lang-python">adapter_1 = "/path/to/model/adapter_1"
adapter_2 = "/path/to/model/adapter_2"
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, output_hidden_states=False)
peft_model = PeftModelForSequenceClassification.from_pretrained(base_model, adapter_1, num_labels=2)
peft_model.load_adapter(adapter_1, adapter_name="adapter_1")
peft_model.load_adapter(adapter_2, adapter_name="adapter_2")
peft_model.base_model.set_adapter(["adapter_1", "adapter_2"])
</code></pre>
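<p>For completeness, a minimal inference sketch on top of that setup (assuming <code>model_name</code> is the same base checkpoint used during training; the input text is purely illustrative):</p>
<pre data-code-wrap="python"><code class="lang-python">import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("An example sentence to classify.", return_tensors="pt")

peft_model.eval()
with torch.no_grad():
    # Both active adapters contribute to the forward pass.
    logits = peft_model(**inputs).logits
print(logits.softmax(dim=-1))
</code></pre>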
|
Linux. Transfer ISOs
|
https://discuss.huggingface.co/t/linux-transfer-isos/158545
| 158,545
| 5
|
2025-06-09T07:29:26.789000Z
|
[
{
"id": 226422,
"name": "Jordan kiss",
"username": "VexxaGlitch",
"avatar_template": "/user_avatar/discuss.huggingface.co/vexxaglitch/{size}/48728_2.png",
"created_at": "2025-06-09T07:29:26.848Z",
"cooked": "<p>Does anyone know about Linux? I’m trying to put a ISO on a flash drive</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T07:29:26.848Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 8,
"readers_count": 7,
"score": 26.6,
"yours": false,
"topic_id": 158545,
"topic_slug": "linux-transfer-isos",
"display_username": "Jordan kiss",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95898,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/linux-transfer-isos/158545/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226431,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-09T08:03:07.654Z",
"cooked": "<p>I don’t know, but I found it when I searched.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://www.geeksforgeeks.org/techtips/setup-dual-boot-with-linux-and-windows/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/b/eb43f6eeac1480d83f476ebbc7b8ea0e3a29ec05.png\" class=\"site-icon\" data-dominant-color=\"2F8D46\" width=\"32\" height=\"32\">\n\n <a href=\"https://www.geeksforgeeks.org/techtips/setup-dual-boot-with-linux-and-windows/\" target=\"_blank\" rel=\"noopener\" title=\"07:10PM - 31 December 2018\">GeeksforGeeks – 31 Dec 18</a>\n </header>\n\n <article class=\"onebox-body\">\n <img width=\"200\" height=\"200\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/4/b4fa4d1c3b06010fdcb7ca9c1a6707068222eb93_2_200x200.png\" class=\"thumbnail onebox-avatar\" data-dominant-color=\"B48E8D\">\n\n<h3><a href=\"https://www.geeksforgeeks.org/techtips/setup-dual-boot-with-linux-and-windows/\" target=\"_blank\" rel=\"noopener\">How to Set Up a Dual Boot with Ubuntu and Windows? - GeeksforGeeks</a></h3>\n\n <p>Your All-in-One Learning Portal: GeeksforGeeks is a comprehensive educational platform that empowers learners across domains-spanning computer science and programming, school education, upskilling, commerce, software tools, competitive exams, and...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T08:03:07.654Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 158545,
"topic_slug": "linux-transfer-isos",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.geeksforgeeks.org/techtips/setup-dual-boot-with-linux-and-windows/",
"internal": false,
"reflection": false,
"title": "How to Set Up a Dual Boot with Ubuntu and Windows? - GeeksforGeeks",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/linux-transfer-isos/158545/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226536,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-09T17:53:17.498Z",
"cooked": "<p>Are you needing Linux? You could use a dual boot, VM, or download the WSL for windows.</p>\n<p>I know you are going to need to burn the iso to the flash drive and format it with FAT32.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T17:53:17.498Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 31,
"yours": false,
"topic_id": 158545,
"topic_slug": "linux-transfer-isos",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/linux-transfer-isos/158545/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 226575,
"name": "Jordan kiss",
"username": "VexxaGlitch",
"avatar_template": "/user_avatar/discuss.huggingface.co/vexxaglitch/{size}/48728_2.png",
"created_at": "2025-06-09T21:22:12.199Z",
"cooked": "<p>I was trying to do it on a chrome book LOL but I was able to download it on a family members computer🫶🏼</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T21:22:12.199Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 158545,
"topic_slug": "linux-transfer-isos",
"display_username": "Jordan kiss",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95898,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/linux-transfer-isos/158545/4",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226701,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-10T09:22:17.178Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-10T09:22:17.178Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 158545,
"topic_slug": "linux-transfer-isos",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/linux-transfer-isos/158545/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Does anyone know about Linux? I’m trying to put an ISO on a flash drive.</p>
|
<p>Do you actually need Linux? You could dual boot, use a VM, or install WSL on Windows.</p>
<p>I know you are going to need to burn the ISO to the flash drive and format it as FAT32.</p>
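<p>For the “burn the ISO” step, here is a minimal raw-copy sketch in Python (equivalent to what dd does). The ISO filename and the /dev/sdX device node are placeholders — double-check the real device with lsblk first, since this overwrites the target wholesale, and run it as root:</p>
<pre data-code-wrap="py"><code class="lang-py">import shutil

ISO_PATH = "some-distro.iso"  # hypothetical path to the downloaded ISO
DEVICE = "/dev/sdX"           # hypothetical device node -- verify with lsblk!

# Stream the image byte-for-byte onto the raw device in 4 MiB chunks.
with open(ISO_PATH, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)
</code></pre>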
|
How was self.loss_function implemented
|
https://discuss.huggingface.co/t/how-was-self-loss-function-implemented/158573
| 158,573
| 9
|
2025-06-09T09:07:49.199000Z
|
[
{
"id": 226460,
"name": "Omar Samir",
"username": "OmarSamir",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/o/c57346/{size}.png",
"created_at": "2025-06-09T09:07:49.255Z",
"cooked": "<p>Hi, I was curious about how the <code>self.loss_function</code> is implemented in the Qwen2.5-VL model to compute the loss during training.<br>\nCould someone explain how it works or point me to the relevant part of the code?</p>\n<p>Here’s the link to the line I’m referring to:</p><aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615\" target=\"_blank\" rel=\"noopener nofollow ugc\">github.com/huggingface/transformers</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615\" target=\"_blank\" rel=\"noopener nofollow ugc\">src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615\" rel=\"noopener nofollow ugc\"><code>main</code></a>\n</div>\n\n\n\n <pre class=\"onebox\"><code class=\"lang-py\">\n <ol class=\"start lines\" start=\"1605\" style=\"counter-reset: li-counter 1604 ;\">\n <li> return_dict=True,</li>\n <li> cache_position=cache_position,</li>\n <li> **kwargs,</li>\n <li>)</li>\n <li></li>\n <li>hidden_states = outputs[0]</li>\n <li>logits = self.lm_head(hidden_states)</li>\n <li></li>\n <li>loss = None</li>\n <li>if labels is not None:</li>\n <li class=\"selected\"> loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size)</li>\n <li></li>\n <li>return Qwen2_5_VLCausalLMOutputWithPast(</li>\n <li> loss=loss,</li>\n <li> logits=logits,</li>\n <li> past_key_values=outputs.past_key_values,</li>\n <li> hidden_states=outputs.hidden_states,</li>\n <li> attentions=outputs.attentions,</li>\n <li> rope_deltas=outputs.rope_deltas,</li>\n <li>)</li>\n <li></li>\n </ol>\n </code></pre>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Thanks in advance!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T09:07:49.255Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 235,
"reads": 11,
"readers_count": 10,
"score": 1117,
"yours": false,
"topic_id": 158573,
"topic_slug": "how-was-self-loss-function-implemented",
"display_username": "Omar Samir",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py at main · huggingface/transformers · GitHub",
"clicks": 7
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96455,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-was-self-loss-function-implemented/158573/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226478,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-09T11:13:52.136Z",
"cooked": "<p>Maybe this?</p><aside class=\"quote\" data-post=\"1\" data-topic=\"26073\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/u/fbc32d/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/where-to-look-for-a-loss-definition-for-a-pretrained-model/26073\">Where to look for a loss definition for a pretrained model?</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n I am using facebook/opt-350m model: \nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-350m\")\n\nAs far as I understand, its default loss is the crossentropy loss. But how can I verify it, and where can I see the implementation details? Thank you.\n </blockquote>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"63395\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/a/e495f1/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/having-troubel-in-understanding-what-loss-is-currently-in-use/63395\">Having troubel in understanding what loss is currently in use</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n I was going through this hugging face code and I am having trouble understanding what loss the model is currently using. Although I know most seq2seq models uses CrossEntrophy loss but I don’t see the definition anywhere in the code \n\n\nActually I want to train the model with a new custom loss. I have trained a baseline model and its working fine. \nThank You\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T11:13:52.136Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 12,
"reads": 9,
"readers_count": 8,
"score": 56.6,
"yours": false,
"topic_id": 158573,
"topic_slug": "how-was-self-loss-function-implemented",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/where-to-look-for-a-loss-definition-for-a-pretrained-model/26073",
"internal": true,
"reflection": false,
"title": "Where to look for a loss definition for a pretrained model?",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/having-troubel-in-understanding-what-loss-is-currently-in-use/63395",
"internal": true,
"reflection": false,
"title": "Having troubel in understanding what loss is currently in use",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-was-self-loss-function-implemented/158573/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226484,
"name": "Omar Samir",
"username": "OmarSamir",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/o/c57346/{size}.png",
"created_at": "2025-06-09T11:40:37.854Z",
"cooked": "<p>Thank you so much for sharing. However, these issues predated the Transformers version 4.53.0.dev0. What I want to know is where the self.loss_function was implemented for these models so I can modify it correctly.</p>\n<p>Thank you!</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T11:40:37.854Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 8,
"readers_count": 7,
"score": 46.4,
"yours": false,
"topic_id": 158573,
"topic_slug": "how-was-self-loss-function-implemented",
"display_username": "Omar Samir",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96455,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-was-self-loss-function-implemented/158573/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226495,
"name": "Omar Samir",
"username": "OmarSamir",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/o/c57346/{size}.png",
"created_at": "2025-06-09T12:32:19.186Z",
"cooked": "<p>The loss functions are defined in src/transformers/loss/loss_utils.py. The logic for selecting which loss function to use is implemented in the PreTrainedModel class, located in src/transformers/modeling_utils.py.</p>\n<p>link: <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">transformers/src/transformers/loss/loss_utils.py at main · huggingface/transformers · GitHub</a><br>\nlink: <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5446\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">transformers/src/transformers/modeling_utils.py at main · huggingface/transformers · GitHub</a></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-09T12:32:19.186Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 8,
"readers_count": 7,
"score": 46.4,
"yours": false,
"topic_id": 158573,
"topic_slug": "how-was-self-loss-function-implemented",
"display_username": "Omar Samir",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/loss/loss_utils.py at main · huggingface/transformers · GitHub",
"clicks": 34
},
{
"url": "https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5446",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/modeling_utils.py at main · huggingface/transformers · GitHub",
"clicks": 16
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96455,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-was-self-loss-function-implemented/158573/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 96455,
"username": "OmarSamir",
"name": "Omar Samir",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/o/c57346/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 226593,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-10T00:32:58.119Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-10T00:32:58.119Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 1,
"yours": false,
"topic_id": 158573,
"topic_slug": "how-was-self-loss-function-implemented",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-was-self-loss-function-implemented/158573/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi, I was curious about how the <code>self.loss_function</code> is implemented in the Qwen2.5-VL model to compute the loss during training.<br>
Could someone explain how it works or point me to the relevant part of the code?</p>
<p>Here’s the link to the line I’m referring to:</p><aside class="onebox githubblob" data-onebox-src="https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615">
<header class="source">
<a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615" target="_blank" rel="noopener nofollow ugc">github.com/huggingface/transformers</a>
</header>
<article class="onebox-body">
<h4><a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615" target="_blank" rel="noopener nofollow ugc">src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py</a></h4>
<div class="git-blob-info">
<a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L1615" rel="noopener nofollow ugc"><code>main</code></a>
</div>
<pre class="onebox"><code class="lang-py">
<ol class="start lines" start="1605" style="counter-reset: li-counter 1604 ;">
<li> return_dict=True,</li>
<li> cache_position=cache_position,</li>
<li> **kwargs,</li>
<li>)</li>
<li></li>
<li>hidden_states = outputs[0]</li>
<li>logits = self.lm_head(hidden_states)</li>
<li></li>
<li>loss = None</li>
<li>if labels is not None:</li>
<li class="selected"> loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size)</li>
<li></li>
<li>return Qwen2_5_VLCausalLMOutputWithPast(</li>
<li> loss=loss,</li>
<li> logits=logits,</li>
<li> past_key_values=outputs.past_key_values,</li>
<li> hidden_states=outputs.hidden_states,</li>
<li> attentions=outputs.attentions,</li>
<li> rope_deltas=outputs.rope_deltas,</li>
<li>)</li>
<li></li>
</ol>
</code></pre>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Thanks in advance!</p>
|
<p>The loss functions are defined in src/transformers/loss/loss_utils.py. The logic for selecting which loss function to use is implemented in the PreTrainedModel class, located in src/transformers/modeling_utils.py.</p>
<p>link: <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py" class="inline-onebox" rel="noopener nofollow ugc">transformers/src/transformers/loss/loss_utils.py at main · huggingface/transformers · GitHub</a><br>
link: <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5446" class="inline-onebox" rel="noopener nofollow ugc">transformers/src/transformers/modeling_utils.py at main · huggingface/transformers · GitHub</a></p>
|
Unable to Train Lora with Oobabooga
|
https://discuss.huggingface.co/t/unable-to-train-lora-with-oobabooga/158175
| 158,175
| 5
|
2025-06-05T21:39:50.162000Z
|
[
{
"id": 225947,
"name": "Chris",
"username": "363ls2gto",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/3/b3f665/{size}.png",
"created_at": "2025-06-05T21:39:50.232Z",
"cooked": "<p>I am a beginner with LLMs but I have been able to install Ollama, Oobabooga, sillytavern, anything llm, and convert between GGUF to GPTQ. I use windows 10 and Ubuntu 24.04 and also have some training experience with Flux on my home computer and Massed Compute.</p>\n<p>I have been trying to train my own Lora using Oogbooga. I have tried on linux and windows. I have tried GGUF models and GPTQ models. I have tried .txt files and Json files generated from past chats. Nothing seems to work. I have also installed the Training Pro extension.</p>\n<p>Every time I try a GGUF model I receive the errpr:</p>\n<p>Attribute Error: ‘LlamaServer’ object has no attribute ‘bos_token_id’</p>\n<p>I was hoping that Training Pro would fix this error as it has a box to add a bos token to each data set item.</p>\n<p>I get even more errors when trying to train a GPTQ model.</p>\n<p>I have searched for alternate training.py files if that is the problem and have not found any that work.</p>\n<p>I have not found much help on the internet or github.</p>\n<p>Any suggestion?</p>\n<p>The whole console output for the Lora is:</p>\n<p>16:24:07-798561 INFO Loaded “nvidia_Llama-3.1-Nemotron-Nano-4B-v1.1-Q6_K.gguf” in 2.51 seconds.<br>\n16:24:07-800568 INFO LOADER: “llama.cpp”<br>\n16:24:07-801571 INFO TRUNCATION LENGTH: 8192<br>\n16:24:07-802575 INFO INSTRUCTION TEMPLATE: “Custom (obtained from model metadata)”<br>\n16:24:23-882099 INFO Loading Text file…<br>\nPrecise raw text slicer: ON<br>\nSentences: 2967<br>\nText Blocks: 230</p>\n<ul>\n<li>Overlapping blocks: 228<br>\n16:24:28-939665 WARNING LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models.<br>\n(Found model type: LlamaServer)<br>\n*** LoRA: 1 ***<br>\n16:24:33-942140 INFO Loading text file…<br>\nPrecise raw text slicer: ON<br>\nSentences: 2967<br>\nText Blocks: 230</li>\n<li>Overlapping blocks: 228<br>\nTraceback (most recent call last):<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\queueing.py”, line 580, in process_events<br>\nresponse = await route_utils.call_process_api(<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\route_utils.py”, line 276, in call_process_api<br>\noutput = await app.get_blocks().process_api(<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\blocks.py”, line 1928, in process_api<br>\nresult = await self.call_function(<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\blocks.py”, line 1526, in call_function<br>\nprediction = await utils.async_iteration(iterator)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\utils.py”, line 657, in async_iteration<br>\nreturn await iterator.<strong>anext</strong>()<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\utils.py”, line 650, in <strong>anext</strong><br>\nreturn await anyio.to_thread.run_sync(<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\anyio\\to_thread.py”, line 56, in run_sync<br>\nreturn await 
get_async_backend().run_sync_in_worker_thread(<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\anyio_backends_asyncio.py”, line 2470, in run_sync_in_worker_thread<br>\nreturn await future<br>\n^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\anyio_backends_asyncio.py”, line 967, in run<br>\nresult = context.run(func, *args)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\utils.py”, line 633, in run_sync_iterator_async<br>\nreturn next(iterator)<br>\n^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\installer_files\\env\\Lib\\site-packages\\gradio\\utils.py”, line 816, in gen_wrapper<br>\nresponse = next(iterator)<br>\n^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\extensions\\Training_PRO\\script.py”, line 704, in do_train<br>\ntrain_data = Dataset.from_list([tokenize(x, add_EOS_to_all, add_bos_token) for x in text_chunks])<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\extensions\\Training_PRO\\script.py”, line 704, in <br>\ntrain_data = Dataset.from_list([tokenize(x, add_EOS_to_all, add_bos_token) for x in text_chunks])<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\extensions\\Training_PRO\\script.py”, line 623, in tokenize<br>\ninput_ids = encode(prompt, prepend_bos_token)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Oobabooga\\text-generation-webui-main\\extensions\\Training_PRO\\script.py”, line 613, in encode<br>\nif len(result) >= 2 and result[:2] == [shared.tokenizer.bos_token_id, shared.tokenizer.bos_token_id]:<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nAttributeError: ‘LlamaServer’ object has no attribute ‘bos_token_id’</li>\n</ul>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-05T21:39:50.232Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 331,
"reads": 10,
"readers_count": 9,
"score": 1582,
"yours": false,
"topic_id": 158175,
"topic_slug": "unable-to-train-lora-with-oobabooga",
"display_username": "Chris",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96153,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-train-lora-with-oobabooga/158175/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226033,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-06T11:24:26.097Z",
"cooked": "<p>From a quick read of the code, I don’t think training a GGUF-quantized model is intended. How about trying it with the Transoformers-format model before GGUF quantization?</p><aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py\">\n <header class=\"source\">\n\n <a href=\"https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py\" target=\"_blank\" rel=\"noopener\">github.com/oobabooga/text-generation-webui</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py\" target=\"_blank\" rel=\"noopener\">extensions/Training_PRO/script.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py\" rel=\"noopener\"><code>main</code></a>\n</div>\n\n\n <pre><code class=\"lang-py\">import os\n\nos.environ[\"WANDB_MODE\"] = \"offline\"\n# os.environ[\"WANDB_DISABLED\"] = \"true\"\n\nimport json\nimport math\nimport random\nimport shutil\nimport sys\nimport threading\nimport time\nimport traceback\nfrom datetime import datetime\nfrom pathlib import Path\n\nimport gradio as gr\nimport pandas as pd\nimport torch\nimport transformers\n</code></pre>\n\n\n\n This file has been truncated. <a href=\"https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py\" target=\"_blank\" rel=\"noopener\">show original</a>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/4/845fd3cccc4be34531c08a87267b28f11ea543ea_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5C71A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1\" target=\"_blank\" rel=\"noopener\">nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-06T11:24:26.097Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 158175,
"topic_slug": "unable-to-train-lora-with-oobabooga",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/oobabooga/text-generation-webui/blob/main/extensions/Training_PRO/script.py",
"internal": false,
"reflection": false,
"title": "text-generation-webui/extensions/Training_PRO/script.py at main · oobabooga/text-generation-webui · GitHub",
"clicks": 7
},
{
"url": "https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1",
"internal": false,
"reflection": false,
"title": "nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 · Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-train-lora-with-oobabooga/158175/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226138,
"name": "Chris",
"username": "363ls2gto",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/3/b3f665/{size}.png",
"created_at": "2025-06-07T03:24:50.274Z",
"cooked": "<p>Thank you for the reply. I also tried training using a transformers based GPTQ model. I received several errors attempting to train this format as well. I will try and get them posted. At least I know where not to waste my time now.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-07T03:24:50.274Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 158175,
"topic_slug": "unable-to-train-lora-with-oobabooga",
"display_username": "Chris",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96153,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-train-lora-with-oobabooga/158175/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226233,
"name": "Chris",
"username": "363ls2gto",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/3/b3f665/{size}.png",
"created_at": "2025-06-07T21:49:28.446Z",
"cooked": "<p>I found the solution. I selected transformers but received errors. I was told to use pip-install XYZ (I can’t remember the exact command).</p>\n<p>For Ubuntu, run the cmd_linux.sh in konsole by right clicking and selecting this option. Make sure to select the “run in terminal” option vs “open terminal here” option. The cmd_linux.sh file is located in the same folder as the start.sh and update programs.</p>\n<p>Copy the pip install command from oobabooga and paste it into the terminal you just opened. This command should be located in the bottom right portion of the page after all the previous errors listed in the training tab of the gradio.</p>\n<p>You have to do this a second time for a new package that also needs to be installed. This time oobabooga gives you an option of two different pip installs. Select the second option as the first does not work.</p>\n<p>Copy and paste this new pip-install command that oobabooga gives you into the terminal. (you may have to close and restart the run in cmd_linux.sh terminal for the new pip install.)</p>\n<p>If you can load a GPTQ file using transformers, you should be able to train a LORA using either the normal or training pro extension.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-06-07T21:54:27.020Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 5,
"readers_count": 4,
"score": 51,
"yours": false,
"topic_id": 158175,
"topic_slug": "unable-to-train-lora-with-oobabooga",
"display_username": "Chris",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96153,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-train-lora-with-oobabooga/158175/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226295,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-08T09:50:12.243Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-06-08T09:50:12.243Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 158175,
"topic_slug": "unable-to-train-lora-with-oobabooga",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/unable-to-train-lora-with-oobabooga/158175/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am a beginner with LLMs, but I have been able to install Ollama, Oobabooga, SillyTavern, and AnythingLLM, and to convert between GGUF and GPTQ. I use Windows 10 and Ubuntu 24.04 and also have some training experience with Flux on my home computer and Massed Compute.</p>
<p>I have been trying to train my own LoRA using Oobabooga. I have tried on Linux and Windows. I have tried GGUF models and GPTQ models. I have tried .txt files and JSON files generated from past chats. Nothing seems to work. I have also installed the Training Pro extension.</p>
<p>Every time I try a GGUF model I receive the error:</p>
<p>AttributeError: ‘LlamaServer’ object has no attribute ‘bos_token_id’</p>
<p>I was hoping that Training Pro would fix this error, as it has a box to add a BOS token to each dataset item.</p>
<p>I get even more errors when trying to train a GPTQ model.</p>
<p>I have searched for alternate training.py files, in case that is the problem, and have not found any that work.</p>
<p>I have not found much help on the internet or GitHub.</p>
<p>Any suggestions?</p>
<p>The whole console output for the LoRA run is:</p>
<p>16:24:07-798561 INFO Loaded “nvidia_Llama-3.1-Nemotron-Nano-4B-v1.1-Q6_K.gguf” in 2.51 seconds.<br>
16:24:07-800568 INFO LOADER: “llama.cpp”<br>
16:24:07-801571 INFO TRUNCATION LENGTH: 8192<br>
16:24:07-802575 INFO INSTRUCTION TEMPLATE: “Custom (obtained from model metadata)”<br>
16:24:23-882099 INFO Loading Text file…<br>
Precise raw text slicer: ON<br>
Sentences: 2967<br>
Text Blocks: 230</p>
<ul>
<li>Overlapping blocks: 228<br>
16:24:28-939665 WARNING LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models.<br>
(Found model type: LlamaServer)<br>
*** LoRA: 1 ***<br>
16:24:33-942140 INFO Loading text file…<br>
Precise raw text slicer: ON<br>
Sentences: 2967<br>
Text Blocks: 230</li>
<li>Overlapping blocks: 228<br>
Traceback (most recent call last):<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py”, line 580, in process_events<br>
response = await route_utils.call_process_api(<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py”, line 276, in call_process_api<br>
output = await app.get_blocks().process_api(<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py”, line 1928, in process_api<br>
result = await self.call_function(<br>
^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py”, line 1526, in call_function<br>
prediction = await utils.async_iteration(iterator)<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py”, line 657, in async_iteration<br>
return await iterator.__anext__()<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py”, line 650, in __anext__<br>
return await anyio.to_thread.run_sync(<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py”, line 56, in run_sync<br>
return await get_async_backend().run_sync_in_worker_thread(<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py”, line 2470, in run_sync_in_worker_thread<br>
return await future<br>
^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py”, line 967, in run<br>
result = context.run(func, *args)<br>
^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py”, line 633, in run_sync_iterator_async<br>
return next(iterator)<br>
^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py”, line 816, in gen_wrapper<br>
response = next(iterator)<br>
^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\extensions\Training_PRO\script.py”, line 704, in do_train<br>
train_data = Dataset.from_list([tokenize(x, add_EOS_to_all, add_bos_token) for x in text_chunks])<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\extensions\Training_PRO\script.py”, line 704, in <listcomp><br>
train_data = Dataset.from_list([tokenize(x, add_EOS_to_all, add_bos_token) for x in text_chunks])<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\extensions\Training_PRO\script.py”, line 623, in tokenize<br>
input_ids = encode(prompt, prepend_bos_token)<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
File “C:\Oobabooga\text-generation-webui-main\extensions\Training_PRO\script.py”, line 613, in encode<br>
if len(result) >= 2 and result[:2] == [shared.tokenizer.bos_token_id, shared.tokenizer.bos_token_id]:<br>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>
AttributeError: ‘LlamaServer’ object has no attribute ‘bos_token_id’</li>
</ul>
|
<p>I found the solution. I selected the transformers loader but received errors, and I was told to run pip install XYZ (I can’t remember the exact command).</p>
<p>For Ubuntu, run cmd_linux.sh in Konsole by right-clicking it and selecting the “run in terminal” option rather than “open terminal here”. The cmd_linux.sh file is located in the same folder as start.sh and the update scripts.</p>
<p>Copy the pip install command from Oobabooga and paste it into the terminal you just opened. The command appears in the bottom-right portion of the page, after the errors listed in the training tab of the Gradio UI.</p>
<p>You have to do this a second time for another package that also needs to be installed. This time Oobabooga gives you a choice of two different pip installs; select the second option, as the first does not work.</p>
<p>Copy and paste this new pip install command into the terminal. (You may have to close and reopen the cmd_linux.sh terminal for the new pip install.)</p>
<p>If you can load a GPTQ file using transformers, you should be able to train a LoRA using either the normal trainer or the Training Pro extension.</p>
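<p>As a quick sanity check before retrying in the UI, here is a sketch of loading a GPTQ checkpoint directly through transformers. The model path is a placeholder, and this assumes a GPTQ backend such as gptqmodel or auto-gptq is installed (the kind of package the webui asked for above):</p>
<pre data-code-wrap="py"><code class="lang-py">from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/your-gptq-model"  # placeholder: local folder or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# If this prints a GPTQ quantization config, the transformers loader can see
# the quantized weights, and LoRA training in the webui should be possible.
print(model.config.quantization_config)
</code></pre>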
|
Opus-MT: Translation returns <unk> token
|
https://discuss.huggingface.co/t/opus-mt-translation-returns-unk-token/158124
| 158,124
| 13
|
2025-06-05T12:50:34.687000Z
|
[
{
"id": 225882,
"name": "Math Dons",
"username": "mathdons",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/5e9695/{size}.png",
"created_at": "2025-06-05T12:50:34.757Z",
"cooked": "<p>(x-posting with StackOverflow)</p>\n<p>I’m having relatively good results with HelsinkiNlp models for translation, except for one thing: some special characters are omitted from the translation. If I decode without skipping the special tokens, I get the following:</p>\n<p><code><pad> <unk> a fait mal !</s></code></p>\n<p><code><unk></code> is right where the translation should include a French Ç (expected result “Ça fait mal” from source “That hurts!”). Note:</p>\n<ul>\n<li>lower case ç works just fine.</li>\n<li>Exact same issue with È: <code><pad> APR<unk> S VOUS !</s></code> (should be “APRÈS VOUS !”)</li>\n</ul>\n<p>It’s definitely not a model issue, but a me issue, if I try on OpusTranslate Space (<a href=\"https://huggingface.co/spaces/Helsinki-NLP/opus-translate\" class=\"inline-onebox\">OPUS Translate - a Hugging Face Space by Helsinki-NLP</a>), it works just fine.</p>\n<p>I tried using the code verbatim from the model page, to no avail (<a href=\"https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr\" class=\"inline-onebox\">Helsinki-NLP/opus-mt-tc-big-en-fr · Hugging Face</a>)</p>\n<p>My current code is not far from it, and produces exactly the result I posted above:</p>\n<pre><code class=\"lang-auto\">def __init__(self, model_path_or_name: str, source_language:str, target_langueg:str):\n self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n self.tokenizer = MarianTokenizer.from_pretrained(model_path_or_name)\n self.model = MarianMTModel.from_pretrained(model_path_or_name).to(self.device)\n\ndef single_translate(self, text: str) -> str:\n \"\"\"\n Translate a single sentence and return the translated string only.\n \"\"\"\n inputs = self.tokenizer([text], return_tensors=\"pt\", padding=True, truncation=True)\n input_ids = inputs.input_ids.to(self.model.device)\n with torch.no_grad():\n outputs = self.model.generate(input_ids=input_ids)\n decoded = self.tokenizer.batch_decode(outputs, skip_special_tokens=False)\n return decoded[0]\n</code></pre>\n<p>Any advice would be greatly appreciated!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-05T12:50:34.757Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 5,
"readers_count": 4,
"score": 96,
"yours": false,
"topic_id": 158124,
"topic_slug": "opus-mt-translation-returns-unk-token",
"display_username": "Math Dons",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/Helsinki-NLP/opus-translate",
"internal": false,
"reflection": false,
"title": "OPUS Translate - a Hugging Face Space by Helsinki-NLP",
"clicks": 1
},
{
"url": "https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr",
"internal": false,
"reflection": false,
"title": "Helsinki-NLP/opus-mt-tc-big-en-fr · Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96113,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/opus-mt-translation-returns-unk-token/158124/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226047,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-06T12:58:25.566Z",
"cooked": "<p>It seems model issue…</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">from transformers import pipeline\npipe = pipeline(\"translation\", model=\"Helsinki-NLP/opus-mt-en-fr\")\nprint(pipe(\"That hurts!\")) # [{'translation_text': 'Ça fait mal !'}]\npipe = pipeline(\"translation\", model=\"Helsinki-NLP/opus-mt-tc-big-en-fr\")\nprint(pipe(\"That hurts!\")) # [{'translation_text': 'a fait mal !'}]\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-06T12:58:25.566Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 158124,
"topic_slug": "opus-mt-translation-returns-unk-token",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/opus-mt-translation-returns-unk-token/158124/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 226051,
"name": "Math Dons",
"username": "mathdons",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/5e9695/{size}.png",
"created_at": "2025-06-06T13:37:55.045Z",
"cooked": "<p>Damn, it never occurred to me that the space could be using a different model in the same family/language. Thanks a lot, you’ve saved me a lot of headaches trying to find what was going wrong. Going to add a comment on the model / community page.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-06T13:37:55.045Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 158124,
"topic_slug": "opus-mt-translation-returns-unk-token",
"display_username": "Math Dons",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96113,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/opus-mt-translation-returns-unk-token/158124/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 226132,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-07T01:38:40.309Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-07T01:38:40.309Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 158124,
"topic_slug": "opus-mt-translation-returns-unk-token",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/opus-mt-translation-returns-unk-token/158124/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>(x-posting with StackOverflow)</p>
<p>I’m having relatively good results with Helsinki-NLP models for translation, except for one thing: some special characters are omitted from the translation. If I decode without skipping the special tokens, I get the following:</p>
<p><code><pad> <unk> a fait mal !</s></code></p>
<p><code><unk></code> is right where the translation should include a French Ç (expected result “Ça fait mal” from source “That hurts!”). Note:</p>
<ul>
<li>lower case ç works just fine.</li>
<li>Exact same issue with È: <code><pad> APR<unk> S VOUS !</s></code> (should be “APRÈS VOUS !”)</li>
</ul>
<p>It’s definitely not a model issue, but a me issue: if I try the OPUS Translate Space (<a href="https://huggingface.co/spaces/Helsinki-NLP/opus-translate" class="inline-onebox">OPUS Translate - a Hugging Face Space by Helsinki-NLP</a>), it works just fine.</p>
<p>I tried using the code verbatim from the model page, to no avail (<a href="https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-fr" class="inline-onebox">Helsinki-NLP/opus-mt-tc-big-en-fr · Hugging Face</a>)</p>
<p>My current code is not far from it, and produces exactly the result I posted above:</p>
<pre><code class="lang-auto">def __init__(self, model_path_or_name: str, source_language:str, target_langueg:str):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.tokenizer = MarianTokenizer.from_pretrained(model_path_or_name)
self.model = MarianMTModel.from_pretrained(model_path_or_name).to(self.device)
def single_translate(self, text: str) -> str:
"""
Translate a single sentence and return the translated string only.
"""
inputs = self.tokenizer([text], return_tensors="pt", padding=True, truncation=True)
input_ids = inputs.input_ids.to(self.model.device)
with torch.no_grad():
outputs = self.model.generate(input_ids=input_ids)
decoded = self.tokenizer.batch_decode(outputs, skip_special_tokens=False)
return decoded[0]
</code></pre>
<p>Any advice would be greatly appreciated!</p>
|
<p>It seems to be a model issue…</p>
<pre data-code-wrap="py"><code class="lang-py">from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
print(pipe("That hurts!")) # [{'translation_text': 'Ça fait mal !'}]
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fr")
print(pipe("That hurts!")) # [{'translation_text': 'a fait mal !'}]
</code></pre>
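<p>A quick way to confirm where the character gets lost (a hedged sketch, not from the original thread): encode the target-side string and look for <code><unk></code> among the tokens. Marian checkpoints ship separate source/target SentencePiece vocabularies, and the <code>text_target</code> argument (available in recent transformers versions) selects the target side.</p>
<pre data-code-wrap="py"><code class="lang-py">from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-en-fr")
# Encode on the target (French) side; an <unk> token here would mean the
# uppercase character is simply absent from this checkpoint's vocabulary.
ids = tok(text_target="Ça fait mal !").input_ids
print(tok.convert_ids_to_tokens(ids))
</code></pre>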
|
Can I Build a Real-Time Object Detection Space with Flask or FastAPI on Hugging Face?
|
https://discuss.huggingface.co/t/can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face/158020
| 158,020
| 24
|
2025-06-04T17:36:19.822000Z
|
[
{
"id": 225693,
"name": "Danh Tran",
"username": "danhtran2mind",
"avatar_template": "/user_avatar/discuss.huggingface.co/danhtran2mind/{size}/48804_2.png",
"created_at": "2025-06-04T17:36:19.884Z",
"cooked": "<p>Hello Hugging Face community,</p>\n<p>I’m planning to create a Hugging Face Space for real-time object detection, using Flask or FastAPI as the backend to process images or video streams with models like YOLO or DETR from the Hugging Face Space.</p>\n<p>I have two questions:</p>\n<ol>\n<li>\n<p>Is it practical to run real-time object detection in a Space using Flask or FastAPI? What are the key limitations or best practices for deployment on Hugging Face Spaces?</p>\n</li>\n<li>\n<p>I’m worried about violating Hugging Face’s policies. Could this type of Space risk my account being flagged or blocked? What steps can I take to ensure compliance with Hugging Face’s Terms of Service?</p>\n</li>\n</ol>\n<p>Any advice, example Spaces, or links to relevant documentation would be greatly appreciated. Thank you!</p>\n<p>Best,<br>\nDanh Tran (danhtran2mind).</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-04T17:36:19.884Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 47,
"reads": 5,
"readers_count": 4,
"score": 241,
"yours": false,
"topic_id": 158020,
"topic_slug": "can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face",
"display_username": "Danh Tran",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96029,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face/158020/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225749,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-05T03:18:29.610Z",
"cooked": "<blockquote>\n<p>1</p>\n</blockquote>\n<p>I think Gradio’s backend is FastAPI, so I think it should be possible…<br>\nI don’t know much about Flask.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/f/ef25000e5ae4519258b040dcb9e8f298540a680c_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"8D7D9D\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu\" target=\"_blank\" rel=\"noopener\">SmolVLM realtime WebGPU - a Hugging Face Space by webml-community</a></h3>\n\n <p>This app lets you describe objects or scenes captured by your webcam. Simply enter an instruction like \"What do you see?\" and the app will generate a response based on the video feed. You control h...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/1130c1c3169693f6b3624e85dda1c7b816ecbc0c.png\" class=\"site-icon\" data-dominant-color=\"F99D00\" width=\"64\" height=\"64\">\n\n <a href=\"https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc\" target=\"_blank\" rel=\"noopener\">gradio.app</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/357;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/5/4532d24411c1a1e834a20ef8aada4248d8075883_2_690x357.jpeg\" class=\"thumbnail\" data-dominant-color=\"E5E1DE\" width=\"690\" height=\"357\"></div>\n\n<h3><a href=\"https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc\" target=\"_blank\" rel=\"noopener\">Object Detection From Webcam With Webrtc</a></h3>\n\n <p>A Step-by-Step Gradio Tutorial</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p>2</p>\n</blockquote>\n<p>I think <code>5.</code> of this article mainly refers to prohibited acts in Spaces.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/content-policy\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/content-policy\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/1X/5c4130fb1d8662cb15c5385a9fd9a44626aa4aa2_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"E9E7E2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/content-policy\" target=\"_blank\" rel=\"noopener\">Content Policy – Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-05T03:18:29.610Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 3,
"readers_count": 2,
"score": 30.6,
"yours": false,
"topic_id": 158020,
"topic_slug": "can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc",
"internal": false,
"reflection": false,
"title": "Object Detection From Webcam With Webrtc",
"clicks": 1
},
{
"url": "https://huggingface.co/content-policy",
"internal": false,
"reflection": false,
"title": "Content Policy – Hugging Face",
"clicks": 1
},
{
"url": "https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu",
"internal": false,
"reflection": false,
"title": "SmolVLM realtime WebGPU - a Hugging Face Space by webml-community",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face/158020/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225839,
"name": "Danh Tran",
"username": "danhtran2mind",
"avatar_template": "/user_avatar/discuss.huggingface.co/danhtran2mind/{size}/48804_2.png",
"created_at": "2025-06-05T10:21:53.958Z",
"cooked": "<p>Hey, do you like cats. I love dogs. Thanks for your support.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-06-05T10:21:53.958Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 15.4,
"yours": false,
"topic_id": 158020,
"topic_slug": "can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face",
"display_username": "Danh Tran",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 96029,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face/158020/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225953,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-05T22:22:49.286Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-06-05T22:22:49.286Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 2,
"readers_count": 1,
"score": 10.4,
"yours": false,
"topic_id": 158020,
"topic_slug": "can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-build-a-real-time-object-detection-space-with-flask-or-fastapi-on-hugging-face/158020/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello Hugging Face community,</p>
<p>I’m planning to create a Hugging Face Space for real-time object detection, using Flask or FastAPI as the backend to process images or video streams with models like YOLO or DETR from the Hugging Face Hub.</p>
<p>I have two questions:</p>
<ol>
<li>
<p>Is it practical to run real-time object detection in a Space using Flask or FastAPI? What are the key limitations or best practices for deployment on Hugging Face Spaces?</p>
</li>
<li>
<p>I’m worried about violating Hugging Face’s policies. Could this type of Space risk my account being flagged or blocked? What steps can I take to ensure compliance with Hugging Face’s Terms of Service?</p>
</li>
</ol>
<p>Any advice, example Spaces, or links to relevant documentation would be greatly appreciated. Thank you!</p>
<p>Best,<br>
Danh Tran (danhtran2mind).</p>
|
<blockquote>
<p>1</p>
</blockquote>
<p>I think Gradio’s backend is FastAPI, so I think it should be possible…<br>
I don’t know much about Flask.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu">
<header class="source">
<a href="https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/e/f/ef25000e5ae4519258b040dcb9e8f298540a680c_2_690x372.png" class="thumbnail" data-dominant-color="8D7D9D" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu" target="_blank" rel="noopener">SmolVLM realtime WebGPU - a Hugging Face Space by webml-community</a></h3>
<p>This app lets you describe objects or scenes captured by your webcam. Simply enter an instruction like "What do you see?" and the app will generate a response based on the video feed. You control h...</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/1/1/1130c1c3169693f6b3624e85dda1c7b816ecbc0c.png" class="site-icon" data-dominant-color="F99D00" width="64" height="64">
<a href="https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc" target="_blank" rel="noopener">gradio.app</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/357;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/5/4532d24411c1a1e834a20ef8aada4248d8075883_2_690x357.jpeg" class="thumbnail" data-dominant-color="E5E1DE" width="690" height="357"></div>
<h3><a href="https://www.gradio.app/guides/object-detection-from-webcam-with-webrtc" target="_blank" rel="noopener">Object Detection From Webcam With Webrtc</a></h3>
<p>A Step-by-Step Gradio Tutorial</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<blockquote>
<p>2</p>
</blockquote>
<p>I think <code>5.</code> of this article mainly refers to prohibited acts in Spaces.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/content-policy">
<header class="source">
<a href="https://huggingface.co/content-policy" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/1X/5c4130fb1d8662cb15c5385a9fd9a44626aa4aa2_2_690x372.png" class="thumbnail" data-dominant-color="E9E7E2" width="690" height="372"></div>
<h3><a href="https://huggingface.co/content-policy" target="_blank" rel="noopener">Content Policy – Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
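<p>For reference, a minimal FastAPI sketch (an illustration with a placeholder endpoint name and example checkpoint, not an official recipe) that serves a Hub object-detection pipeline; on Spaces you would run it with <code>uvicorn app:app --host 0.0.0.0 --port 7860</code>. Per-request HTTP round trips cap the frame rate, which is why the WebRTC guide linked above is the better fit for true real-time video.</p>
<pre data-code-wrap="py"><code class="lang-py">import io

from fastapi import FastAPI, File, UploadFile
from PIL import Image
from transformers import pipeline

app = FastAPI()
# Load the model once at startup; "facebook/detr-resnet-50" is just an example
detector = pipeline("object-detection", model="facebook/detr-resnet-50")

@app.post("/detect")
async def detect(file: UploadFile = File(...)):
    # Read the uploaded image and run one detection pass
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    # Each item: {"score": ..., "label": ..., "box": {"xmin": ..., ...}}
    return {"detections": detector(image)}
</code></pre>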
|
Distil whisper models
|
https://discuss.huggingface.co/t/distil-whisper-models/157873
| 157,873
| 5
|
2025-06-03T17:47:56.338000Z
|
[
{
"id": 225486,
"name": "jpalvaradomil",
"username": "jpalvaradomil",
"avatar_template": "/user_avatar/discuss.huggingface.co/jpalvaradomil/{size}/48739_2.png",
"created_at": "2025-06-03T17:47:56.407Z",
"cooked": "<p>I need to distil whisper models. I have the python file that do that. It work in my pc, but i want to distil the large models.<br>\nI try to do that using the spaces (not free space) but i got the next message:<br>\nLaunch timed out space was not healthy after 30 min<br>\nHow to increment the launch time?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-03T17:47:56.407Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 9,
"readers_count": 8,
"score": 41.8,
"yours": false,
"topic_id": 157873,
"topic_slug": "distil-whisper-models",
"display_username": "jpalvaradomil",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95911,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/distil-whisper-models/157873/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225577,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-06-04T05:43:21.862Z",
"cooked": "<p>Maybe this setting?</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/hub/spaces-config-reference\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/hub/spaces-config-reference\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"FAF8F2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/hub/spaces-config-reference\" target=\"_blank\" rel=\"noopener\">Spaces Configuration Reference</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p><strong><code>startup_duration_timeout</code></strong>: <em>string</em><br>\nSet a custom startup duration timeout for your Space. This is the maximum time your Space is allowed to start before it times out and is flagged as unhealthy. Defaults to 30 minutes, but any valid duration (like <code>1h</code>, <code>30m</code>) is acceptable.</p>\n</blockquote>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-04T05:43:21.862Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 1.2,
"yours": false,
"topic_id": 157873,
"topic_slug": "distil-whisper-models",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/spaces-config-reference",
"internal": false,
"reflection": false,
"title": "Spaces Configuration Reference",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/distil-whisper-models/157873/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225694,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-04T17:43:51.330Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-04T17:43:51.330Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 157873,
"topic_slug": "distil-whisper-models",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/distil-whisper-models/157873/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I need to distil Whisper models. I have a Python file that does that. It works on my PC, but I want to distil the large models.<br>
I tried to do that using Spaces (not a free Space), but I got the following message:<br>
Launch timed out space was not healthy after 30 min<br>
How do I increase the launch time?</p>
|
<p>Maybe this setting?</p>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/hub/spaces-config-reference">
<header class="source">
<a href="https://huggingface.co/docs/hub/spaces-config-reference" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png" class="thumbnail" data-dominant-color="FAF8F2" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/hub/spaces-config-reference" target="_blank" rel="noopener">Spaces Configuration Reference</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<blockquote>
<p><strong><code>startup_duration_timeout</code></strong>: <em>string</em><br>
Set a custom startup duration timeout for your Space. This is the maximum time your Space is allowed to start before it times out and is flagged as unhealthy. Defaults to 30 minutes, but any valid duration (like <code>1h</code>, <code>30m</code>) is acceptable.</p>
</blockquote>
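<p>Concretely, that key goes in the YAML front matter at the top of the Space's <code>README.md</code>; a sketch (the <code>title</code> and <code>sdk</code> values are placeholders, and your Space will have other fields too):</p>
<pre data-code-wrap="yaml"><code class="lang-yaml">---
title: Distil Whisper Trainer
sdk: gradio
# allow up to 2 hours before the Space is flagged as unhealthy
startup_duration_timeout: 2h
---
</code></pre>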
|
Adding labels from different files
|
https://discuss.huggingface.co/t/adding-labels-from-different-files/157864
| 157,864
| 5
|
2025-06-03T16:34:10.583000Z
|
[
{
"id": 225476,
"name": "zacharia husain",
"username": "zacharia-husain",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/z/90ced4/{size}.png",
"created_at": "2025-06-03T16:34:10.654Z",
"cooked": "<p>If I have multiple texts in a folder and a csv file with token classification labels, how would I merge them together so when I index the dataset the text and labels will be in the same index (like how in the examples the imdb dataset has sentiment and text at the same index). My understanding is that you can only pass one file type to load_datasets, and map I cant figure out how to use map when the size of the labels varies (it depends on amount of tokens).</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-03T16:34:10.654Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9,
"reads": 6,
"readers_count": 5,
"score": 66.2,
"yours": false,
"topic_id": 157864,
"topic_slug": "adding-labels-from-different-files",
"display_username": "zacharia husain",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95904,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-labels-from-different-files/157864/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225479,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-06-03T16:48:56.739Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"zacharia-husain\" data-post=\"1\" data-topic=\"157864\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/z/90ced4/48.png\" class=\"avatar\"> zacharia-husain:</div>\n<blockquote>\n<p>If I have multiple texts in a folder and a csv file with token classification labels, how would I merge them together so when I index the dataset the text and labels will be in the same index (like how in the examples the imdb dataset has sentiment and text at the same index). My understanding is that you can only pass one file type to load_datasets, and map I cant figure out how to use map when the size of the labels varies (it depends on amount of tokens</p>\n</blockquote>\n</aside>\n<p>What I would do is:</p>\n<p>Read in your files<br>\nAlign your labels to your tokenized text. Try using tokenizer(…, return_offsets_mapping=True) helps you align labels to tokens.<br>\nThen create a dataset object manually.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-06-03T16:48:56.739Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 1,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 157864,
"topic_slug": "adding-labels-from-different-files",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-labels-from-different-files/157864/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225663,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-04T14:58:44.199Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-04T14:58:44.199Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 157864,
"topic_slug": "adding-labels-from-different-files",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-labels-from-different-files/157864/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>If I have multiple texts in a folder and a CSV file with token classification labels, how would I merge them together so that when I index the dataset, the text and labels are at the same index (like how, in the examples, the imdb dataset has sentiment and text at the same index)? My understanding is that you can only pass one file type to load_dataset, and I can't figure out how to use map when the size of the labels varies (it depends on the number of tokens).</p>
|
<aside class="quote no-group" data-username="zacharia-husain" data-post="1" data-topic="157864">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/z/90ced4/48.png" class="avatar"> zacharia-husain:</div>
<blockquote>
<p>If I have multiple texts in a folder and a csv file with token classification labels, how would I merge them together so when I index the dataset the text and labels will be in the same index (like how in the examples the imdb dataset has sentiment and text at the same index). My understanding is that you can only pass one file type to load_datasets, and map I cant figure out how to use map when the size of the labels varies (it depends on amount of tokens</p>
</blockquote>
</aside>
<p>What I would do is:</p>
<p>Read in your files.<br>
Align your labels to your tokenized text; tokenizer(…, return_offsets_mapping=True) helps you align labels to tokens.<br>
Then create a dataset object manually, as in the sketch below.</p>
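<p>A minimal sketch of that last step, under an assumed layout (one <code>.txt</code> file per example plus a <code>labels.csv</code> with hypothetical <code>file</code> and <code>labels</code> columns; adapt the names to your data):</p>
<pre data-code-wrap="py"><code class="lang-py">import csv
from pathlib import Path

from datasets import Dataset

texts, labels = [], []
with open("labels.csv", newline="") as f:
    for row in csv.DictReader(f):
        # one CSV row per text file; labels stored as space-separated word tags
        texts.append(Path("texts", row["file"]).read_text())
        labels.append(row["labels"].split())

# text and labels now share an index, like the imdb example
ds = Dataset.from_dict({"text": texts, "labels": labels})
print(ds[0])
</code></pre>
<p>Variable-length label lists are fine here; the subword alignment via <code>return_offsets_mapping</code> happens later, inside your tokenization <code>map</code>.</p>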
|
Generate: using k-v cache is faster but no difference to memory usage
|
https://discuss.huggingface.co/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272
| 31,272
| 9
|
2023-02-07T16:01:35.032000Z
|
[
{
"id": 57259,
"name": "Sanchit Gandhi",
"username": "sanchit-gandhi",
"avatar_template": "/user_avatar/discuss.huggingface.co/sanchit-gandhi/{size}/21280_2.png",
"created_at": "2023-02-07T16:01:35.122Z",
"cooked": "<p>Hello! <img src=\"https://emoji.discourse-cdn.com/apple/wave.png?v=12\" title=\":wave:\" class=\"emoji\" alt=\":wave:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>I’m benchmarking inference performance using Whisper and the <code>.generate()</code> method, switching between using/not using the <a href=\"https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.use_cache\">k-v cache</a>).</p>\n<p>My understanding is that when using the cache, inference should be faster (since we don’t recompute k-v states and cache them instead), but VRAM usage higher (since we keep the cached tensors in memory).</p>\n<p>However, I’m finding that when using cache that inference is faster, but VRAM stays the same <img src=\"https://emoji.discourse-cdn.com/apple/face_with_monocle.png?v=12\" title=\":face_with_monocle:\" class=\"emoji\" alt=\":face_with_monocle:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>Here are my results with/without cache for the tiny and base Whisper checkpoints:</p>\n<div class=\"md-table\">\n<table>\n<thead>\n<tr>\n<th></th>\n<th>Inf time with</th>\n<th>Inf time without</th>\n<th>VRAM with</th>\n<th>VRAM without</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>tiny</td>\n<td>9.0</td>\n<td>12.0</td>\n<td>1381</td>\n<td>1381</td>\n</tr>\n<tr>\n<td>base</td>\n<td>11.3</td>\n<td>18.4</td>\n<td>1523</td>\n<td>1523</td>\n</tr>\n</tbody>\n</table>\n</div><p>These experiments are run with greedy decoding, batch size of 1 and 73 eval samples on a 16GB V100. I’m computing VRAM by calling <code>nvidia-smi</code> and monitoring how much usage there is on the GPU.</p>\n<p>Is this as expected? Or should we see lower VRAM without cache?</p>\n<p>Notebook: <a href=\"https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_whisper_cache.ipynb\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">codesnippets/benchmark_whisper_cache.ipynb at main · sanchit-gandhi/codesnippets · GitHub</a></p>\n<details>\n<summary> Code snippet to reproduce: </summary>\n<pre><code class=\"lang-auto\">from datasets import load_dataset\nfrom transformers import WhisperConfig, WhisperForConditionalGeneration, WhisperProcessor\n\nimport torch\nfrom torch.utils.data import DataLoader\nimport numpy as np\n\nimport time\nfrom tqdm import tqdm\nimport subprocess as sp\nimport os\nimport sched\n\ncheckpoint_id = \"openai/whisper-tiny.en\"\nprocessor = WhisperProcessor.from_pretrained(checkpoint_id)\n\nmodel = WhisperForConditionalGeneration.from_pretrained(checkpoint_id)\nmodel.to(\"cuda\")\nmodel.half()\n\nlibrispeech = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\n\ndef preprocess(batch): \n batch[\"input_features\"] = processor(batch[\"audio\"][\"array\"], sampling_rate=16000, return_tensors=\"pt\").input_features[0]\n return batch\n\ndataset_processed = librispeech.map(preprocess, remove_columns=librispeech.column_names)\n\ndataloader = DataLoader(dataset_processed.with_format(\"torch\"), batch_size=1)\n\n\ndef get_gpu_memory():\n \"\"\"\n Python equivalent of nvidia-smi, copied from https://stackoverflow.com/a/67722676\n and verified as being equivalent ✅\n \"\"\"\n output_to_list = lambda x: x.decode('ascii').split('\\n')[:-1]\n \n COMMAND = \"nvidia-smi --query-gpu=memory.used --format=csv\"\n \n try:\n memory_use_info = output_to_list(sp.check_output(COMMAND.split(),stderr=sp.STDOUT))[1:]\n \n except sp.CalledProcessError as e:\n raise RuntimeError(\"command '{}' return with error (code {}): 
{}\".format(e.cmd, e.returncode, e.output))\n \n memory_use_values = [int(x.split()[0]) for i, x in enumerate(memory_use_info)]\n return memory_use_values\n\n# benchmark generation with cache\n\nstart = time.time()\nfor batch in tqdm(dataloader):\n predicted_ids = model.generate(batch[\"input_features\"].to(\"cuda\").half(), max_new_tokens=128, use_cache=True)\nruntime = time.time() - start\n\nprint(\"Runtime with: \", runtime)\nprint(\"VRAM with: \", get_gpu_memory()[0])\n\n# if we don't delete and re-load the model the GPU use is lower the second time round: warm-up effects?\ndel model\ntorch.cuda.empty_cache()\n\n# benchmark without cache\n\nmodel = WhisperForConditionalGeneration.from_pretrained(checkpoint_id)\nmodel.to(\"cuda\")\nmodel.half()\n\nstart = time.time()\nfor batch in tqdm(dataloader):\n predicted_ids = model.generate(batch[\"input_features\"].to(\"cuda\").half(), max_new_tokens=128, use_cache=False)\nruntime = time.time() - start\n\nprint(\"Runtime without: \", runtime)\nprint(\"VRAM without: \", get_gpu_memory()[0])\n</code></pre>\n<p><strong>Print Output:</strong></p>\n<pre><code class=\"lang-auto\">Runtime with: 8.990428924560547\nVRAM with: 1381\nRuntime without: 11.993675231933594\nVRAM without: 1381\n</code></pre>\n</details>\n<p>Thanks!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2023-02-08T10:05:24.408Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15561,
"reads": 249,
"readers_count": 248,
"score": 77799.8,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "Sanchit Gandhi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 6,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.use_cache",
"internal": false,
"reflection": false,
"title": "Generation",
"clicks": 1346
},
{
"url": "https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_whisper_cache.ipynb",
"internal": false,
"reflection": false,
"title": "codesnippets/benchmark_whisper_cache.ipynb at main · sanchit-gandhi/codesnippets · GitHub",
"clicks": 297
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 9227,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/1",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 57335,
"name": "Patrick von Platen",
"username": "patrickvonplaten",
"avatar_template": "/user_avatar/discuss.huggingface.co/patrickvonplaten/{size}/2171_2.png",
"created_at": "2023-02-08T11:56:56.097Z",
"cooked": "<p>Nice write-up!</p>\n<p>I think the decoder sequence length and the hidden states of the model might be too small to see a difference here in VRAM.</p>\n<p>The reason VRAM should be <strong>higher</strong> when caching the k,v states is because we cache the projected k,v states of every layer. This means that our cache is of size:</p>\n<p>2 * (hidden_size) * (num_layers) * (decoder_length)</p>\n<p>For VRAM computation, this memory is more or less always added to the peak memory of the computation graph.</p>\n<p>For comparison, we don’t have this memory when not caching. The memory we always have when not caching before doing the attention QK^T computation (which is probs the bottleneck) is 2 * (hidden_size) * 1 * (decoder_length) . Those are the q, v states right that are computed during attention.</p>\n<p>=> I expect that here (num_layers), (hidden_size) and (decoder_length) are too small to make a difference.</p>\n<p>The easiest thing to check here would be to use a bigger model and generate to much longer (set eos to None and generate to 256 tokens).</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2023-02-08T11:56:56.097Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 238,
"reads": 204,
"readers_count": 203,
"score": 1260.8,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "Patrick von Platen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 170,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 57336,
"name": "Patrick von Platen",
"username": "patrickvonplaten",
"avatar_template": "/user_avatar/discuss.huggingface.co/patrickvonplaten/{size}/2171_2.png",
"created_at": "2023-02-08T11:58:02.142Z",
"cooked": "<p>Overall this is an interesting finding though as it means that the k,v cache probably doesn’t play a big role in reducing VRAM for ASR and at that model size.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2023-02-08T11:58:02.142Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 43,
"reads": 187,
"readers_count": 186,
"score": 252.4,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "Patrick von Platen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 170,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 57349,
"name": "Joao Gante",
"username": "joaogante",
"avatar_template": "/user_avatar/discuss.huggingface.co/joaogante/{size}/20106_2.png",
"created_at": "2023-02-08T13:29:29.546Z",
"cooked": "<p><a class=\"mention\" href=\"/u/sanchit-gandhi\">@sanchit-gandhi</a> a few extra numbers – modifying your script to run on GPT-J with FP16 on an 3090, with <code>input_ids.shape[1]=16</code> and <code>max_new_tokens=256</code>, we get:</p>\n<ol>\n<li>\n<code>14071MB</code> of GPU usage with <code>use_cache=False</code>\n</li>\n<li>\n<code>13233MB</code> of GPU usage with <code>use_cache=True</code>\n</li>\n</ol>\n<p>The difference becomes more visible with large models and large sequence lengths <img src=\"https://emoji.discourse-cdn.com/apple/mag_right.png?v=12\" title=\":mag_right:\" class=\"emoji\" alt=\":mag_right:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2023-02-08T13:29:29.546Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 68,
"reads": 172,
"readers_count": 171,
"score": 374.4,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "Joao Gante",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 5671,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 57352,
"name": "Sanchit Gandhi",
"username": "sanchit-gandhi",
"avatar_template": "/user_avatar/discuss.huggingface.co/sanchit-gandhi/{size}/21280_2.png",
"created_at": "2023-02-08T14:21:33.999Z",
"cooked": "<p>Thank you very much for the detailed response!</p>\n<p>That makes sense that the difference in VRAM with/without using cache is not significant for a model with such low dimensionality.</p>\n<p>Repeating the experiment with the large-v2 checkpoint (hidden_size=1280, num_layers=32) and generating to 256 tokens yields measurable differences in VRAM, albeit still only marginal:</p>\n<pre><code class=\"lang-auto\">VRAM with: 7597\nVRAM without: 7515\nDiff: 82\n</code></pre>\n<p>(all values in MB)</p>\n<p>As we expect, the effect is amplified at 512 tokens, scaling (almost) linearly with <code>decoder_length</code>:</p>\n<pre><code class=\"lang-auto\">VRAM with: 7639\nVRAM without: 7519\nDiff: 120\n</code></pre>\n<p>ASR models tend to generate quite short decoder-lengths. For example, the average token length in the LibriSpeech validation corpus is just <strong>~20 tokens</strong>. Setting the max length accordingly, we get:</p>\n<pre><code class=\"lang-auto\">VRAM with: 7515\nVRAM without: 7511\nDiff: 4\n</code></pre>\n<p>So pretty insignificant! My intuition is that since VRAM difference with/without cache is proportional to decoder-length, k-v cache doesn’t have a big effect on VRAM for ASR models, even for larger checkpoints.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2023-02-08T14:21:33.999Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 220,
"reads": 164,
"readers_count": 163,
"score": 1112.8,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "Sanchit Gandhi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 9227,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225509,
"name": "vhr",
"username": "vhr1007",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/v/8e8cbc/{size}.png",
"created_at": "2025-06-03T21:25:14.414Z",
"cooked": "<p>Good Analysis, but generally you need to monitor max_cuda_allocation to know the max memory choke point in inference call, that will know usage of VRAM,</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-06-03T21:25:14.414Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 20.6,
"yours": false,
"topic_id": 31272,
"topic_slug": "generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage",
"display_username": "vhr",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95926,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/generate-using-k-v-cache-is-faster-but-no-difference-to-memory-usage/31272/6",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Hello! <img src="https://emoji.discourse-cdn.com/apple/wave.png?v=12" title=":wave:" class="emoji" alt=":wave:" loading="lazy" width="20" height="20"></p>
<p>I’m benchmarking inference performance using Whisper and the <code>.generate()</code> method, switching between using/not using the <a href="https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.use_cache">k-v cache</a>.</p>
<p>My understanding is that when using the cache, inference should be faster (since we don’t recompute k-v states and cache them instead), but VRAM usage higher (since we keep the cached tensors in memory).</p>
<p>However, I’m finding that when using the cache, inference is faster, but VRAM stays the same <img src="https://emoji.discourse-cdn.com/apple/face_with_monocle.png?v=12" title=":face_with_monocle:" class="emoji" alt=":face_with_monocle:" loading="lazy" width="20" height="20"></p>
<p>Here are my results with/without cache for the tiny and base Whisper checkpoints:</p>
<div class="md-table">
<table>
<thead>
<tr>
<th></th>
<th>Inf time with (s)</th>
<th>Inf time without (s)</th>
<th>VRAM with (MB)</th>
<th>VRAM without (MB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>tiny</td>
<td>9.0</td>
<td>12.0</td>
<td>1381</td>
<td>1381</td>
</tr>
<tr>
<td>base</td>
<td>11.3</td>
<td>18.4</td>
<td>1523</td>
<td>1523</td>
</tr>
</tbody>
</table>
</div><p>These experiments are run with greedy decoding, batch size of 1 and 73 eval samples on a 16GB V100. I’m computing VRAM by calling <code>nvidia-smi</code> and monitoring how much usage there is on the GPU.</p>
<p>Is this as expected? Or should we see lower VRAM without cache?</p>
<p>Notebook: <a href="https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_whisper_cache.ipynb" class="inline-onebox" rel="noopener nofollow ugc">codesnippets/benchmark_whisper_cache.ipynb at main · sanchit-gandhi/codesnippets · GitHub</a></p>
<details>
<summary> Code snippet to reproduce: </summary>
<pre><code class="lang-auto">from datasets import load_dataset
from transformers import WhisperConfig, WhisperForConditionalGeneration, WhisperProcessor
import torch
from torch.utils.data import DataLoader
import numpy as np
import time
from tqdm import tqdm
import subprocess as sp
import os
import sched
checkpoint_id = "openai/whisper-tiny.en"
processor = WhisperProcessor.from_pretrained(checkpoint_id)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint_id)
model.to("cuda")
model.half()
librispeech = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
def preprocess(batch):
batch["input_features"] = processor(batch["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_features[0]
return batch
dataset_processed = librispeech.map(preprocess, remove_columns=librispeech.column_names)
dataloader = DataLoader(dataset_processed.with_format("torch"), batch_size=1)
def get_gpu_memory():
"""
Python equivalent of nvidia-smi, copied from https://stackoverflow.com/a/67722676
and verified as being equivalent ✅
"""
output_to_list = lambda x: x.decode('ascii').split('\n')[:-1]
COMMAND = "nvidia-smi --query-gpu=memory.used --format=csv"
try:
memory_use_info = output_to_list(sp.check_output(COMMAND.split(),stderr=sp.STDOUT))[1:]
except sp.CalledProcessError as e:
raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))
memory_use_values = [int(x.split()[0]) for i, x in enumerate(memory_use_info)]
return memory_use_values
# benchmark generation with cache
start = time.time()
for batch in tqdm(dataloader):
predicted_ids = model.generate(batch["input_features"].to("cuda").half(), max_new_tokens=128, use_cache=True)
runtime = time.time() - start
print("Runtime with: ", runtime)
print("VRAM with: ", get_gpu_memory()[0])
# if we don't delete and re-load the model the GPU use is lower the second time round: warm-up effects?
del model
torch.cuda.empty_cache()
# benchmark without cache
model = WhisperForConditionalGeneration.from_pretrained(checkpoint_id)
model.to("cuda")
model.half()
start = time.time()
for batch in tqdm(dataloader):
predicted_ids = model.generate(batch["input_features"].to("cuda").half(), max_new_tokens=128, use_cache=False)
runtime = time.time() - start
print("Runtime without: ", runtime)
print("VRAM without: ", get_gpu_memory()[0])
</code></pre>
<p><strong>Print Output:</strong></p>
<pre><code class="lang-auto">Runtime with: 8.990428924560547
VRAM with: 1381
Runtime without: 11.993675231933594
VRAM without: 1381
</code></pre>
</details>
<p>Thanks!</p>
|
<p>Nice write-up!</p>
<p>I think the decoder sequence length and the hidden size of the model might be too small to see a difference here in VRAM.</p>
<p>The reason VRAM should be <strong>higher</strong> when caching the k,v states is that we cache the projected k,v states of every layer. This means that our cache is of size:</p>
<p>2 * (hidden_size) * (num_layers) * (decoder_length)</p>
<p>For VRAM computation, this memory is more or less always added to the peak memory of the computation graph.</p>
<p>For comparison, we don’t have this memory when not caching. The memory we always have when not caching, before doing the attention QK^T computation (which is probably the bottleneck), is 2 * (hidden_size) * 1 * (decoder_length). Those are the q and v states that are computed during attention.</p>
<p>=> I expect that here (num_layers), (hidden_size) and (decoder_length) are too small to make a difference.</p>
<p>The easiest thing to check here would be to use a bigger model and generate much longer sequences (set eos to None and generate to 256 tokens).</p>
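<p>A minimal sketch of how one might measure the cache overhead directly with PyTorch’s allocator statistics instead of <code>nvidia-smi</code> (which largely reflects the static process footprint). This is not from the original thread; the checkpoint and token counts are placeholders, and it assumes recent PyTorch and <code>transformers</code> APIs:</p>
<pre><code class="lang-auto">import torch
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Placeholder: a bigger checkpoint and a long forced decode, per the suggestion above.
checkpoint_id = "openai/whisper-small.en"
processor = WhisperProcessor.from_pretrained(checkpoint_id)
model = WhisperForConditionalGeneration.from_pretrained(checkpoint_id).to("cuda").half()

sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]
input_features = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_features.to("cuda").half()

for use_cache in (True, False):
    torch.cuda.reset_peak_memory_stats()  # reset the allocator's high-water mark
    model.generate(input_features, min_new_tokens=256, max_new_tokens=256, use_cache=use_cache)
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"use_cache={use_cache}: peak allocated {peak_mib:.1f} MiB")
</code></pre>
<p>Unlike <code>nvidia-smi</code>, the peak-allocated counter excludes the CUDA context, so the with/without-cache difference should become visible once the model and decode length are large enough.</p>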
|
What are the most effective recent approaches for predicting social media post virality?
|
https://discuss.huggingface.co/t/what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality/157384
| 157,384
| 13
|
2025-05-30T13:30:44.236000Z
|
[
{
"id": 224822,
"name": "DB",
"username": "catpawws",
"avatar_template": "/user_avatar/discuss.huggingface.co/catpawws/{size}/48526_2.png",
"created_at": "2025-05-30T13:30:44.300Z",
"cooked": "<p>I’m currently working on a project related to virality prediction . I came across this 2024 paper that combines BERT and CNN for Twitter virality classification:<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/link.png?v=14\" title=\":link:\" class=\"emoji\" alt=\":link:\" loading=\"lazy\" width=\"20\" height=\"20\"> <a href=\"https://ieeexplore.ieee.org/document/10913355\" rel=\"noopener nofollow ugc\">Virality Prediction on Twitter Using Combined CNN and BERT Models | IEEE Xplore</a></p>\n<p>Do you think this BERT+CNN hybrid is a good choice in 2024/2025?<br>\nAre there more advanced or better-performing models (e.g. graph-based, transformer-only, multimodal) that you’d recommend for this task?</p>\n<p>Any suggestions or insights from your experience would be greatly appreciated!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-30T13:30:44.300Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 52,
"reads": 7,
"readers_count": 6,
"score": 271.4,
"yours": false,
"topic_id": 157384,
"topic_slug": "what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality",
"display_username": "DB",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://ieeexplore.ieee.org/document/10913355",
"internal": false,
"reflection": false,
"title": "Virality Prediction on Twitter Using Combined CNN and BERT Models | IEEE Conference Publication | IEEE Xplore",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95548,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality/157384/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224888,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-30T23:48:53.073Z",
"cooked": "<p>I can’t find any methods other than BERT-based models…</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://ar5iv.labs.arxiv.org/html/2303.06120\">\n <header class=\"source\">\n\n <a href=\"https://ar5iv.labs.arxiv.org/html/2303.06120\" target=\"_blank\" rel=\"noopener\">ar5iv</a>\n </header>\n\n <article class=\"onebox-body\">\n <img width=\"500\" height=\"500\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/9/3975a5f8291a035912e41d87600fb10b5eace018_2_500x500.png\" class=\"thumbnail onebox-avatar\" data-dominant-color=\"A9634C\">\n\n<h3><a href=\"https://ar5iv.labs.arxiv.org/html/2303.06120\" target=\"_blank\" rel=\"noopener\">Measuring and Detecting Virality on Social Media: The Case of Twitter’s Viral...</a></h3>\n\n <p>Social media posts may go viral and reach large numbers of people within a short period of time. Such posts may threaten the public dialogue if they contain misleading content, making their early detection highly cruci…</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<p><a href=\"https://www.researchgate.net/publication/355473219_Virality_Prediction_for_News_Tweets_Using_RoBERTa\" class=\"onebox\" target=\"_blank\" rel=\"noopener\">https://www.researchgate.net/publication/355473219_Virality_Prediction_for_News_Tweets_Using_RoBERTa</a></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-30T23:48:53.073Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 5,
"readers_count": 4,
"score": 31,
"yours": false,
"topic_id": 157384,
"topic_slug": "what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://ar5iv.labs.arxiv.org/html/2303.06120",
"internal": false,
"reflection": false,
"title": "[2303.06120] Measuring and Detecting Virality on Social Media: The Case of Twitter’s Viral Tweets Topic",
"clicks": 2
},
{
"url": "https://www.researchgate.net/publication/355473219_Virality_Prediction_for_News_Tweets_Using_RoBERTa",
"internal": false,
"reflection": false,
"title": null,
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality/157384/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 225182,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-06-02T09:44:35.310Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-06-02T09:44:35.310Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 157384,
"topic_slug": "what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-are-the-most-effective-recent-approaches-for-predicting-social-media-post-virality/157384/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m currently working on a project related to virality prediction. I came across this 2024 paper that combines BERT and CNN for Twitter virality classification:<br>
<img src="https://emoji.discourse-cdn.com/apple/link.png?v=14" title=":link:" class="emoji" alt=":link:" loading="lazy" width="20" height="20"> <a href="https://ieeexplore.ieee.org/document/10913355" rel="noopener nofollow ugc">Virality Prediction on Twitter Using Combined CNN and BERT Models | IEEE Xplore</a></p>
<p>Do you think this BERT+CNN hybrid is a good choice in 2024/2025?<br>
Are there more advanced or better-performing models (e.g. graph-based, transformer-only, multimodal) that you’d recommend for this task?</p>
<p>Any suggestions or insights from your experience would be greatly appreciated!</p>
|
<p>I can’t find any methods other than BERT-based models…</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://ar5iv.labs.arxiv.org/html/2303.06120">
<header class="source">
<a href="https://ar5iv.labs.arxiv.org/html/2303.06120" target="_blank" rel="noopener">ar5iv</a>
</header>
<article class="onebox-body">
<img width="500" height="500" src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/9/3975a5f8291a035912e41d87600fb10b5eace018_2_500x500.png" class="thumbnail onebox-avatar" data-dominant-color="A9634C">
<h3><a href="https://ar5iv.labs.arxiv.org/html/2303.06120" target="_blank" rel="noopener">Measuring and Detecting Virality on Social Media: The Case of Twitter’s Viral...</a></h3>
<p>Social media posts may go viral and reach large numbers of people within a short period of time. Such posts may threaten the public dialogue if they contain misleading content, making their early detection highly cruci…</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p><a href="https://www.researchgate.net/publication/355473219_Virality_Prediction_for_News_Tweets_Using_RoBERTa" class="onebox" target="_blank" rel="noopener">https://www.researchgate.net/publication/355473219_Virality_Prediction_for_News_Tweets_Using_RoBERTa</a></p>
|
AI Agent Course
|
https://discuss.huggingface.co/t/ai-agent-course/157406
| 157,406
| 21
|
2025-05-30T16:10:43.005000Z
|
[
{
"id": 224848,
"name": "Chan Kam Wing",
"username": "WingNeville",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/e9a140/{size}.png",
"created_at": "2025-05-30T16:10:43.082Z",
"cooked": "<p>Hi everyone,</p>\n<p>I’m currently running this notebook:<br>\n<a href=\"https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb\" class=\"inline-onebox\">unit2/smolagents/code_agents.ipynb · agents-course/notebooks at main</a>, but it’s returning an error.</p>\n<p>So far, I’ve been unable to successfully run most of the examples in the course. I’m unsure if this is due to an issue with my account settings.</p>\n<p>Do you have any suggestions?</p>\n<h2><a name=\"p-224848-error-in-generating-model-output-provider-nscale-not-supported-available-values-auto-or-any-provider-from-black-forest-labs-cerebras-cohere-fal-ai-fireworks-ai-hf-inference-hyperbolic-nebius-novita-openai-replicate-sambanova-togetherpassing-auto-default-value-will-automatically-select-the-first-provider-available-for-the-model-sorted-by-the-users-order-in-httpshfcosettingsinference-providers-step-1-duration-001-seconds-1\" class=\"anchor\" href=\"#p-224848-error-in-generating-model-output-provider-nscale-not-supported-available-values-auto-or-any-provider-from-black-forest-labs-cerebras-cohere-fal-ai-fireworks-ai-hf-inference-hyperbolic-nebius-novita-openai-replicate-sambanova-togetherpassing-auto-default-value-will-automatically-select-the-first-provider-available-for-the-model-sorted-by-the-users-order-in-httpshfcosettingsinference-providers-step-1-duration-001-seconds-1\"></a>Error in generating model output:<br>\nProvider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’,<br>\n‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’,<br>\n‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for<br>\nthe model, sorted by the user’s order in <a href=\"https://hf.co/settings/inference-providers\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">Hugging Face – The AI community building the future.</a>.<br>\n[Step 1: Duration 0.01 seconds]</h2>\n<p>ValueError Traceback (most recent call last)<br>\n/usr/local/lib/python3.11/dist-packages/smolagents/agents.py in _step_stream(self, memory_step)<br>\n1495 else:<br>\n → 1496 chat_message: ChatMessage = self.model.generate(<br>\n1497 input_messages,</p>\n<p>8 frames<br>\nValueError: Provider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’, ‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’, ‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for the model, sorted by the user’s order in <a href=\"https://hf.co/settings/inference-providers\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">Hugging Face – The AI community building the future.</a>.</p>\n<p>The above exception was the direct cause of the following exception:</p>\n<p>AgentGenerationError Traceback (most recent call last)<br>\n/usr/local/lib/python3.11/dist-packages/smolagents/agents.py in _step_stream(self, memory_step)<br>\n1516 memory_step.model_output = output_text<br>\n1517 except Exception as e:<br>\n → 1518 raise AgentGenerationError(f\"Error in generating model output:\\n{e}\", self.logger) from e<br>\n1519<br>\n1520 ### Parse output ###</p>\n<p>AgentGenerationError: Error in generating model output:<br>\nProvider ‘nscale’ not supported. 
Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’, ‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’, ‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for the model, sorted by the user’s order in <a href=\"https://hf.co/settings/inference-providers\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">Hugging Face – The AI community building the future.</a>.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-30T16:10:43.082Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 89,
"reads": 38,
"readers_count": 37,
"score": 462.6,
"yours": false,
"topic_id": 157406,
"topic_slug": "ai-agent-course",
"display_username": "Chan Kam Wing",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb",
"internal": false,
"reflection": false,
"title": "unit2/smolagents/code_agents.ipynb · agents-course/notebooks at main",
"clicks": 16
},
{
"url": "https://hf.co/settings/inference-providers",
"internal": false,
"reflection": false,
"title": "Hugging Face – The AI community building the future.",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95264,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/ai-agent-course/157406/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224860,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-05-30T18:41:17.819Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"WingNeville\" data-post=\"1\" data-topic=\"157406\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/w/e9a140/48.png\" class=\"avatar\"> WingNeville:</div>\n<blockquote>\n<p>Error in generating model output:<br>\nProvider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’,<br>\n‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’,<br>\n‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for<br>\nthe model, sorted by the user’s order in <a href=\"https://hf.co/settings/inference-providers\" rel=\"noopener nofollow ugc\">Hugging Face – The AI community building the future.</a>.</p>\n</blockquote>\n</aside>\n<p>You are trying to use a provider called NScale. The backend doesn’t support that provider for that Model. Switch to auto and Huggingface will pick the first provider for you for that model.<br>\nAlternatively, you can go research the model on Huggingface and see what providers are available for that model and pass that arguement accordingly.</p>\n<p>Hope that helps <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-30T18:41:17.819Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 1,
"incoming_link_count": 2,
"reads": 28,
"readers_count": 27,
"score": 45.6,
"yours": false,
"topic_id": 157406,
"topic_slug": "ai-agent-course",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/ai-agent-course/157406/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224899,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-31T06:41:50.658Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-31T06:41:50.658Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 20,
"readers_count": 19,
"score": 4,
"yours": false,
"topic_id": 157406,
"topic_slug": "ai-agent-course",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/ai-agent-course/157406/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi everyone,</p>
<p>I’m currently running this notebook:<br>
<a href="https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb" class="inline-onebox">unit2/smolagents/code_agents.ipynb · agents-course/notebooks at main</a>, but it’s returning an error.</p>
<p>So far, I’ve been unable to successfully run most of the examples in the course. I’m unsure if this is due to an issue with my account settings.</p>
<p>Do you have any suggestions?</p>
<h2><a name="p-224848-error-in-generating-model-output-provider-nscale-not-supported-available-values-auto-or-any-provider-from-black-forest-labs-cerebras-cohere-fal-ai-fireworks-ai-hf-inference-hyperbolic-nebius-novita-openai-replicate-sambanova-togetherpassing-auto-default-value-will-automatically-select-the-first-provider-available-for-the-model-sorted-by-the-users-order-in-httpshfcosettingsinference-providers-step-1-duration-001-seconds-1" class="anchor" href="#p-224848-error-in-generating-model-output-provider-nscale-not-supported-available-values-auto-or-any-provider-from-black-forest-labs-cerebras-cohere-fal-ai-fireworks-ai-hf-inference-hyperbolic-nebius-novita-openai-replicate-sambanova-togetherpassing-auto-default-value-will-automatically-select-the-first-provider-available-for-the-model-sorted-by-the-users-order-in-httpshfcosettingsinference-providers-step-1-duration-001-seconds-1"></a>Error in generating model output:<br>
Provider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’,<br>
‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’,<br>
‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for<br>
the model, sorted by the user’s order in <a href="https://hf.co/settings/inference-providers" class="inline-onebox" rel="noopener nofollow ugc">Hugging Face – The AI community building the future.</a>.<br>
[Step 1: Duration 0.01 seconds]</h2>
<p>ValueError Traceback (most recent call last)<br>
/usr/local/lib/python3.11/dist-packages/smolagents/agents.py in _step_stream(self, memory_step)<br>
1495 else:<br>
→ 1496 chat_message: ChatMessage = self.model.generate(<br>
1497 input_messages,</p>
<p>8 frames<br>
ValueError: Provider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’, ‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’, ‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for the model, sorted by the user’s order in <a href="https://hf.co/settings/inference-providers" class="inline-onebox" rel="noopener nofollow ugc">Hugging Face – The AI community building the future.</a>.</p>
<p>The above exception was the direct cause of the following exception:</p>
<p>AgentGenerationError Traceback (most recent call last)<br>
/usr/local/lib/python3.11/dist-packages/smolagents/agents.py in _step_stream(self, memory_step)<br>
1516 memory_step.model_output = output_text<br>
1517 except Exception as e:<br>
→ 1518 raise AgentGenerationError(f"Error in generating model output:\n{e}", self.logger) from e<br>
1519<br>
1520 ### Parse output ###</p>
<p>AgentGenerationError: Error in generating model output:<br>
Provider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’, ‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’, ‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for the model, sorted by the user’s order in <a href="https://hf.co/settings/inference-providers" class="inline-onebox" rel="noopener nofollow ugc">Hugging Face – The AI community building the future.</a>.</p>
|
<aside class="quote no-group" data-username="WingNeville" data-post="1" data-topic="157406">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/w/e9a140/48.png" class="avatar"> WingNeville:</div>
<blockquote>
<p>Error in generating model output:<br>
Provider ‘nscale’ not supported. Available values: ‘auto’ or any provider from [‘black-forest-labs’, ‘cerebras’,<br>
‘cohere’, ‘fal-ai’, ‘fireworks-ai’, ‘hf-inference’, ‘hyperbolic’, ‘nebius’, ‘novita’, ‘openai’, ‘replicate’,<br>
‘sambanova’, ‘together’].Passing ‘auto’ (default value) will automatically select the first provider available for<br>
the model, sorted by the user’s order in <a href="https://hf.co/settings/inference-providers" rel="noopener nofollow ugc">Hugging Face – The AI community building the future.</a>.</p>
</blockquote>
</aside>
<p>You are trying to use a provider called NScale. The backend doesn’t support that provider for that model. Switch to <code>auto</code> and Hugging Face will pick the first available provider for that model for you.<br>
Alternatively, you can look the model up on Hugging Face, see which providers are available for it, and pass that argument accordingly.</p>
<p>Hope that helps <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
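<p>A minimal sketch of the fix (not from the thread; it assumes the <code>smolagents</code> <code>InferenceClientModel</code> API used in the course, and the model id is a placeholder), passing <code>provider="auto"</code> so Hugging Face picks the first provider enabled for that model:</p>
<pre><code class="lang-auto">from smolagents import CodeAgent, InferenceClientModel

# "auto" lets Hugging Face pick the first provider available for the model,
# following your order at https://hf.co/settings/inference-providers.
model = InferenceClientModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",  # placeholder; use the course's model id
    provider="auto",
)
agent = CodeAgent(tools=[], model=model)
agent.run("What is the 118th number in the Fibonacci sequence?")
</code></pre>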
|
Space won’t start - logs not found
|
https://discuss.huggingface.co/t/space-wont-start-logs-not-found/54149
| 54,149
| 24
|
2023-09-08T18:13:54.236000Z
|
[
{
"id": 88642,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T18:13:54.291Z",
"cooked": "<p>Here’s the error I’m seeing for Container logs:</p>\n<p>Error: Failed to load logs: Not Found. Logs are persisted for 30 days after the Space stops running.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T18:13:54.291Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2222,
"reads": 105,
"readers_count": 104,
"score": 10721,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/spaces-and-building-stuck-infra-side-issue-and-how-to-troubleshoot-further/54158/5",
"internal": true,
"reflection": true,
"title": "Spaces and \"Building\" stuck, infra side issue and how to troubleshoot further?",
"clicks": 3
},
{
"url": "https://discuss.huggingface.co/t/error-failed-to-load-logs-not-found-logs-are-persisted-for-30-days-after-the-space-stops-running/66922/4",
"internal": true,
"reflection": true,
"title": "Error: Failed to load logs: Not Found. Logs are persisted for 30 days after the Space stops running",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/1",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 88645,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-09-08T18:24:27.043Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/155elkhorn\">@155elkhorn</a> could you please share more details? do you have a public Space link to share? thanks</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T18:24:27.043Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 19,
"reads": 101,
"readers_count": 100,
"score": 110.2,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 88668,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T22:51:21.783Z",
"cooked": "<p>The space isn’t public, but here’s the link to the space: <a href=\"https://huggingface.co/spaces/PikeAndVine/SD-Inpaint-POC\">https://huggingface.co/spaces/PikeAndVine/SD-Inpaint-POC</a></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T22:51:21.783Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 95,
"readers_count": 94,
"score": 39,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/PikeAndVine/SD-Inpaint-POC",
"internal": false,
"reflection": false,
"title": null,
"clicks": 98
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88669,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T22:52:19.507Z",
"cooked": "<p>I went ahead and made it public for now in case that helps.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T22:52:19.507Z",
"reply_count": 1,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 94,
"readers_count": 93,
"score": 48.8,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 28476,
"username": "155elkhorn",
"name": "Dan Moen",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88670,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-09-08T23:04:09.045Z",
"cooked": "<p>thanks for sharing, I duplicate your Space for testing purposes and it build and run normally</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899.png\" data-download-href=\"/uploads/short-url/75CFkyvScOc1PIcZGM8HzrldTJf.png?dl=1\" title=\"image\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899_2_690x99.png\" alt=\"image\" data-base62-sha1=\"75CFkyvScOc1PIcZGM8HzrldTJf\" width=\"690\" height=\"99\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899_2_690x99.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899_2_1035x148.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899_2_1380x198.png 2x\" data-dominant-color=\"F8F9F9\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2270×326 39.8 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div><br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c.png\" data-download-href=\"/uploads/short-url/jnyAs051UDo6psIfJgyJDDyEXSc.png?dl=1\" title=\"image\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c_2_379x500.png\" alt=\"image\" data-base62-sha1=\"jnyAs051UDo6psIfJgyJDDyEXSc\" width=\"379\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c_2_379x500.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c_2_568x750.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c_2_758x1000.png 2x\" data-dominant-color=\"F6F6F6\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1004×1324 55.4 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>Could you please try a Factory Reboot?</p>\n<p>Another tip is, if you’re using the persistent data you set set <code>HF_HOME</code> to <code>/data/.huggingface</code> So you won’t need to re-download models every new build</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53.jpeg\" data-download-href=\"/uploads/short-url/W0vdMWyRm438t9UGguQ8lrmEGD.jpeg?dl=1\" title=\"image\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53_2_690x490.jpeg\" alt=\"image\" data-base62-sha1=\"W0vdMWyRm438t9UGguQ8lrmEGD\" width=\"690\" height=\"490\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53_2_690x490.jpeg, 
https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53_2_1035x735.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53_2_1380x980.jpeg 2x\" data-dominant-color=\"C8C9C9\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1640×1166 113 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 5,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:04:09.045Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 33,
"reads": 88,
"readers_count": 87,
"score": 177.6,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://us1.discourse-cdn.com/hellohellohello/original/3X/3/1/31b1f4edccbc639b56561a7868f474ee4d969899.png",
"internal": false,
"reflection": false,
"title": "31b1f4edccbc639b56561a7868f474ee4d969899.png",
"clicks": 0
},
{
"url": "https://us1.discourse-cdn.com/hellohellohello/original/3X/8/7/87d307a5cb99498bd53ffa806ad8d7257b65044c.png",
"internal": false,
"reflection": false,
"title": "87d307a5cb99498bd53ffa806ad8d7257b65044c.png",
"clicks": 0
},
{
"url": "https://us1.discourse-cdn.com/hellohellohello/original/3X/0/6/068ea7e642bcd846faaa950a04c261b413082d53.jpeg",
"internal": false,
"reflection": false,
"title": "068ea7e642bcd846faaa950a04c261b413082d53.jpeg",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 28476,
"username": "155elkhorn",
"name": "Dan Moen",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88674,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T23:09:31.854Z",
"cooked": "<p>I’ve done at least 5 factory reboots. I tried another one and here’s the error I’m getting:</p>\n<h1><a name=\"build-error-1\" class=\"anchor\" href=\"#build-error-1\"></a>Build error</h1>\n<h2><a name=\"build-failed-with-exit-code-1-2\" class=\"anchor\" href=\"#build-failed-with-exit-code-1-2\"></a>Build failed with exit code: 1</h2>\n<p>Build logs:</p>\n<pre><code class=\"lang-auto\">===== Build Queued at 2023-09-08 23:07:41 / Commit SHA: fd2693c =====\n\n--> FROM docker.io/nvidia/cuda:11.3.1-cudnn8-devel-ubuntu18.04@sha256:69cd988555eabe116f76acc754b363eee75f37674c23adb2b523f5fa32543984\nDONE 29.1s\n\n--> RUN apt-get update && apt-get install -y git make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev git-lfs \tffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \t\t&& rm -rf /var/lib/apt/lists/* \t&& git lfs install\n\n--> ERROR: failed commit on ref \"layer-sha256:c89166c8ea49f8e433445b622e665a321cf96442e5a4b86ca0d3d2b2812a8b6d\": unexpected commit digest sha256:0f494b781dd9bb64e7fff4a96d5be6526ca5b57377c14a5c2c572edbc3d8f6a4, expected sha256:c89166c8ea49f8e433445b622e665a321cf96442e5a4b86ca0d3d2b2812a8b6d: failed precondition\n</code></pre>",
"post_number": 6,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:09:31.854Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 76,
"readers_count": 75,
"score": 55.2,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 88677,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-09-08T23:12:31.403Z",
"cooked": "<p>Sorry, that’s very odd. Did you just duplicated it and got that error? Are you using persistent storage?</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:12:31.403Z",
"reply_count": 1,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 70,
"readers_count": 69,
"score": 24,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 28476,
"username": "155elkhorn",
"name": "Dan Moen",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88678,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T23:18:51.265Z",
"cooked": "<p>I just made a copy like you did and it actually started, yay!</p>\n<p>Yes, I have persistent storage turned on and I added that HF_HOME variable like you suggested.</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:18:51.265Z",
"reply_count": 1,
"reply_to_post_number": 7,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 72,
"readers_count": 71,
"score": 64.4,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/8",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88680,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-09-08T23:19:54.357Z",
"cooked": "<p>Sorry, for the issues, next week we could have <a class=\"mention\" href=\"/u/chris-rannou\">@chris-rannou</a> to have a look on the infra side thanks</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:19:54.357Z",
"reply_count": 0,
"reply_to_post_number": 8,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 72,
"readers_count": 71,
"score": 34.4,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 28476,
"username": "155elkhorn",
"name": "Dan Moen",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 88681,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-09-08T23:20:28.714Z",
"cooked": "<p>I have quite a few scripts pointed at this space via API, so would really prefer to get it running versus moving over to the copy.</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-09-08T23:20:28.714Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 70,
"readers_count": 69,
"score": 94,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/10",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 94166,
"name": "George",
"username": "wholewhale",
"avatar_template": "/user_avatar/discuss.huggingface.co/wholewhale/{size}/20295_2.png",
"created_at": "2023-10-12T21:13:19.761Z",
"cooked": "<p>I am getting the same Log error and build failure. <a href=\"https://huggingface.co/spaces/wholewhale/causewriter-chat-with-pdf-openai?logs=build\" class=\"inline-onebox\">Chat with PDF • OpenAI - a Hugging Face Space by wholewhale</a></p>",
"post_number": 11,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-12T21:13:19.761Z",
"reply_count": 1,
"reply_to_post_number": 10,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 61,
"readers_count": 60,
"score": 42.2,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "George",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/wholewhale/causewriter-chat-with-pdf-openai?logs=build",
"internal": false,
"reflection": false,
"title": "Chat with PDF • OpenAI - a Hugging Face Space by wholewhale",
"clicks": 15
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31052,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/11",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 28476,
"username": "155elkhorn",
"name": "Dan Moen",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 94169,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-10-12T21:30:15.099Z",
"cooked": "<p>Apologies, we had some internal issues on our infra, could you please try rebooting/factory rebooting now?</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-12T21:30:15.099Z",
"reply_count": 1,
"reply_to_post_number": 11,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 60,
"readers_count": 59,
"score": 27,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/12",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 31052,
"username": "wholewhale",
"name": "George",
"avatar_template": "/user_avatar/discuss.huggingface.co/wholewhale/{size}/20295_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 94170,
"name": "George",
"username": "wholewhale",
"avatar_template": "/user_avatar/discuss.huggingface.co/wholewhale/{size}/20295_2.png",
"created_at": "2023-10-12T21:32:10.662Z",
"cooked": "<p>Getting: \" 500</p>\n<p>Internal Error - We’re working hard to fix this as soon as possible!\"</p>\n<p>(TY for the quick reply)</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-12T21:32:10.662Z",
"reply_count": 1,
"reply_to_post_number": 12,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 63,
"readers_count": 62,
"score": 37.6,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "George",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31052,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/13",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 94171,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-10-12T21:39:44.083Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"wholewhale\" data-post=\"13\" data-topic=\"54149\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img loading=\"lazy\" alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/wholewhale/48/20295_2.png\" class=\"avatar\"> wholewhale:</div>\n<blockquote>\n<p>Getting: \" 500</p>\n<p>Internal Error - We’re working hard to fix this as soon as possible!\"</p>\n</blockquote>\n</aside>\n<p>Apologies, we’re in recovery mode, I’ll ping when things are back</p>",
"post_number": 14,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-12T21:39:44.083Z",
"reply_count": 2,
"reply_to_post_number": 13,
"quote_count": 1,
"incoming_link_count": 1,
"reads": 62,
"readers_count": 61,
"score": 117.4,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 4
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/14",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 4
}
],
"current_user_reaction": null,
"reaction_users_count": 4,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 94201,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-10-13T00:39:20.381Z",
"cooked": "<p>Apologies for the interruption, it should be back to normal now.</p>",
"post_number": 15,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-13T00:39:20.381Z",
"reply_count": 0,
"reply_to_post_number": 14,
"quote_count": 0,
"incoming_link_count": 10,
"reads": 49,
"readers_count": 48,
"score": 104.8,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/15",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 94234,
"name": "Sanjana K",
"username": "SanjanaKannan",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/ce7236/{size}.png",
"created_at": "2023-10-13T06:59:25.130Z",
"cooked": "<p><a class=\"mention\" href=\"/u/radames\">@radames</a> any idea by when it will be back to normal? I’m still facing the error</p>",
"post_number": 16,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-13T06:59:25.130Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 47,
"readers_count": 46,
"score": 24.4,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Sanjana K",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28627,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/16",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 94436,
"name": "Dan Moen",
"username": "155elkhorn",
"avatar_template": "/user_avatar/discuss.huggingface.co/155elkhorn/{size}/19313_2.png",
"created_at": "2023-10-14T15:11:02.165Z",
"cooked": "<p>Spaces would not start for me this morning, but after factory resets they are running.</p>",
"post_number": 17,
"post_type": 1,
"posts_count": 24,
"updated_at": "2023-10-14T15:11:02.165Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 43,
"readers_count": 42,
"score": 88.6,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Dan Moen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 28476,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/17",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 152003,
"name": "Jose Benitez",
"username": "joselobenitezg",
"avatar_template": "/user_avatar/discuss.huggingface.co/joselobenitezg/{size}/22024_2.png",
"created_at": "2024-08-27T06:12:23.257Z",
"cooked": "<p>I have the same situation right now! ZeroGPU just freeze in ‘Running’</p>",
"post_number": 18,
"post_type": 1,
"posts_count": 24,
"updated_at": "2024-08-27T06:12:23.257Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 15,
"readers_count": 14,
"score": 13,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Jose Benitez",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35634,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/18",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 152004,
"name": "Jose Benitez",
"username": "joselobenitezg",
"avatar_template": "/user_avatar/discuss.huggingface.co/joselobenitezg/{size}/22024_2.png",
"created_at": "2024-08-27T06:17:21.051Z",
"cooked": "<p>stuck in last commit <a href=\"https://huggingface.co/spaces/joselobenitezg/sapiens-demo\" class=\"inline-onebox\">Sapiens Demo - a Hugging Face Space by joselobenitezg</a></p>",
"post_number": 19,
"post_type": 1,
"posts_count": 24,
"updated_at": "2024-08-27T06:17:21.051Z",
"reply_count": 0,
"reply_to_post_number": 18,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 17,
"readers_count": 16,
"score": 3.4,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Jose Benitez",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/joselobenitezg/sapiens-demo",
"internal": false,
"reflection": false,
"title": "Sapiens Demo - a Hugging Face Space by joselobenitezg",
"clicks": 9
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35634,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/19",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 35634,
"username": "joselobenitezg",
"name": "Jose Benitez",
"avatar_template": "/user_avatar/discuss.huggingface.co/joselobenitezg/{size}/22024_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 152127,
"name": "Jose Benitez",
"username": "joselobenitezg",
"avatar_template": "/user_avatar/discuss.huggingface.co/joselobenitezg/{size}/22024_2.png",
"created_at": "2024-08-27T18:09:49.244Z",
"cooked": "<p><a class=\"mention\" href=\"/u/julien-c\">@julien-c</a> any idea?</p>",
"post_number": 20,
"post_type": 1,
"posts_count": 24,
"updated_at": "2024-08-27T18:09:49.244Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 15,
"readers_count": 14,
"score": 23,
"yours": false,
"topic_id": 54149,
"topic_slug": "space-wont-start-logs-not-found",
"display_username": "Jose Benitez",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35634,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/space-wont-start-logs-not-found/54149/20",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Here’s the error I’m seeing for Container logs:</p>
<p>Error: Failed to load logs: Not Found. Logs are persisted for 30 days after the Space stops running.</p>
|
<p>Apologies for the interruption, it should be back to normal now.</p>
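<p>For completeness: when a Space gets stuck like this, a reboot or factory reboot can also be triggered programmatically. Below is a minimal sketch using <code>huggingface_hub</code>; the repo id and token are placeholders, not from the thread.</p>
<pre><code class="lang-python">from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # write token of the Space owner (placeholder)

# Inspect the current state of the Space (e.g. RUNNING, BUILD_ERROR, PAUSED).
runtime = api.get_space_runtime("your-username/your-space")
print(runtime.stage)

# Plain restart; pass factory_reboot=True to also rebuild the image from scratch.
api.restart_space("your-username/your-space")
# api.restart_space("your-username/your-space", factory_reboot=True)
</code></pre>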
|
Why is Static Cache latency high?
|
https://discuss.huggingface.co/t/why-is-static-cache-latency-high/157280
| 157,280
| 9
|
2025-05-29T16:11:44.321000Z
|
[
{
"id": 224686,
"name": "Yuyao Huang",
"username": "exhyy",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/977dab/{size}.png",
"created_at": "2025-05-29T16:11:44.386Z",
"cooked": "<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/transformers/en/kv_cache\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/transformers/en/kv_cache\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/0/70d0e152f7d3fc4f2893b87211cdf6d62d6e763b_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F5F3ED\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/transformers/en/kv_cache\" target=\"_blank\" rel=\"noopener\">KV cache strategies</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<p>\nIn the above document, “Static Cache” is marked as having high latency. I’m finding this a bit counterintuitive. My understanding is that a Static Cache, by pre-allocating memory for the cache, should help avoid dynamic memory allocation during inference. This, in turn, should theoretically lead to a reduction in latency. Am I misunderstanding its implementation or the definition of “latency” in the document?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-29T16:11:44.386Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 30,
"reads": 4,
"readers_count": 3,
"score": 165.8,
"yours": false,
"topic_id": 157280,
"topic_slug": "why-is-static-cache-latency-high",
"display_username": "Yuyao Huang",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/en/kv_cache",
"internal": false,
"reflection": false,
"title": "KV cache strategies",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95473,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-is-static-cache-latency-high/157280/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224697,
"name": "Riley Fox",
"username": "Mdrnfox",
"avatar_template": "/user_avatar/discuss.huggingface.co/mdrnfox/{size}/47695_2.png",
"created_at": "2025-05-29T16:45:50.724Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"exhyy\" data-post=\"1\" data-topic=\"157280\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/e/977dab/48.png\" class=\"avatar\"> exhyy:</div>\n<blockquote>\n<p>In the above document, “Static Cache” is marked as having high latency. I’m finding this a bit counterintuitive. My understanding is that a Static Cache, by pre-allocating memory for the cache, should help avoid dynamic memory allocation during inference. This, in turn, should theoretically lead to a reduction in latency. Am I misunderstanding its implementation or the definition of “latency” in the document?</p>\n</blockquote>\n</aside>\n<p>This is how I interpreted it. Hugging Face docs says that Static Cache has “High” latency, it isn’t opposing the fact that pre-allocating memory can avoid dynamic allocations—instead, it’s telling you how fast generation runs by default, without any extra steps.</p>\n<p>Hope this helps <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-29T16:46:07.651Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 1,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 157280,
"topic_slug": "why-is-static-cache-latency-high",
"display_username": "Riley Fox",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94214,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-is-static-cache-latency-high/157280/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224775,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-30T08:01:14.932Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-30T08:01:14.932Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 157280,
"topic_slug": "why-is-static-cache-latency-high",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-is-static-cache-latency-high/157280/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/transformers/en/kv_cache">
<header class="source">
<a href="https://huggingface.co/docs/transformers/en/kv_cache" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/0/70d0e152f7d3fc4f2893b87211cdf6d62d6e763b_2_690x372.png" class="thumbnail" data-dominant-color="F5F3ED" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/transformers/en/kv_cache" target="_blank" rel="noopener">KV cache strategies</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>
In the above document, “Static Cache” is marked as having high latency. I’m finding this a bit counterintuitive. My understanding is that a Static Cache, by pre-allocating memory for the cache, should help avoid dynamic memory allocation during inference. This, in turn, should theoretically lead to a reduction in latency. Am I misunderstanding its implementation or the definition of “latency” in the document?</p>
|
<aside class="quote no-group" data-username="exhyy" data-post="1" data-topic="157280">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/e/977dab/48.png" class="avatar"> exhyy:</div>
<blockquote>
<p>In the above document, “Static Cache” is marked as having high latency. I’m finding this a bit counterintuitive. My understanding is that a Static Cache, by pre-allocating memory for the cache, should help avoid dynamic memory allocation during inference. This, in turn, should theoretically lead to a reduction in latency. Am I misunderstanding its implementation or the definition of “latency” in the document?</p>
</blockquote>
</aside>
<p>This is how I interpreted it: when the Hugging Face docs say that Static Cache has “High” latency, they aren’t disputing that pre-allocating memory can avoid dynamic allocations; they’re telling you how fast generation runs by default, without any extra steps.</p>
<p>Hope this helps <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
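<p>To make the distinction concrete, here is a minimal sketch (the model and settings are placeholders, not from the docs page): the static cache pre-allocates fixed-shape tensors, and that fixed shape is precisely what lets <code>torch.compile</code> work well. Without that extra compilation step, plain generation with a static cache is the slow path the docs table is describing.</p>
<pre><code class="lang-python">import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM illustrates the point.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("Hello", return_tensors="pt")

# Default dynamic cache: grows with the sequence, no setup needed.
out = model.generate(**inputs, max_new_tokens=20)

# Static cache on its own: pre-allocated to a fixed size, but this is the
# "high latency by default" case the docs table refers to.
out = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")

# The "extra step": compiling the forward pass, which the fixed cache shapes enable.
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
out = model.generate(**inputs, max_new_tokens=20, cache_implementation="static")
</code></pre>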
|
ZeroGPU space : No CUDA GPUs are available
|
https://discuss.huggingface.co/t/zerogpu-space-no-cuda-gpus-are-available/154885
| 154,885
| 24
|
2025-05-13T12:05:09.148000Z
|
[
{
"id": 221649,
"name": "Ibaraki Douji",
"username": "IbarakiDouji",
"avatar_template": "/user_avatar/discuss.huggingface.co/ibarakidouji/{size}/47435_2.png",
"created_at": "2025-05-13T12:05:09.219Z",
"cooked": "<p>Hello there,</p>\n<p>So i’m working on a ZeroGPU space, and i was able to generate some images out of it.</p>\n<p>Tho after a day, i wanted to share it with some friends and they are not able to generate (they are not logged, no the quota is not full, i also tried without login and had the same issue).</p>\n<p>Here is the failed logs :</p>\n<pre><code class=\"lang-auto\">2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod53d91e08_ca6f_4829_acd7_772d9f243c8d.slice%2Fcri-containerd-04c1f2c1ffa380d58455444191199b49c387cc8223de321c2ba7806ab5afb790.scope&taskId=140013534102432&enableQueue=true&tokenVersion=1&token=<hidden> \"HTTP/1.1 200 OK\"\n2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/allow?allowToken=30dde4f1969ce8a8e2506e28f806789a21b5458a9e8618389a54bb0f851483b7&pid=4746 \"HTTP/1.1 200 OK\"\n2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/release?allowToken=30dde4f1969ce8a8e2506e28f806789a21b5458a9e8618389a54bb0f851483b7&fail=true \"HTTP/1.1 200 OK\"\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py\", line 140, in worker_init\n torch.init(nvidia_uuid)\n File \"/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py\", line 373, in init\n torch.Tensor([0]).cuda()\n File \"/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py\", line 319, in _lazy_init\n torch._C._cuda_init()\nRuntimeError: No CUDA GPUs are available\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.10/site-packages/gradio/queueing.py\", line 536, in process_events\n response = await route_utils.call_process_api(\n File \"/usr/local/lib/python3.10/site-packages/gradio/route_utils.py\", line 322, in call_process_api\n output = await app.get_blocks().process_api(\n File \"/usr/local/lib/python3.10/site-packages/gradio/blocks.py\", line 1935, in process_api\n result = await self.call_function(\n File \"/usr/local/lib/python3.10/site-packages/gradio/blocks.py\", line 1520, in call_function\n prediction = await anyio.to_thread.run_sync( # type: ignore\n File \"/usr/local/lib/python3.10/site-packages/anyio/to_thread.py\", line 56, in run_sync\n return await get_async_backend().run_sync_in_worker_thread(\n File \"/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py\", line 2470, in run_sync_in_worker_thread\n return await future\n File \"/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py\", line 967, in run\n result = context.run(func, *args)\n File \"/usr/local/lib/python3.10/site-packages/gradio/utils.py\", line 826, in wrapper\n response = f(*args, **kwargs)\n File \"/usr/local/lib/python3.10/site-packages/gradio/utils.py\", line 826, in wrapper\n response = f(*args, **kwargs)\n File \"/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py\", line 214, in gradio_handler\n raise error(\"ZeroGPU worker error\", res.error_cls)\ngradio.exceptions.Error: 'RuntimeError'\n</code></pre>\n<p>and a working one :</p>\n<pre><code class=\"lang-auto\">2025-05-13 13:40:38 - httpx - INFO - HTTP Request: POST 
http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod53d91e08_ca6f_4829_acd7_772d9f243c8d.slice%2Fcri-containerd-04c1f2c1ffa380d58455444191199b49c387cc8223de321c2ba7806ab5afb790.scope&taskId=140013534102432&enableQueue=true&tokenVersion=1&token=<hidden> \"HTTP/1.1 200 OK\"\n2025-05-13 13:40:38 - httpx - INFO - HTTP Request: POST http://device-api.zero/allow?allowToken=da5eb1a48aafb766ccf710678d8812ca135ce74d51e310832bb0a7da156dd51f&pid=4523 \"HTTP/1.1 200 OK\"\n2025-05-13 13:40:41 - __main__ - INFO - Starting generation with parameters: {\n \"prompt\": \"masterpiece, best quality, amazing quality, 1girl\",\n \"negative_prompt\": \"sensitive, nsfw, explicit, bad quality, worst quality, worst detail, sketch, censor\",\n \"resolution\": \"1248 x 1824\",\n \"guidance_scale\": 7,\n \"num_inference_steps\": 28,\n \"seed\": 1857728698,\n \"sampler\": \"Euler a\",\n \"use_upscaler\": null\n}\n2025-05-13 13:40:49 - __main__ - INFO - Image 1/1 saved as ./outputs/20584bdd-e9bc-4691-8399-7bb96e8dcf7b.png\n2025-05-13 13:40:49 - __main__ - INFO - Generation completed successfully in 8.03 seconds\n2025-05-13 13:40:49 - httpx - INFO - HTTP Request: POST http://device-api.zero/release?allowToken=da5eb1a48aafb766ccf710678d8812ca135ce74d51e310832bb0a7da156dd51f&fail=false \"HTTP/1.1 200 OK\"\n</code></pre>\n<p>Yes, the <code>import spaces</code> is at the top.<br>\nNo i’m not using weird pipelines, just “lpw_stable_diffusion_xl” copied from the repo to work with “from_single file”</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-13T12:05:09.219Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 102,
"reads": 20,
"readers_count": 19,
"score": 519,
"yours": false,
"topic_id": 154885,
"topic_slug": "zerogpu-space-no-cuda-gpus-are-available",
"display_username": "Ibaraki Douji",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93790,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/zerogpu-space-no-cuda-gpus-are-available/154885/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221663,
"name": "Ibaraki Douji",
"username": "IbarakiDouji",
"avatar_template": "/user_avatar/discuss.huggingface.co/ibarakidouji/{size}/47435_2.png",
"created_at": "2025-05-13T13:12:43.972Z",
"cooked": "<p>Just after sending the message, i got the no GPU also on my account.</p>\n<p>And right now, it seems to be woking again both with and without account.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-13T13:12:43.972Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9,
"reads": 17,
"readers_count": 16,
"score": 63.4,
"yours": false,
"topic_id": 154885,
"topic_slug": "zerogpu-space-no-cuda-gpus-are-available",
"display_username": "Ibaraki Douji",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93790,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/zerogpu-space-no-cuda-gpus-are-available/154885/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221725,
"name": "Ibaraki Douji",
"username": "IbarakiDouji",
"avatar_template": "/user_avatar/discuss.huggingface.co/ibarakidouji/{size}/47435_2.png",
"created_at": "2025-05-13T19:31:45.960Z",
"cooked": "<p>After more time it happen again.</p>\n<p>Maybe it’s just there is too much ZeroGPU spaces used at the time.</p>\n<p>Just hope that someone can clarify the real cause of it.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-13T19:31:45.960Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 16,
"readers_count": 15,
"score": 38.2,
"yours": false,
"topic_id": 154885,
"topic_slug": "zerogpu-space-no-cuda-gpus-are-available",
"display_username": "Ibaraki Douji",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93790,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/zerogpu-space-no-cuda-gpus-are-available/154885/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221752,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-14T02:44:00.213Z",
"cooked": "<p>After replicating it, it seems to work fine now. It probably just comes and goes.</p>\n<p>The Zero GPU has just been replaced, so there might be a bug, so I’ll ping it just to be safe. <a class=\"mention\" href=\"/u/hysts\">@hysts</a> <a class=\"mention\" href=\"/u/michellehbn\">@michellehbn</a></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-14T02:44:00.213Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15,
"reads": 14,
"readers_count": 13,
"score": 122.8,
"yours": false,
"topic_id": 154885,
"topic_slug": "zerogpu-space-no-cuda-gpus-are-available",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/zerogpu-space-no-cuda-gpus-are-available/154885/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224277,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-27T09:30:20.561Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-27T09:30:20.561Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 11,
"yours": false,
"topic_id": 154885,
"topic_slug": "zerogpu-space-no-cuda-gpus-are-available",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/zerogpu-space-no-cuda-gpus-are-available/154885/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello there,</p>
<p>So I’m working on a ZeroGPU space, and I was able to generate some images out of it.</p>
<p>Though after a day, when I wanted to share it with some friends, they were not able to generate (they are not logged in, and no, the quota is not full; I also tried without logging in and had the same issue).</p>
<p>Here are the failed logs:</p>
<pre><code class="lang-auto">2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod53d91e08_ca6f_4829_acd7_772d9f243c8d.slice%2Fcri-containerd-04c1f2c1ffa380d58455444191199b49c387cc8223de321c2ba7806ab5afb790.scope&taskId=140013534102432&enableQueue=true&tokenVersion=1&token=<hidden> "HTTP/1.1 200 OK"
2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/allow?allowToken=30dde4f1969ce8a8e2506e28f806789a21b5458a9e8618389a54bb0f851483b7&pid=4746 "HTTP/1.1 200 OK"
2025-05-13 13:50:08 - httpx - INFO - HTTP Request: POST http://device-api.zero/release?allowToken=30dde4f1969ce8a8e2506e28f806789a21b5458a9e8618389a54bb0f851483b7&fail=true "HTTP/1.1 200 OK"
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 140, in worker_init
torch.init(nvidia_uuid)
File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 373, in init
torch.Tensor([0]).cuda()
File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 967, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler
raise error("ZeroGPU worker error", res.error_cls)
gradio.exceptions.Error: 'RuntimeError'
</code></pre>
<p>and a working one:</p>
<pre><code class="lang-auto">2025-05-13 13:40:38 - httpx - INFO - HTTP Request: POST http://device-api.zero/schedule?cgroupPath=%2Fkubepods.slice%2Fkubepods-burstable.slice%2Fkubepods-burstable-pod53d91e08_ca6f_4829_acd7_772d9f243c8d.slice%2Fcri-containerd-04c1f2c1ffa380d58455444191199b49c387cc8223de321c2ba7806ab5afb790.scope&taskId=140013534102432&enableQueue=true&tokenVersion=1&token=<hidden> "HTTP/1.1 200 OK"
2025-05-13 13:40:38 - httpx - INFO - HTTP Request: POST http://device-api.zero/allow?allowToken=da5eb1a48aafb766ccf710678d8812ca135ce74d51e310832bb0a7da156dd51f&pid=4523 "HTTP/1.1 200 OK"
2025-05-13 13:40:41 - __main__ - INFO - Starting generation with parameters: {
"prompt": "masterpiece, best quality, amazing quality, 1girl",
"negative_prompt": "sensitive, nsfw, explicit, bad quality, worst quality, worst detail, sketch, censor",
"resolution": "1248 x 1824",
"guidance_scale": 7,
"num_inference_steps": 28,
"seed": 1857728698,
"sampler": "Euler a",
"use_upscaler": null
}
2025-05-13 13:40:49 - __main__ - INFO - Image 1/1 saved as ./outputs/20584bdd-e9bc-4691-8399-7bb96e8dcf7b.png
2025-05-13 13:40:49 - __main__ - INFO - Generation completed successfully in 8.03 seconds
2025-05-13 13:40:49 - httpx - INFO - HTTP Request: POST http://device-api.zero/release?allowToken=da5eb1a48aafb766ccf710678d8812ca135ce74d51e310832bb0a7da156dd51f&fail=false "HTTP/1.1 200 OK"
</code></pre>
<p>Yes, the <code>import spaces</code> is at the top.<br>
No, I’m not using weird pipelines, just “lpw_stable_diffusion_xl” copied from the repo to work with “from_single_file”.</p>
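<p>For reference, this is roughly the pattern ZeroGPU expects; a minimal sketch, not the Space’s actual code (the SDXL pipeline and model id are illustrative). The GPU is only attached while a function decorated with <code>@spaces.GPU</code> is running, which is why <code>import spaces</code> has to come before anything that initializes CUDA:</p>
<pre><code class="lang-python">import spaces  # must be imported before anything that touches CUDA
import torch
from diffusers import StableDiffusionXLPipeline

# Loaded at startup on CPU; no GPU is attached yet at this point.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

@spaces.GPU(duration=60)  # a GPU is allocated only while this call runs
def generate(prompt: str, negative_prompt: str = ""):
    pipe.to("cuda")  # safe here: CUDA is available inside the decorated function
    return pipe(prompt, negative_prompt=negative_prompt).images[0]
</code></pre>
<p>Since the thread shows the same code succeeding and failing at different times with no changes, the error points at the scheduler/infra side rather than at this pattern.</p>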
|
<p>After replicating it, it seems to work fine now. It probably just comes and goes.</p>
<p>The ZeroGPU backend has just been replaced, so there might be a bug; I’ll ping just to be safe. <a class="mention" href="/u/hysts">@hysts</a> <a class="mention" href="/u/michellehbn">@michellehbn</a></p>
|
Building something that help people who really need help using ai
|
https://discuss.huggingface.co/t/building-something-that-help-people-who-really-need-help-using-ai/154301
| 154,301
| 9
|
2025-05-09T14:15:08.458000Z
|
[
{
"id": 220825,
"name": "Adnan Ahamed Farooqui",
"username": "adnanahmedfarooqui",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/71c47a/{size}.png",
"created_at": "2025-05-09T14:15:08.520Z",
"cooked": "<p>I want to make something like that using AI automation and other tools that will help different kinds of people.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-09T14:15:08.520Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 10,
"readers_count": 9,
"score": 47,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "Adnan Ahamed Farooqui",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90632,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220827,
"name": "Tonni Alex",
"username": "tonnii",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/t/a9adbd/{size}.png",
"created_at": "2025-05-09T14:19:57.020Z",
"cooked": "<p>That is a great idea. If you want to build something using AI automation and other tools to help different kinds of people, begin by deciding what problem you want to solve and who will use it. Once you know that, choose the right tools such as chatbots, automation platforms, or voice assistants, based on what is needed. Many tools are easy to use and do not require heavy coding. Build one small part at a time, test it with real users, and make sure it is simple and helpful for the people you want to support.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-09T14:19:57.164Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 10,
"readers_count": 9,
"score": 32,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "Tonni Alex",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93030,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": "Automatically removed quote of whole previous post.",
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221050,
"name": "Adnan Ahamed Farooqui",
"username": "adnanahmedfarooqui",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/71c47a/{size}.png",
"created_at": "2025-05-10T17:15:39.124Z",
"cooked": "<p>I am thinking of creating an AI technology that will help in the indoor mapping of different places, fully descriptive, which will help old age people and differently abled people to access those places easily. Can anyone help me with that</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-10T17:15:39.124Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "Adnan Ahamed Farooqui",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90632,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221201,
"name": "Mahmut C",
"username": "mahmutc",
"avatar_template": "/user_avatar/discuss.huggingface.co/mahmutc/{size}/52583_2.png",
"created_at": "2025-05-11T13:30:21.276Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/adnanahmedfarooqui\">@adnanahmedfarooqui</a></p>\n<p>Do you think something like this?<br>\n<strong>User:</strong> “Take me to the cardiology wing.”<br>\n<strong>AI Response:</strong> “You are 20 meters from the elevator. Take the elevator to the second floor. Upon exit, turn left and follow the tactile floor markings. A staff help desk will be on your right in 30 meters.”</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-11T13:30:21.276Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 35.8,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "Mahmut C",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61570,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221330,
"name": "Adnan Ahamed Farooqui",
"username": "adnanahmedfarooqui",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/71c47a/{size}.png",
"created_at": "2025-05-12T07:27:14.582Z",
"cooked": "<p>Yess exactly like this …can make further changes by getting user input that will help people to navigate the places easily…also in our map we can mark places that is fully accessible partially accessable and not accessible in outdoor map…</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-12T07:27:14.582Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "Adnan Ahamed Farooqui",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90632,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 61570,
"username": "mahmutc",
"name": "Mahmut C",
"avatar_template": "/user_avatar/discuss.huggingface.co/mahmutc/{size}/52583_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224274,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-27T09:00:06.119Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-05-27T09:00:06.119Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 154301,
"topic_slug": "building-something-that-help-people-who-really-need-help-using-ai",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/building-something-that-help-people-who-really-need-help-using-ai/154301/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I want to make something like that using AI automation and other tools that will help different kinds of people.</p>
|
<p>Yes, exactly like this. We can make further changes by getting user input that will help people navigate places easily. Also, in our outdoor map we can mark places that are fully accessible, partially accessible, or not accessible.</p>
|
Optimal Approach for Fine-Tuning LayoutLMv3 for Token Classification with 80 Labels
|
https://discuss.huggingface.co/t/optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels/156857
| 156,857
| 13
|
2025-05-26T11:29:11.157000Z
|
[
{
"id": 224129,
"name": "hugo pavy",
"username": "hugobee",
"avatar_template": "/user_avatar/discuss.huggingface.co/hugobee/{size}/48285_2.png",
"created_at": "2025-05-26T11:29:11.235Z",
"cooked": "<p>Hello everyone,</p>\n<p>I’m trying to extract medical information from PDF files using LayoutLMv3 for token classification.</p>\n<p>I’ve successfully fine-tuned the model for a few different kinds of tokens (name, date of birth, patient ID, etc.), but now I want to scale up to around 80 different labels.</p>\n<p>I’m wondering if it’s better to train one model for all labels or to decompose the task into multiple specialized models (like just models of around 10 labels). Any advice or experiences would be greatly appreciated!</p>\n<p>Has anyone encountered a similar issue or have any advice on the best approach? Thanks in advance for your help!</p>\n<p>Have a good day,</p>\n<p>Hugo</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-26T11:29:11.235Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 63,
"reads": 8,
"readers_count": 7,
"score": 286.6,
"yours": false,
"topic_id": 156857,
"topic_slug": "optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels",
"display_username": "hugo pavy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95134,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels/156857/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224136,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-26T13:13:15.723Z",
"cooked": "<blockquote>\n<p>if it’s better to train one model for all labels or to decompose the task into multiple specialized models (like just models of around 10 labels)</p>\n</blockquote>\n<p>Looking at the dataset used to train LayoutLMv2, it seems that a number of items within 20 is more appropriate. I think v3 probably has similar characteristics.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/datasets/FrancophonIA/XFUND\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/datasets/FrancophonIA/XFUND\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/a/8/a84a8f91d0938569e61932a18c86925e41647059_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"6854C0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/datasets/FrancophonIA/XFUND\" target=\"_blank\" rel=\"noopener\">FrancophonIA/XFUND · Datasets at Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Well, small models are often not suitable for processing many items at once, so it is safer to divide them into multiple models. Even if you continue to train a single model, it is a good idea to save the current successful weights somewhere.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-26T13:13:15.723Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 156857,
"topic_slug": "optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/datasets/FrancophonIA/XFUND",
"internal": false,
"reflection": false,
"title": "FrancophonIA/XFUND · Datasets at Hugging Face",
"clicks": 5
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels/156857/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224149,
"name": "hugo pavy",
"username": "hugobee",
"avatar_template": "/user_avatar/discuss.huggingface.co/hugobee/{size}/48285_2.png",
"created_at": "2025-05-26T14:57:05.139Z",
"cooked": "<p>Thanks you for your response! I’m gonna try that</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-26T14:57:05.139Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 156857,
"topic_slug": "optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels",
"display_username": "hugo pavy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95134,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels/156857/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224270,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-27T08:08:12.063Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-27T08:08:12.063Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 156857,
"topic_slug": "optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/optimal-approach-for-fine-tuning-layoutlmv3-for-token-classification-with-80-labels/156857/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello everyone,</p>
<p>I’m trying to extract medical information from PDF files using LayoutLMv3 for token classification.</p>
<p>I’ve successfully fine-tuned the model for a few different kinds of tokens (name, date of birth, patient ID, etc.), but now I want to scale up to around 80 different labels.</p>
<p>I’m wondering if it’s better to train one model for all labels or to decompose the task into multiple specialized models (like just models of around 10 labels). Any advice or experiences would be greatly appreciated!</p>
<p>Has anyone encountered a similar issue or have any advice on the best approach? Thanks in advance for your help!</p>
<p>Have a good day,</p>
<p>Hugo</p>
|
<blockquote>
<p>if it’s better to train one model for all labels or to decompose the task into multiple specialized models (like just models of around 10 labels)</p>
</blockquote>
<p>Looking at the dataset used to train LayoutLMv2, keeping the label count to around 20 or fewer seems more appropriate. I think v3 probably has similar characteristics.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/datasets/FrancophonIA/XFUND">
<header class="source">
<a href="https://huggingface.co/datasets/FrancophonIA/XFUND" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/a/8/a84a8f91d0938569e61932a18c86925e41647059_2_690x372.png" class="thumbnail" data-dominant-color="6854C0" width="690" height="372"></div>
<h3><a href="https://huggingface.co/datasets/FrancophonIA/XFUND" target="_blank" rel="noopener">FrancophonIA/XFUND · Datasets at Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Small models are often not well suited to handling many label types at once, so it is safer to split the task across multiple models. Even if you do continue training a single model, it is a good idea to save your current best weights somewhere.</p>
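<p>For reference, a minimal sketch (not from the thread) of setting up LayoutLMv3 for token classification with a large label set; the checkpoint and label names are illustrative, and the specialist-model split suggested above would simply use a smaller num_labels per model:</p>
<pre data-code-wrap="py"><code class="lang-py">from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# Illustrative label set: 80 labels for one model, or ~10-20 per
# specialist model if the task is split as discussed above.
NUM_LABELS = 80
id2label = {i: f"LABEL_{i}" for i in range(NUM_LABELS)}  # placeholder names
label2id = {v: k for k, v in id2label.items()}

# apply_ocr=False assumes you supply words and bounding boxes yourself
# (e.g. extracted from the PDFs); drop it to let the processor run OCR.
processor = LayoutLMv3Processor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False
)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base",
    num_labels=NUM_LABELS,
    id2label=id2label,
    label2id=label2id,
)
</code></pre>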
|
Need help to find old Embeddings I lost during PC installation
|
https://discuss.huggingface.co/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873
| 156,873
| 13
|
2025-05-26T14:26:01.784000Z
|
[
{
"id": 224147,
"name": "Mary",
"username": "fantasy-mary",
"avatar_template": "/user_avatar/discuss.huggingface.co/fantasy-mary/{size}/48307_2.png",
"created_at": "2025-05-26T14:26:01.849Z",
"cooked": "<p>Hi everyone,</p>\n<p>I am looking for help, I used some embeddings but after I reinstalled Windows to my PC I lost my StableDiffusion folder. Now I reinstalled StableDiffusion but I can’t find all embeddings.</p>\n<p>The specific embeddings I am looking for are called “fFaceDetail, SkinHairDetail, EyeDetail, OverallDetail and SkinDetailNeg-neg”. I did not rename them, I am 100% sure they are from civitai and all from one creator but I can’t find them there anymore.</p>\n<p>Maybe someone knows them, knows where I can find them or even got them by themself and are willing to share them.</p>\n<p>Thanks in advance <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-26T14:26:01.849Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 9,
"readers_count": 8,
"score": 96.8,
"yours": false,
"topic_id": 156873,
"topic_slug": "need-help-to-find-old-embeddings-i-lost-during-pc-installation",
"display_username": "Mary",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95164,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224159,
"name": "Adrian Araya",
"username": "aaraya",
"avatar_template": "/user_avatar/discuss.huggingface.co/aaraya/{size}/48313_2.png",
"created_at": "2025-05-26T16:21:49.567Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/fantasy-mary\">@fantasy-mary</a>, it’s a shame you lost your data <img src=\"https://emoji.discourse-cdn.com/apple/frowning.png?v=14\" title=\":frowning:\" class=\"emoji\" alt=\":frowning:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nI found this while searching the web. I hope it helps!</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/0/d05ad96c87bfec3705f747eac85eb0c802590906_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5C71A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings\" target=\"_blank\" rel=\"noopener\">bad-tomich1/xl_loras_and_checkpoint at main</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Adrian Araya<br>\nMachine Learning Engineer at <a href=\"http://RidgeRun.ai\" rel=\"noopener nofollow ugc\">RidgeRun.ai</a><br>\nContact us: <a href=\"mailto:[email protected]\">[email protected]</a></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-26T16:21:49.567Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 8,
"readers_count": 7,
"score": 41.6,
"yours": false,
"topic_id": 156873,
"topic_slug": "need-help-to-find-old-embeddings-i-lost-during-pc-installation",
"display_username": "Adrian Araya",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings",
"internal": false,
"reflection": false,
"title": "bad-tomich1/xl_loras_and_checkpoint at main",
"clicks": 4
},
{
"url": "http://RidgeRun.ai",
"internal": false,
"reflection": false,
"title": null,
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 74204,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 224162,
"name": "Mary",
"username": "fantasy-mary",
"avatar_template": "/user_avatar/discuss.huggingface.co/fantasy-mary/{size}/48307_2.png",
"created_at": "2025-05-26T16:39:42.768Z",
"cooked": "<p>Oh my god you are great, thank you !!<br>\nI searched for it the whole day and could not find them.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-26T16:39:42.768Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 36.6,
"yours": false,
"topic_id": 156873,
"topic_slug": "need-help-to-find-old-embeddings-i-lost-during-pc-installation",
"display_username": "Mary",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 95164,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 74204,
"username": "aaraya",
"name": "Adrian Araya",
"avatar_template": "/user_avatar/discuss.huggingface.co/aaraya/{size}/48313_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224164,
"name": "Adrian Araya",
"username": "aaraya",
"avatar_template": "/user_avatar/discuss.huggingface.co/aaraya/{size}/48313_2.png",
"created_at": "2025-05-26T16:43:11.287Z",
"cooked": "<p>I’m glad it worked for you, have a nice day!</p>\n<hr>\n<p>Adrian Araya<br>\nMachine Learning Engineer at <a href=\"http://RidgeRun.ai\" rel=\"noopener nofollow ugc\">RidgeRun.ai</a><br>\nContact us: <a href=\"mailto:[email protected]\">[email protected]</a></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-27T08:02:23.368Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 1.6,
"yours": false,
"topic_id": 156873,
"topic_slug": "need-help-to-find-old-embeddings-i-lost-during-pc-installation",
"display_username": "Adrian Araya",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://RidgeRun.ai",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 74204,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 95164,
"username": "fantasy-mary",
"name": "Mary",
"avatar_template": "/user_avatar/discuss.huggingface.co/fantasy-mary/{size}/48307_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 224249,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-27T04:43:22.509Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-27T04:43:22.509Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 156873,
"topic_slug": "need-help-to-find-old-embeddings-i-lost-during-pc-installation",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/need-help-to-find-old-embeddings-i-lost-during-pc-installation/156873/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi everyone,</p>
<p>I am looking for help: I used some embeddings, but after I reinstalled Windows on my PC I lost my StableDiffusion folder. Now I have reinstalled StableDiffusion, but I can’t find all of the embeddings.</p>
<p>The specific embeddings I am looking for are called “fFaceDetail, SkinHairDetail, EyeDetail, OverallDetail and SkinDetailNeg-neg”. I did not rename them; I am 100% sure they are from civitai and all from one creator, but I can’t find them there anymore.</p>
<p>Maybe someone knows them, knows where I can find them, or even has them themselves and is willing to share.</p>
<p>Thanks in advance <img src="https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14" title=":slight_smile:" class="emoji" alt=":slight_smile:" loading="lazy" width="20" height="20"></p>
|
<p>Hi <a class="mention" href="/u/fantasy-mary">@fantasy-mary</a>, it’s a shame you lost your data <img src="https://emoji.discourse-cdn.com/apple/frowning.png?v=14" title=":frowning:" class="emoji" alt=":frowning:" loading="lazy" width="20" height="20"><br>
I found this while searching the web. I hope it helps!</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings">
<header class="source">
<a href="https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/0/d05ad96c87bfec3705f747eac85eb0c802590906_2_690x372.png" class="thumbnail" data-dominant-color="5C71A4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/bad-tomich1/xl_loras_and_checkpoint/tree/main/models/embeddings" target="_blank" rel="noopener">bad-tomich1/xl_loras_and_checkpoint at main</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Adrian Araya<br>
Machine Learning Engineer at <a href="http://RidgeRun.ai" rel="noopener nofollow ugc">RidgeRun.ai</a><br>
Contact us: <a href="mailto:[email protected]">[email protected]</a></p>
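<p>If the files live in that repo, they can also be fetched programmatically with huggingface_hub. A sketch follows; the filename is hypothetical, so browse the repo’s models/embeddings folder for the exact names first:</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import hf_hub_download

# Hypothetical filename - check models/embeddings in the repo above
# for the real embedding file names before downloading.
local_path = hf_hub_download(
    repo_id="bad-tomich1/xl_loras_and_checkpoint",
    filename="models/embeddings/EyeDetail.pt",
)
print(local_path)  # cached file; copy it into your StableDiffusion embeddings folder
</code></pre>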
|
[RuntimeError] GPU is required to quantize or run quantize model – Qwen1.5-0.5B-Chat in my Space
|
https://discuss.huggingface.co/t/runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space/156535
| 156,535
| 5
|
2025-05-23T15:47:21.883000Z
|
[
{
"id": 223731,
"name": "I'm cute",
"username": "funme",
"avatar_template": "/user_avatar/discuss.huggingface.co/funme/{size}/48148_2.png",
"created_at": "2025-05-23T15:47:21.975Z",
"cooked": "<p>Hello everyone😊,<br>\nI’d like to test the model on the free CPU environment—do you have any suggestions?</p>\n<p>I’m encountering an error when trying to deploy the <strong>Qwen1.5-0.5B-Chat</strong> model in my Hugging Face Space running on CPU-only (free) .</p>\n<p><a href=\"https://huggingface.co/spaces/funme/MyQwen1.5-0.5B-Chat\">MyQwen1.5 0.5B Chat - a Hugging Face Space by funme</a></p>\n<p>Thank you <img src=\"https://emoji.discourse-cdn.com/apple/grinning_face.png?v=14\" title=\":grinning_face:\" class=\"emoji\" alt=\":grinning_face:\" loading=\"lazy\" width=\"20\" height=\"20\"><br>\nHere the full log: tokenizer_config.json: 0%| | 0.00/1.29k [00:00<?, ?B/s]<br>\ntokenizer_config.json: 100%|██████████| 1.29k/1.29k [00:00<00:00, 7.24MB/s]<br>\nvocab.json: 0%| | 0.00/2.78M [00:00<?, ?B/s]<br>\nvocab.json: 100%|██████████| 2.78M/2.78M [00:00<00:00, 27.1MB/s]<br>\nmerges.txt: 0%| | 0.00/1.67M [00:00<?, ?B/s]<br>\nmerges.txt: 100%|██████████| 1.67M/1.67M [00:00<00:00, 31.1MB/s]<br>\ntokenizer.json: 0%| | 0.00/7.03M [00:00<?, ?B/s]<br>\ntokenizer.json: 100%|██████████| 7.03M/7.03M [00:00<00:00, 58.3MB/s]<br>\nconfig.json: 0%| | 0.00/1.26k [00:00<?, ?B/s]<br>\nconfig.json: 100%|██████████| 1.26k/1.26k [00:00<00:00, 7.28MB/s]<br>\nTraceback (most recent call last):<br>\nFile “/home/user/app/app.py”, line 9, in <br>\nmodel = AutoModelForCausalLM.from_pretrained(<br>\nFile “/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py”, line 571, in from_pretrained<br>\nreturn model_class.from_pretrained(<br>\nFile “/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py”, line 309, in _wrapper<br>\nreturn func(*args, **kwargs)<br>\nFile “/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py”, line 4389, in from_pretrained<br>\nhf_quantizer.validate_environment(<br>\nFile “/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_gptq.py”, line 65, in validate_environment<br>\nraise RuntimeError(“GPU is required to quantize or run quantize model.”)<br>\nRuntimeError: GPU is required to quantize or run quantize model.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-23T15:47:21.975Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 185,
"reads": 6,
"readers_count": 5,
"score": 906.2,
"yours": false,
"topic_id": 156535,
"topic_slug": "runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space",
"display_username": "I'm cute",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/funme/MyQwen1.5-0.5B-Chat",
"internal": false,
"reflection": false,
"title": "MyQwen1.5 0.5B Chat - a Hugging Face Space by funme",
"clicks": 4
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94919,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space/156535/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223733,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-23T15:57:10.536Z",
"cooked": "<p>It may be possible to use a quantized model in a CPU environment, but it would probably be faster to simply use a non-quantized model in this case.</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">#MODEL_ID = \"Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4\"\nMODEL_ID = \"Qwen/Qwen1.5-0.5B-Chat\"\n</code></pre>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/transformers/main/en/quantization/gptq\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/transformers/main/en/quantization/gptq\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/0/70d0e152f7d3fc4f2893b87211cdf6d62d6e763b_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F5F3ED\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/transformers/main/en/quantization/gptq\" target=\"_blank\" rel=\"noopener\">GPTQ</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"37885\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/a/47e85d/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/loading-quantized-model-on-cpu-only/37885\">Loading quantized model on CPU only</a> <a class=\"badge-category__wrapper \" href=\"/c/transformers/9\"><span data-category-id=\"9\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Transformers library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Transformers</span></span></a>\n </div>\n <blockquote>\n Im currently trying to run BloomZ 7b1 on a server with ~31GB available ram. Without quantization loading the model starts filling up swap, which is far from desirable. I tried enabling quantization with load_in_8bit: \nfrom transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer\nimport torch\n\nmodelPath = \"/mnt/backup1/BLOOM/\"\n\ndevice = torch.device(\"cpu\")\ntokenizer = AutoTokenizer.from_pretrained(modelPath)\nmodel = AutoModelForCausalLM.from_pretrained(modelPath, device_map=\"auto\",…\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-23T15:57:10.536Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 4,
"readers_count": 3,
"score": 25.8,
"yours": false,
"topic_id": 156535,
"topic_slug": "runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/main/en/quantization/gptq",
"internal": false,
"reflection": false,
"title": "GPTQ",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/loading-quantized-model-on-cpu-only/37885",
"internal": true,
"reflection": false,
"title": "Loading quantized model on CPU only",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space/156535/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223734,
"name": "I'm cute",
"username": "funme",
"avatar_template": "/user_avatar/discuss.huggingface.co/funme/{size}/48148_2.png",
"created_at": "2025-05-23T16:04:58.404Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"John6666\" data-post=\"2\" data-topic=\"156535\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png\" class=\"avatar\"> John6666:</div>\n<blockquote>\n<p><code>Qwen/Qwen1.5-0.5B-Chat</code></p>\n</blockquote>\n</aside>\n<p>Thank you😊 , I need a model size smaller than 700 MB , I’m going to change model, if I can’t use this model</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-23T16:04:58.404Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 1,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 156535,
"topic_slug": "runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space",
"display_username": "I'm cute",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94919,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space/156535/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223783,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-24T04:05:31.298Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-24T04:05:31.298Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 156535,
"topic_slug": "runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/runtimeerror-gpu-is-required-to-quantize-or-run-quantize-model-qwen1-5-0-5b-chat-in-my-space/156535/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello everyone😊,<br>
I’d like to test the model on the free CPU environment—do you have any suggestions?</p>
<p>I’m encountering an error when trying to deploy the <strong>Qwen1.5-0.5B-Chat</strong> model in my Hugging Face Space running on the CPU-only (free) tier.</p>
<p><a href="https://huggingface.co/spaces/funme/MyQwen1.5-0.5B-Chat">MyQwen1.5 0.5B Chat - a Hugging Face Space by funme</a></p>
<p>Thank you <img src="https://emoji.discourse-cdn.com/apple/grinning_face.png?v=14" title=":grinning_face:" class="emoji" alt=":grinning_face:" loading="lazy" width="20" height="20"></p>
<p>Here is the full log:</p>
<pre><code class="lang-auto">tokenizer_config.json: 0%| | 0.00/1.29k [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████| 1.29k/1.29k [00:00<00:00, 7.24MB/s]
vocab.json: 0%| | 0.00/2.78M [00:00<?, ?B/s]
vocab.json: 100%|██████████| 2.78M/2.78M [00:00<00:00, 27.1MB/s]
merges.txt: 0%| | 0.00/1.67M [00:00<?, ?B/s]
merges.txt: 100%|██████████| 1.67M/1.67M [00:00<00:00, 31.1MB/s]
tokenizer.json: 0%| | 0.00/7.03M [00:00<?, ?B/s]
tokenizer.json: 100%|██████████| 7.03M/7.03M [00:00<00:00, 58.3MB/s]
config.json: 0%| | 0.00/1.26k [00:00<?, ?B/s]
config.json: 100%|██████████| 1.26k/1.26k [00:00<00:00, 7.28MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 309, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4389, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_gptq.py", line 65, in validate_environment
    raise RuntimeError("GPU is required to quantize or run quantize model.")
RuntimeError: GPU is required to quantize or run quantize model.
</code></pre>
|
<aside class="quote no-group" data-username="John6666" data-post="2" data-topic="156535">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/john6666/48/27664_2.png" class="avatar"> John6666:</div>
<blockquote>
<p><code>Qwen/Qwen1.5-0.5B-Chat</code></p>
</blockquote>
</aside>
<p>Thank you 😊. I need a model smaller than 700 MB, so I’ll switch to a different model if I can’t use this one.</p>
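<p>For reference, a minimal sketch of the non-quantized CPU path suggested above (the prompt and generation settings are illustrative):</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen1.5-0.5B-Chat"  # non-quantized, so no GPU-only GPTQ check

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# float32 is the safe default for CPU-only Spaces
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float32)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
</code></pre>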
|
Configuration error, deleted readme.md
|
https://discuss.huggingface.co/t/configuration-error-deleted-readme-md/39258
| 39,258
| 24
|
2023-05-09T12:39:22.525000Z
|
[
{
"id": 68623,
"name": "Javed",
"username": "JavedA",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/3bc359/{size}.png",
"created_at": "2023-05-09T12:39:22.584Z",
"cooked": "<p>Hi, I deleted my README.md pushed it and when I created a new one, pushing it won’t work.<br>\nThe repo is: <a href=\"https://huggingface.co/spaces/JavedA/master_Thesis\" class=\"inline-onebox\">Master Thesis - a Hugging Face Space by JavedA</a></p>\n<p>It tells me that there is a configuration error. However, I cannot create a README, neither locally to push it nor using the web view.</p>\n<p>Thank you for your time and effort</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2023-05-09T12:39:53.309Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 725,
"reads": 27,
"readers_count": 26,
"score": 3565.4,
"yours": false,
"topic_id": 39258,
"topic_slug": "configuration-error-deleted-readme-md",
"display_username": "Javed",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/JavedA/master_Thesis",
"internal": false,
"reflection": false,
"title": "Master Thesis - a Hugging Face Space by JavedA",
"clicks": 5
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 18152,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/configuration-error-deleted-readme-md/39258/1",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 68625,
"name": "Javed",
"username": "JavedA",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/3bc359/{size}.png",
"created_at": "2023-05-09T12:54:14.652Z",
"cooked": "<p>The issue could be solved - I do not know why it worked this time. I just copied the README from a test space and inserted it. Maybe the additional: <code>Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference</code> solved the issue.</p>\n<p>Anyhow, the issue could be resolved by simply using the following content for the readme.md</p>\n<pre><code class=\"lang-auto\">\n---\ntitle: Test\nemoji: ⚡\ncolorFrom: pink\ncolorTo: blue\nsdk: static\npinned: false\n---\n\nCheck out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2023-05-09T12:54:14.652Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 11,
"reads": 26,
"readers_count": 25,
"score": 90.2,
"yours": false,
"topic_id": 39258,
"topic_slug": "configuration-error-deleted-readme-md",
"display_username": "Javed",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 18152,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/configuration-error-deleted-readme-md/39258/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 183840,
"name": "J Blu",
"username": "johnblues",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/f475e1/{size}.png",
"created_at": "2024-11-24T05:30:03.457Z",
"cooked": "<p>For me it was also making sure of the filename case. README.md.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2024-11-24T05:30:03.457Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 13,
"readers_count": 12,
"score": 42.6,
"yours": false,
"topic_id": 39258,
"topic_slug": "configuration-error-deleted-readme-md",
"display_username": "J Blu",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 48868,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/configuration-error-deleted-readme-md/39258/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223647,
"name": "Diseph D",
"username": "sephdev",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/c4cdca/{size}.png",
"created_at": "2025-05-23T06:48:01.080Z",
"cooked": "<p>Naming the file in all caps solved mine too</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-23T06:48:39.734Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 39258,
"topic_slug": "configuration-error-deleted-readme-md",
"display_username": "Diseph D",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94869,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/configuration-error-deleted-readme-md/39258/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 48868,
"username": "johnblues",
"name": "J Blu",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/f475e1/{size}.png"
},
"action_code": null,
"via_email": null
}
] |
<p>Hi, I deleted my README.md and pushed the change, but when I created a new one, pushing it wouldn’t work.<br>
The repo is: <a href="https://huggingface.co/spaces/JavedA/master_Thesis" class="inline-onebox">Master Thesis - a Hugging Face Space by JavedA</a></p>
<p>It tells me that there is a configuration error. However, I cannot create a README, neither locally (to push it) nor via the web view.</p>
<p>Thank you for your time and effort</p>
|
<p>The issue was solved, though I do not know why it worked this time. I just copied the README from a test space and pasted it in. Maybe the additional line <code>Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference</code> solved the issue.</p>
<p>In any case, the issue was resolved by simply using the following content for the README.md:</p>
<pre><code class="lang-auto">
---
title: Test
emoji: ⚡
colorFrom: pink
colorTo: blue
sdk: static
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
</code></pre>
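<p>If pushing over git keeps failing, one alternative is to upload the file through the huggingface_hub API instead; a sketch, assuming you are logged in and with a placeholder repo id (note the required README.md casing):</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import HfApi

api = HfApi()  # assumes prior `huggingface-cli login`
api.upload_file(
    path_or_fileobj="README.md",         # local file with the front matter above
    path_in_repo="README.md",            # case matters: must be README.md
    repo_id="your-username/your-space",  # placeholder
    repo_type="space",
)
</code></pre>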
|
Synchronizing State, Trainer and Accelerate
|
https://discuss.huggingface.co/t/synchronizing-state-trainer-and-accelerate/156255
| 156,255
| 18
|
2025-05-22T01:25:10.935000Z
|
[
{
"id": 223406,
"name": "Don B",
"username": "donb",
"avatar_template": "/user_avatar/discuss.huggingface.co/donb/{size}/3744_2.png",
"created_at": "2025-05-22T01:25:10.993Z",
"cooked": "<p>Using Trainer, and it appears that if I load any class from accelerate, the Trainer doesn’t perform its accelerate magic behind the scenes, meaning I get an error like this:</p>\n<pre><code class=\"lang-auto\">[rank1]: File \"/opt/code/repos/MyProject/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py\", line 5779, in caching_allocator_warmup\n[rank1]: re.compile(\"|\".join([re.escape(plan) for plan in model._tp_plan]))\n[rank1]: ^^^^^^^^^^^^^^\n[rank1]: TypeError: 'NoneType' object is not iterable\n</code></pre>\n<p>I have two use cases where I’d like slightly more control:</p>\n<ol>\n<li>\n<p>My script creates a directory with a timestamp, and there is a synchronization issue that creates two checkpoint directories, one for each GPU.</p>\n</li>\n<li>\n<p>I load two models, the second attempt to load it always fails with this error. It appears that once the Trainer/TrainingArguments go out of scope, the accelerate process is torn down and doesn’t get reinitialized.</p>\n</li>\n</ol>\n<p>How can I take more control of the process? Is there a way to manually manage accelerate with the Trainer and TrainingArguments objects? How about synchronization primitives: something that allows a function to run on the main process before forking to the subprocesses? I tried the decorators, but they cause the Trainer code to crash with the same error.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-22T01:25:41.191Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 46,
"reads": 6,
"readers_count": 5,
"score": 226,
"yours": false,
"topic_id": 156255,
"topic_slug": "synchronizing-state-trainer-and-accelerate",
"display_username": "Don B",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 5859,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/synchronizing-state-trainer-and-accelerate/156255/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223572,
"name": "Don B",
"username": "donb",
"avatar_template": "/user_avatar/discuss.huggingface.co/donb/{size}/3744_2.png",
"created_at": "2025-05-22T16:45:23.597Z",
"cooked": "<p>I have worked around this issue by modifying caching_allocator_warmup to set the tp_plan_regex to None if in addition to <code>if _torch_distributed_available and torch.distributed.is_initialized()</code> it checks if <code>model._tp_plan</code> is valid:<br>\n<code>if _torch_distributed_available and torch.distributed.is_initialized() and hasattr(model, '_tp_plan') and model._tp_plan is not None</code>.</p>\n<p>This prevents the failure and ddp is working correctly across multiple invocations inside the Trainers.</p>\n<p>I don’t know the implications of this _tp_plan modification, but my AI pair programmer suggests that when using accelerate launch and ddp, model._tp_plan should be None. (my pair programmer was not helpful in fixing this naturally - no impactful suggestions). If I understood it better I would create an issue and submit a pull request. For now, I will just monkeypatch it.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-22T16:45:23.597Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 20.8,
"yours": false,
"topic_id": 156255,
"topic_slug": "synchronizing-state-trainer-and-accelerate",
"display_username": "Don B",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 5859,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/synchronizing-state-trainer-and-accelerate/156255/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223573,
"name": "Don B",
"username": "donb",
"avatar_template": "/user_avatar/discuss.huggingface.co/donb/{size}/3744_2.png",
"created_at": "2025-05-22T16:47:29.131Z",
"cooked": "<p>Also noting that the few issues I’ve found related to the iteration over a None _tp_plan is the model’s fault and addressable through proper _post_init usage. This seems like a brittle solution and one that won’t scale across all the sources for custom models.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-22T16:47:29.131Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 15.8,
"yours": false,
"topic_id": 156255,
"topic_slug": "synchronizing-state-trainer-and-accelerate",
"display_username": "Don B",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 5859,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/synchronizing-state-trainer-and-accelerate/156255/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 5859,
"username": "donb",
"name": "Don B",
"avatar_template": "/user_avatar/discuss.huggingface.co/donb/{size}/3744_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 223634,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-23T04:48:23.208Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-23T04:48:23.208Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 5.6,
"yours": false,
"topic_id": 156255,
"topic_slug": "synchronizing-state-trainer-and-accelerate",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/synchronizing-state-trainer-and-accelerate/156255/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m using Trainer, and it appears that if I load any class from accelerate, the Trainer doesn’t perform its accelerate magic behind the scenes, and I get an error like this:</p>
<pre><code class="lang-auto">[rank1]: File "/opt/code/repos/MyProject/.venv/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5779, in caching_allocator_warmup
[rank1]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank1]: ^^^^^^^^^^^^^^
[rank1]: TypeError: 'NoneType' object is not iterable
</code></pre>
<p>I have two use cases where I’d like slightly more control:</p>
<ol>
<li>
<p>My script creates a directory with a timestamp, and there is a synchronization issue that creates two checkpoint directories, one for each GPU.</p>
</li>
<li>
<p>I load two models; the attempt to load the second one always fails with this error. It appears that once the Trainer/TrainingArguments go out of scope, the accelerate process is torn down and doesn’t get reinitialized.</p>
</li>
</ol>
<p>How can I take more control of the process? Is there a way to manually manage accelerate with the Trainer and TrainingArguments objects? How about synchronization primitives: something that allows a function to run on the main process before forking to the subprocesses? I tried the decorators, but they cause the Trainer code to crash with the same error.</p>
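<p>For the directory use case, one possible pattern (a minimal sketch, not a confirmed fix - it assumes the <code>_tp_plan</code> crash is worked around as described below and that the script runs under <code>accelerate launch</code>) is to create the timestamped directory only on the main process and broadcast the path to the other ranks:</p>
<pre><code class="lang-python">import os
import time

import torch.distributed as dist
from accelerate import PartialState

# PartialState attaches to the process group created by `accelerate launch`,
# giving a shared notion of "main process" without constructing an Accelerator.
state = PartialState()

run_dir = [None]
if state.is_main_process:
    # Only rank 0 computes the timestamp and creates the directory...
    run_dir[0] = os.path.join("checkpoints", time.strftime("run-%Y%m%d-%H%M%S"))
    os.makedirs(run_dir[0], exist_ok=True)
if dist.is_available() and dist.is_initialized():
    # ...then every rank receives the same path, avoiding duplicate directories.
    dist.broadcast_object_list(run_dir, src=0)
state.wait_for_everyone()
output_dir = run_dir[0]
</code></pre>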
|
<p>I have worked around this issue by modifying <code>caching_allocator_warmup</code> so that <code>tp_plan_regex</code> is set to <code>None</code> unless, in addition to the existing <code>if _torch_distributed_available and torch.distributed.is_initialized()</code> check, <code>model._tp_plan</code> is valid:<br>
<code>if _torch_distributed_available and torch.distributed.is_initialized() and hasattr(model, '_tp_plan') and model._tp_plan is not None</code>.</p>
<p>This prevents the failure, and DDP works correctly across multiple invocations inside the Trainers.</p>
<p>I don’t know the implications of this _tp_plan modification, but my AI pair programmer suggests that when using accelerate launch and DDP, model._tp_plan should be None. (My pair programmer was not helpful in fixing this naturally - no impactful suggestions.) If I understood it better, I would create an issue and submit a pull request. For now, I will just monkeypatch it.</p>
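<p>A minimal sketch of such a monkeypatch (hypothetical: it skips the warmup under DDP instead of editing the condition in place, which should be safe since the warmup is only a load-time performance optimization, and it assumes your transformers version defines <code>caching_allocator_warmup</code> in <code>modeling_utils</code>):</p>
<pre><code class="lang-python">import torch.distributed as dist
import transformers.modeling_utils as modeling_utils

_original_warmup = modeling_utils.caching_allocator_warmup

def _safe_warmup(model, *args, **kwargs):
    # Under plain DDP, model._tp_plan is None - exactly the case where the
    # regex construction in caching_allocator_warmup raises TypeError.
    if dist.is_available() and dist.is_initialized() and getattr(model, "_tp_plan", None) is None:
        return None
    return _original_warmup(model, *args, **kwargs)

# from_pretrained resolves the name in this module at call time,
# so rebinding the module attribute is sufficient.
modeling_utils.caching_allocator_warmup = _safe_warmup
</code></pre>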
|
Can’t upload my model, stuck on “hashing”
|
https://discuss.huggingface.co/t/cant-upload-my-model-stuck-on-hashing/106539
| 106,539
| 5
|
2024-09-13T03:28:43.245000Z
|
[
{
"id": 155103,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T03:28:43.296Z",
"cooked": "<p>The title says pretty much everything. I was able to upload with a Google Colab hack, but normally, I can’t. I attached the files down below. Can anyone figure out what the deal is?</p>\n<p>I “fixed” the problem by uploading them with google colab, but I don’t like this solution. Why won’t it upload normally? Here is the colab link:</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/a/5/a5b3011d9ed4689c5ae7fafb6b661f0c273aa989.png\" class=\"site-icon\" data-dominant-color=\"F29404\" width=\"16\" height=\"16\">\n\n <a href=\"https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb\" target=\"_blank\" rel=\"noopener nofollow ugc\">colab.research.google.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <img width=\"260\" height=\"260\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/5/5b77e9737d5f8f8bc5f7b35e7fc0f8088fd1ebd8.png\" class=\"thumbnail onebox-avatar\" data-dominant-color=\"F29304\">\n\n<h3><a href=\"https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb\" target=\"_blank\" rel=\"noopener nofollow ugc\">Google Colab</a></h3>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Here is the screenshot showing the huggingface refusing to hash:</p>\n<p>And here are the files that wouldn’t hash:</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/5/057e24d68e46d506e5ee6dc8597838a3315d911f_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5F73A0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main\" target=\"_blank\" rel=\"noopener\">PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC at main</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>What’s going on?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T03:28:43.296Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 562,
"reads": 18,
"readers_count": 17,
"score": 2768.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb",
"internal": false,
"reflection": false,
"title": "Google Colab",
"clicks": 7
},
{
"url": "https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main",
"internal": false,
"reflection": false,
"title": "PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC at main",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/1",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 155107,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T03:52:10.596Z",
"cooked": "<p>I was able to upload the file normally with Firfox, am I uploading the wrong file? Is there some kind of weird environment-dependent error?</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/John6666/uploadtest\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/John6666/uploadtest\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/5/554d472355b287f26f2d72ec4862c36825027636_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5D729F\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/John6666/uploadtest\" target=\"_blank\" rel=\"noopener\">John6666/uploadtest · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T03:52:49.667Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 16,
"readers_count": 15,
"score": 23.2,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/John6666/uploadtest",
"internal": false,
"reflection": false,
"title": "John6666/uploadtest · Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 155108,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T03:53:58.653Z",
"cooked": "<p>I tried uploading with a windows virtual machine as well, and with Linux. It used to work but no longer works. This leads me to think there’s a problem on my local computer. However, uploading to google drive works just fine. Any ideas what could be wrong with my computer? I’ve tried google chrome, firefox, chromium, and microsoft edge browsers.</p>\n<p>You uploaded the right files. I just don’t get it. It must be a local problem.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T03:55:08.732Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 13,
"readers_count": 12,
"score": 17.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155109,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T03:58:46.950Z",
"cooked": "<p>In that case, it’s not your computer, it’s your ISP, or something between the CDN (I don’t know which one) that HF uses and the ISP, or something in that area.<br>\nBut since we can have a conversation on the HF forum like this, I don’t see how a normal tracert would be able to determine the cause…<br>\nAnother possibility is that HF’s file system is malfunctioning in some way.</p>\n<p>The fact that it’s reproducible is tricky. It’s not a temporary server error.<img src=\"https://emoji.discourse-cdn.com/apple/sweat.png?v=12\" title=\":sweat:\" class=\"emoji\" alt=\":sweat:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T03:58:46.950Z",
"reply_count": 1,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 13,
"readers_count": 12,
"score": 22.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155110,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T04:03:16.742Z",
"cooked": "<p>… <img src=\"https://emoji.discourse-cdn.com/apple/cold_sweat.png?v=12\" title=\":cold_sweat:\" class=\"emoji\" alt=\":cold_sweat:\" loading=\"lazy\" width=\"20\" height=\"20\"> uuuh… I don’t think I understood… I mean, I am a beginner and stuff. Basically, I’m getting that I can’t fix it UNLESS I use Google Colab, right?</p>\n<p>(I know what an ISP is, like AT&T, but not a CDN)</p>\n<p>(So… you’re saying my PC is good then, right? It’s a network problem?)</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:07:28.900Z",
"reply_count": 1,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 13,
"readers_count": 12,
"score": 22.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155111,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T04:11:03.611Z",
"cooked": "<p>No, I’m an amateur at networking too!<br>\nUsing Colab to get around it is half right as long as it works, but <strong>something is definitely wrong on the HF side or your side or both</strong>.<br>\nIf I could isolate the problem a bit more, I could send a mentions to the HF staff to let them know, but since I can’t reproduce the problem (if the above can be uploaded, that’s OK, right?) <strong>You’re the only one who can verify</strong>…</p>\n<p>If it’s the same with Linux, it’s hard to imagine, for example, that your PC has been hit by a virus. If your router was attacked by a virus, it might be possible, but I have no experience.<br>\nIf your hard disk is corrupted, Colab must not be able to help you.<br>\nIf the problem is upstream of that, you can use a VPN to bypass it, or something like that. (If you can use Colab to get around this, maybe VPN method will work?)</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:17:10.173Z",
"reply_count": 2,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 13,
"readers_count": 12,
"score": 22.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155112,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T04:17:09.004Z",
"cooked": "<p>Thanks for your help anyway. I’ll just keep this open and wait to see if anyone else gets this issue. I appreciate your help. <img src=\"https://emoji.discourse-cdn.com/apple/grinning.png?v=12\" title=\":grinning:\" class=\"emoji\" alt=\":grinning:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>(As for anyone else, who may be experiencing this issue, please comment! I know if it happened to me, it had to of happened to someone else.)</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:17:09.004Z",
"reply_count": 1,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 11,
"readers_count": 10,
"score": 12.2,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155113,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T04:19:24.177Z",
"cooked": "<p>So, I tested on my ANDROID Phone, and THAT worked! So I know it’s a problem with my computer specifically. It has to be.</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:19:24.177Z",
"reply_count": 0,
"reply_to_post_number": 6,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 9,
"readers_count": 8,
"score": 1.8,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155114,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T04:19:57.812Z",
"cooked": "<blockquote>\n<p>I know if it happened to me, it had to of happened to someone else.</p>\n</blockquote>\n<p>Exactly.</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:19:57.812Z",
"reply_count": 0,
"reply_to_post_number": 7,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 9,
"readers_count": 8,
"score": 6.8,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155115,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T04:22:56.799Z",
"cooked": "<blockquote>\n<p>So I know it’s a problem with my computer specifically. It has to be.</p>\n</blockquote>\n<p>Good! (Not good)<br>\nI wonder what the problem is… is the LAN port broken? Is the cable torn? If you <strong>didn’t connect your Android to Wi-Fi</strong> and it worked, maybe your ISP is denying access to HF file server?</p>",
"post_number": 10,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:22:56.799Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 9,
"readers_count": 8,
"score": 11.8,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/10",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 155116,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T04:24:38.925Z",
"cooked": "<p>I have access to every single website on my computer and android. The only difference is huggingface. Both android and my computer are connected to the same wifi network. It’s weird, everything else in my PC is working just great, including online games. Therefore, I know it’s not my ISP.</p>",
"post_number": 11,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:25:24.384Z",
"reply_count": 1,
"reply_to_post_number": 10,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/11",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155118,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T04:30:35.118Z",
"cooked": "<p>Surely that would mean a PC problem, but what in the world are the possibilities…?<br>\nIf it’s a hardware problem, online games won’t work, and if it’s a software problem, why not even in a Linux environment?<br>\nI get it, but there’s more I don’t understand. Well, have you almost succeeded in isolating the problem?</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:30:35.118Z",
"reply_count": 1,
"reply_to_post_number": 11,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/12",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155120,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-13T04:40:54.001Z",
"cooked": "<p>Nope. No idea what now. I just know it’s my own PC that’s the issue. That’s all I know. But it’s not a browser issue since other browsers don’t work either!</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T04:40:54.001Z",
"reply_count": 1,
"reply_to_post_number": 12,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 6.4,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/13",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 155134,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-13T06:31:17.979Z",
"cooked": "<p>I was thinking vaguely about it while working on my own, but I couldn’t come up with anything!</p>\n<p>If the PC is also connected via Wi-Fi, the only thing I can think of is that maybe the PC has some special designation in the router settings (you need it sometimes for internet games or something), or maybe the PC’s Wi-Fi adapter is in bad shape or has a bad setting. It’s not impossible, since <strong>smartphones are often a newer generation and more powerful when it comes to Wi-Fi</strong>.<br>\nThe easy way to test if this is the cause is to <strong>plug the LAN cable from the router directly into the PC</strong>, but that’s a pain if you don’t have a cable at home.</p>",
"post_number": 14,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-13T06:31:17.979Z",
"reply_count": 1,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 30,
"reads": 6,
"readers_count": 5,
"score": 156.2,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/error-uploading-model-using-website-drag-and-drop-interface/76071/5",
"internal": true,
"reflection": true,
"title": "Error uploading model using website drag and drop interface",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/14",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 158987,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2024-09-29T22:46:39.854Z",
"cooked": "<p>Thanks for the advice, but unfortunately it still didn’t work. I plugged in my ethernet cable, and tried uploading, same problem.</p>\n<p>I think there’s a security issue on Huggingface’s side. Because I can upload to ANY other website just fine. Even my college</p>\n<p>I made this repository until Huggingface manages to fix the problem:</p>\n<aside class=\"onebox githubfolder\" data-onebox-src=\"https://github.com/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/tree/main\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <h3><a href=\"https://github.com/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">GitHub - PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab:...</a></h3>\n\n <p><a href=\"https://github.com/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/tree/main\" target=\"_blank\" rel=\"noopener nofollow ugc\">main</a></p>\n\n <p><span class=\"label1\">Huggingface has a problem with uploading files, so I made this repository to easily upload files. I don't know what the problem with huggingface is. I plan to create a forum to ask for help. - ...</span></p>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 15,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-29T23:27:06.161Z",
"reply_count": 1,
"reply_to_post_number": 14,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 11,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/tree/main",
"internal": false,
"reflection": false,
"title": "GitHub - PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab: Huggingface has a problem with uploading files, so I made this repository to easily upload files. I don't know what the problem with huggingface is. I plan to create a forum to ask for ",
"clicks": 4
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/15",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 158989,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-29T23:38:09.114Z",
"cooked": "<blockquote>\n<p>I think there’s a security issue on Huggingface’s side</p>\n</blockquote>\n<p>That’s what I thought, too, but then how does HF pinpoint the restriction to <strong>just your PC</strong>, even if it’s not intentional?</p>\n<p>First of all, if they’re regulating by account, it shouldn’t even be via Colab.<br>\nIf they’re regulating by IP, then it wouldn’t work via Android Wi-Fi either.<br>\nEven the MAC address of the PC changed when you plugged in the ethernet cable, so it’s a bit odd to make this a combined problem with your router. Your router must think your PC is a different person than it was before.</p>\n<p>UA may be there because the whole browser industry has changed recently so that it doesn’t change when you change browsers. It does indeed change between Android and PC. But I’ve never heard of pristine IP + UA restrictions in HF.</p>\n<p>There was a problem with frequent 500 errors on HF, but it was resolved by the HF staff, so this is probably not the cause of the current problem either.</p>\n<p><a class=\"mention\" href=\"/u/not-lain\">@not-lain</a> <a class=\"mention\" href=\"/u/nielsr\">@nielsr</a> Do you know anything about it?</p>",
"post_number": 16,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-09-29T23:38:09.114Z",
"reply_count": 1,
"reply_to_post_number": 15,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/16",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64378,
"username": "PhoenixStormJr",
"name": "Phoenix Storm Jr.",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 159290,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-10-01T08:57:34.719Z",
"cooked": "<p>If it’s just one person, you can put it away as a coincidence, but when it’s multiple people, it’s a little suspect. Is it really a problem with the user’s connection?</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/SG161222/RealFlux_1.0b_Dev\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/SG161222/RealFlux_1.0b_Dev\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/1/5195c7eb4437c174d4df7038f0094a4a8093e60b_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5E72A0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/SG161222/RealFlux_1.0b_Dev\" target=\"_blank\" rel=\"noopener\">SG161222/RealFlux_1.0b_Dev · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p>I encountered a problem with uploading the model to HF (my internet connection has been unstable lately). Once I resolve it, the model will be available on HF.</p>\n</blockquote>",
"post_number": 17,
"post_type": 1,
"posts_count": 20,
"updated_at": "2024-10-01T08:57:34.719Z",
"reply_count": 1,
"reply_to_post_number": 16,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 5.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/SG161222/RealFlux_1.0b_Dev",
"internal": false,
"reflection": false,
"title": "SG161222/RealFlux_1.0b_Dev · Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/17",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 223193,
"name": "Phoenix Storm Jr.",
"username": "PhoenixStormJr",
"avatar_template": "/user_avatar/discuss.huggingface.co/phoenixstormjr/{size}/31552_2.png",
"created_at": "2025-05-20T20:49:22.773Z",
"cooked": "<p>FINAL UPDATE…</p>\n<p>I tested something more in depth. The problem is, I can’t upload files LARGER than 10 Megabytes!</p>\n<p>I used THIS python script to create dummy files:</p>\n<h1><a name=\"p-223193-create-a-file-of-05-mb-filled-with-the-character-0-1\" class=\"anchor\" href=\"#p-223193-create-a-file-of-05-mb-filled-with-the-character-0-1\"></a>Create a file of 0.5 MB filled with the character ‘0’</h1>\n<p>import os<br>\nos.chdir(os.path.dirname(os.path.abspath(<strong>file</strong>)))</p>\n<p><span class=\"hashtag-raw\">#zeros</span> = 524259 # 900 MB<br>\n<span class=\"hashtag-raw\">#zeros</span> = 524317 # also 900 MB<br>\ncomment=“”\"<br>\nx = 1800<br>\nfile_size = zeros * x<br>\nfile_name = 0.5 * x<br>\nwith open(f\"{str(file_name)} mb.txt\", “w”) as f:<br>\nf.write(“0” * file_size)<br>\nx = x + 1<br>\n“”\"<br>\n<span class=\"hashtag-raw\">#print</span>(f\"zeros = {round((524259+524317)/2)}\")</p>\n<p>zeros = 524288<br>\nx = 1<br>\nwhile(x < 201):<br>\nfile_size = zeros * x<br>\nfile_name = 0.5 * x<br>\nwith open(f\"{str(file_name)} mb.txt\", “w”) as f:<br>\nf.write(“0” * file_size)<br>\nx = x + 1</p>\n<p>print(“Files created: (size) mb.txt (0.5 MB of zeros incrementals)”)</p>\n<p>the 10.5 MB file BROKE it, but the 10 MB file WORKED!</p>\n<p>THAT MEANS THE PROBLEM IS DIRECTLY ON THEIR END, SOME PIECE OF CODE SAYS:</p>\n<p>if(filesize > 10 MB):<br>\ndo something<br>\nelse:<br>\ndo something different</p>\n<p>It’s NOT my computer, it’s some glitch in THEIR system. something above 10 MB breaks it for some reason!</p>\n<p>Oh well, I use git on Google Colab anyway. No big deal I guess…</p>\n<p>My proof:</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/PhoenixStormJr/test-upload-length/tree/main\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/PhoenixStormJr/test-upload-length/tree/main\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/1/91afe5a6a4b4a4fbc8c01cdb39f3ce051ded9c7c_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5B70A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/PhoenixStormJr/test-upload-length/tree/main\" target=\"_blank\" rel=\"noopener\">PhoenixStormJr/test-upload-length at main</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>I also found documentation here:</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/huggingface_hub/v0.17.1/en/guides/upload#hub-repository-size-limitations\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/huggingface_hub/v0.17.1/en/guides/upload#hub-repository-size-limitations\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/e/cef3cd647e391927031467dbcde7613c74193f5f_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F1EFE9\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/huggingface_hub/v0.17.1/en/guides/upload#hub-repository-size-limitations\" 
target=\"_blank\" rel=\"noopener\">Upload files to the Hub</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Git LFS automatically handles files larger than 10MB. But for very large files (>5GB), you need to install a custom transfer agent for Git LFS:</p>\n<p>Copied</p>\n<p>huggingface-cli lfs-enable-largefiles</p>\n<p>You should install this for each repository that has a very large file. Once installed, you’ll be able to push files larger than 5GB.</p>\n<h3><a name=\"p-223193-commit-context-manager-2\" class=\"anchor\" href=\"#p-223193-commit-context-manager-2\"></a>commit context manager</h3>\n<p>The <code>commit</code> context manager handles four of the most common Git commands: pull, add, commit, and push. <code>git-lfs</code> automatically tracks any file larger than 10MB. In the following example, the <code>commit</code> context manager:</p>\n<p>That SPECIFIC number is mentioned here.</p>",
"post_number": 18,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-05-20T20:59:23.690Z",
"reply_count": 0,
"reply_to_post_number": 17,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 3,
"readers_count": 2,
"score": 25.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "Phoenix Storm Jr.",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/huggingface_hub/v0.17.1/en/guides/upload#hub-repository-size-limitations",
"internal": false,
"reflection": false,
"title": "Upload files to the Hub",
"clicks": 1
},
{
"url": "https://huggingface.co/PhoenixStormJr/test-upload-length/tree/main",
"internal": false,
"reflection": false,
"title": "PhoenixStormJr/test-upload-length at main",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64378,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/18",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 223239,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-21T05:17:32.005Z",
"cooked": "<p>Hmm… It seems to be a bug on the Hub side related to LFS…<img src=\"https://emoji.discourse-cdn.com/apple/anxious_face_with_sweat.png?v=14\" title=\":anxious_face_with_sweat:\" class=\"emoji\" alt=\":anxious_face_with_sweat:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>In a Windows environment, the explanation is simple: you need to install LFS <strong>and git itself</strong> using the installer, but I don’t think that’s the case here.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://git-lfs.com/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/f/1/f16572aa053992106b3ae7b3792264219531fd73.png\" class=\"site-icon\" data-dominant-color=\"DE4130\" width=\"48\" height=\"48\">\n\n <a href=\"https://git-lfs.com/\" target=\"_blank\" rel=\"noopener\">Git Large File Storage</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:262/500;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/5/6591624baacb3d731d5b5f5fe3259e07eb8f9b28_2_690x362.png\" class=\"thumbnail\" data-dominant-color=\"E4E2DA\" width=\"690\" height=\"362\"></div>\n\n<h3><a href=\"https://git-lfs.com/\" target=\"_blank\" rel=\"noopener\">Git Large File Storage</a></h3>\n\n <p>Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://git-scm.com/downloads/win\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/5/4/54bfe79549d01fbf460686e6300d86f9480651bb.png\" class=\"site-icon\" data-dominant-color=\"F64D27\" width=\"32\" height=\"32\">\n\n <a href=\"https://git-scm.com/downloads/win\" target=\"_blank\" rel=\"noopener\">git-scm.com</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://git-scm.com/downloads/win\" target=\"_blank\" rel=\"noopener\">Git - Downloading Package</a></h3>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 19,
"post_type": 1,
"posts_count": 20,
"updated_at": "2025-05-21T05:17:32.005Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://git-scm.com/downloads/win",
"internal": false,
"reflection": false,
"title": "Git - Downloading Package",
"clicks": 0
},
{
"url": "https://git-lfs.com/",
"internal": false,
"reflection": false,
"title": "Git Large File Storage | Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/19",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223604,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-23T00:14:02.304Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 20,
"post_type": 3,
"posts_count": 20,
"updated_at": "2025-05-23T00:14:02.304Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 106539,
"topic_slug": "cant-upload-my-model-stuck-on-hashing",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/cant-upload-my-model-stuck-on-hashing/106539/20",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>The title says pretty much everything. I was able to upload with a Google Colab hack, but normally, I can’t. I attached the files down below. Can anyone figure out what the deal is?</p>
<p>I “fixed” the problem by uploading them with Google Colab, but I don’t like this solution. Why won’t it upload normally? Here is the Colab link:</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/a/5/a5b3011d9ed4689c5ae7fafb6b661f0c273aa989.png" class="site-icon" data-dominant-color="F29404" width="16" height="16">
<a href="https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb" target="_blank" rel="noopener nofollow ugc">colab.research.google.com</a>
</header>
<article class="onebox-body">
<img width="260" height="260" src="https://us1.discourse-cdn.com/hellohellohello/original/2X/5/5b77e9737d5f8f8bc5f7b35e7fc0f8088fd1ebd8.png" class="thumbnail onebox-avatar" data-dominant-color="F29304">
<h3><a href="https://colab.research.google.com/github/PhoenixStormJr/Upload-File-To-Huggingface-With-Google-Colab/blob/main/Upload_File_To_Huggingface.ipynb" target="_blank" rel="noopener nofollow ugc">Google Colab</a></h3>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Here is the screenshot showing Hugging Face refusing to hash:</p>
<p>And here are the files that wouldn’t hash:</p>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main">
<header class="source">
<a href="https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/5/057e24d68e46d506e5ee6dc8597838a3315d911f_2_690x372.png" class="thumbnail" data-dominant-color="5F73A0" width="690" height="372"></div>
<h3><a href="https://huggingface.co/PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC/tree/main" target="_blank" rel="noopener">PhoenixStormJr/Megaman-NT-Warrior-Aki-RVC at main</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>What’s going on?</p>
|
<p>Hmm… It seems to be a bug on the Hub side related to LFS…<img src="https://emoji.discourse-cdn.com/apple/anxious_face_with_sweat.png?v=14" title=":anxious_face_with_sweat:" class="emoji" alt=":anxious_face_with_sweat:" loading="lazy" width="20" height="20"></p>
<p>In a Windows environment, the explanation is simple: you need to install LFS <strong>and git itself</strong> using the installer, but I don’t think that’s the case here.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://git-lfs.com/">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/f/1/f16572aa053992106b3ae7b3792264219531fd73.png" class="site-icon" data-dominant-color="DE4130" width="48" height="48">
<a href="https://git-lfs.com/" target="_blank" rel="noopener">Git Large File Storage</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:262/500;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/6/5/6591624baacb3d731d5b5f5fe3259e07eb8f9b28_2_690x362.png" class="thumbnail" data-dominant-color="E4E2DA" width="690" height="362"></div>
<h3><a href="https://git-lfs.com/" target="_blank" rel="noopener">Git Large File Storage</a></h3>
<p>Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://git-scm.com/downloads/win">
<header class="source">
<img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/5/4/54bfe79549d01fbf460686e6300d86f9480651bb.png" class="site-icon" data-dominant-color="F64D27" width="32" height="32">
<a href="https://git-scm.com/downloads/win" target="_blank" rel="noopener">git-scm.com</a>
</header>
<article class="onebox-body">
<h3><a href="https://git-scm.com/downloads/win" target="_blank" rel="noopener">Git - Downloading Package</a></h3>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
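<p>If reinstalling git and git-lfs doesn’t help, a minimal sketch of the Colab-style workaround using the <code>huggingface_hub</code> Python API instead of git, which uploads over HTTP and sidesteps the local hashing step entirely. This assumes you are logged in with a write token (e.g. via <code>huggingface-cli login</code>); the file name and repo id below are just the dummy file and test repo from this thread:</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`

# Upload one of the dummy files directly over HTTP (no local git-lfs involved)
api.upload_file(
    path_or_fileobj="10.5 mb.txt",
    path_in_repo="10.5 mb.txt",
    repo_id="PhoenixStormJr/test-upload-length",
)
</code></pre>
<p>If this succeeds where the web uploader fails on the same file, that further points at the browser upload path rather than the file itself.</p>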
|
How to organize hundreds of pre-trained models
|
https://discuss.huggingface.co/t/how-to-organize-hundreds-of-pre-trained-models/42682
| 42,682
| 5
|
2023-06-09T16:37:47.869000Z
|
[
{
"id": 73328,
"name": "Adam Stewart",
"username": "ajstewart",
"avatar_template": "/user_avatar/discuss.huggingface.co/ajstewart/{size}/47937_2.png",
"created_at": "2023-06-09T16:37:47.925Z",
"cooked": "<p>We (<a href=\"http://hf.co/torchgeo\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">torchgeo (TorchGeo)</a>) are working on a project that will generate 100+ pre-trained models. In the past, we’ve made a separate repository for each model, but with 100+ models we’ve started to wonder whether or not it would make more sense to stuff all of our models in a few repos instead of having 100+ separate repos. What features or functionality would we lose by doing so? Our users primarily load weights through the TorchGeo library (using timm or smp) and don’t even know that HF exists, it’s just the place we chose to distribute the files.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2023-06-09T16:37:47.925Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 60,
"reads": 12,
"readers_count": 11,
"score": 332.4,
"yours": false,
"topic_id": 42682,
"topic_slug": "how-to-organize-hundreds-of-pre-trained-models",
"display_username": "Adam Stewart",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://hf.co/torchgeo",
"internal": false,
"reflection": false,
"title": "torchgeo (TorchGeo)",
"clicks": 2
},
{
"url": "https://discuss.huggingface.co/t/how-to-handle-very-large-datasets/42686",
"internal": true,
"reflection": true,
"title": "How to handle very large datasets",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 21698,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-organize-hundreds-of-pre-trained-models/42682/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223270,
"name": "Lucain Pouget",
"username": "Wauplin",
"avatar_template": "/user_avatar/discuss.huggingface.co/wauplin/{size}/40815_2.png",
"created_at": "2025-05-21T07:21:38.516Z",
"cooked": "<p>Late to the party, but it’s always recommended to do 1 pretrained model == 1 repo. It allows to have a download counter per model (allowing you to know which models are getting more traction), better discoverability for users on the Hub, dedicated community tabs per variant, etc.</p>\n<p>(related: <a href=\"https://github.com/huggingface/huggingface.js/pull/1464#discussion_r2098481444\" class=\"inline-onebox\">Add TorchGeo to libraries by isaaccorley · Pull Request #1464 · huggingface/huggingface.js · GitHub</a>)</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-21T07:21:38.516Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 36.2,
"yours": false,
"topic_id": 42682,
"topic_slug": "how-to-organize-hundreds-of-pre-trained-models",
"display_username": "Lucain Pouget",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/huggingface.js/pull/1464#discussion_r2098481444",
"internal": false,
"reflection": false,
"title": "Add TorchGeo to libraries by isaaccorley · Pull Request #1464 · huggingface/huggingface.js · GitHub",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 9207,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-organize-hundreds-of-pre-trained-models/42682/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223372,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-21T19:21:51.055Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-21T19:21:51.055Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 42682,
"topic_slug": "how-to-organize-hundreds-of-pre-trained-models",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-organize-hundreds-of-pre-trained-models/42682/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>We (<a href="http://hf.co/torchgeo" class="inline-onebox" rel="noopener nofollow ugc">torchgeo (TorchGeo)</a>) are working on a project that will generate 100+ pre-trained models. In the past, we’ve made a separate repository for each model, but with 100+ models we’ve started to wonder whether or not it would make more sense to stuff all of our models in a few repos instead of having 100+ separate repos. What features or functionality would we lose by doing so? Our users primarily load weights through the TorchGeo library (using timm or smp) and don’t even know that HF exists, it’s just the place we chose to distribute the files.</p>
|
<p>Late to the party, but it’s always recommended to do 1 pretrained model == 1 repo. It gives you a download counter per model (letting you know which models are getting more traction), better discoverability for users on the Hub, dedicated community tabs per variant, etc.</p>
<p>(related: <a href="https://github.com/huggingface/huggingface.js/pull/1464#discussion_r2098481444" class="inline-onebox">Add TorchGeo to libraries by isaaccorley · Pull Request #1464 · huggingface/huggingface.js · GitHub</a>)</p>
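<p>As a rough illustration of that layout, a minimal sketch that programmatically creates one Hub repo per checkpoint with <code>huggingface_hub</code>; the checkpoint names and file paths here are hypothetical, and a token is assumed to be available via <code>huggingface-cli login</code> or <code>HF_TOKEN</code>:</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import HfApi

api = HfApi()

# Hypothetical mapping of model names to local weight files
checkpoints = {
    "resnet50-sentinel2": "weights/resnet50_sentinel2.pth",
    "vit-small-naip": "weights/vit_small_naip.pth",
}

for name, path in checkpoints.items():
    repo_id = f"torchgeo/{name}"              # 1 pretrained model == 1 repo
    api.create_repo(repo_id, exist_ok=True)   # no-op if the repo already exists
    api.upload_file(path_or_fileobj=path, path_in_repo="model.pth", repo_id=repo_id)
</code></pre>
<p>Each repo then gets its own download counter and community tab, as described above.</p>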
|
How to iterate over values of a column in the IterableDataset?
|
https://discuss.huggingface.co/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649
| 135,649
| 10
|
2025-01-14T11:33:40.731000Z
|
[
{
"id": 195452,
"name": "Svyatoslav V. Pchelintsev",
"username": "Innovator2K",
"avatar_template": "/user_avatar/discuss.huggingface.co/innovator2k/{size}/38148_2.png",
"created_at": "2025-01-14T11:33:40.784Z",
"cooked": "<p>Suppose we have a simple iterable dataset from the <a href=\"https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.from_generator\">documentation</a>:</p>\n<pre><code class=\"lang-auto\">def gen():\n yield {\"text\": \"Good\", \"label\": 0}\n yield {\"text\": \"Bad\", \"label\": 1}\n\nds = IterableDataset.from_generator(gen)\n</code></pre>\n<p>and suppose I want to iterate over the <code>\"text\"</code> column values. An obvious solution can be the following:</p>\n<pre><code class=\"lang-auto\">column_values_only_ds = map(lambda x: x[\"text\"], ds)\n</code></pre>\n<p>But the problem with this solution is that <code>map</code> is not an iterable, i.e., it cannot be re-iterated:</p>\n<pre><code class=\"lang-auto\">for v in column_values_only_ds:\n print(v) # Prints \"Good\" and \"Bad\"\nfor v in column_values_only_ds:\n print(v) # Prints nothing\n</code></pre>\n<p>So, how can I create an <strong>iterable</strong> that returns only column values?</p>\n<p>P.S. I’m building a single interface for running experiments with different models and, e.g., FastText requires only lists of strings, not dictionaries.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-01-14T11:33:40.784Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 74,
"reads": 10,
"readers_count": 9,
"score": 367,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "Svyatoslav V. Pchelintsev",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.from_generator",
"internal": false,
"reflection": false,
"title": "Main classes",
"clicks": 2
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35404,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 195465,
"name": "Alan turner",
"username": "Alanturner2",
"avatar_template": "/user_avatar/discuss.huggingface.co/alanturner2/{size}/37542_2.png",
"created_at": "2025-01-14T13:10:11.600Z",
"cooked": "<p>Hi there! <img src=\"https://emoji.discourse-cdn.com/apple/blush.png?v=12\" title=\":blush:\" class=\"emoji\" alt=\":blush:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>If you want to iterate over just the <code>\"text\"</code> column in your <code>IterableDataset</code> and make sure it can be re-iterated (unlike <code>map</code>), you can use a <strong>generator function</strong>. This way, you’ll always get a fresh iterable whenever you need it.</p>\n<p>Here’s how you can do it:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from datasets import IterableDataset\n\n# Your original dataset generator\ndef gen():\n yield {\"text\": \"Good\", \"label\": 0}\n yield {\"text\": \"Bad\", \"label\": 1}\n\nds = IterableDataset.from_generator(gen)\n\n# A function to pull only the \"text\" values\ndef extract_text_column(dataset):\n for item in dataset:\n yield item[\"text\"]\n\n# A callable that gives you a fresh iterator each time\ncolumn_values_only_ds = lambda: extract_text_column(ds)\n\n# Now, let's iterate over the \"text\" column\nfor v in column_values_only_ds():\n print(v) # Prints \"Good\" and \"Bad\"\n\n# You can do it again without issues!\nfor v in column_values_only_ds():\n print(v) # Prints \"Good\" and \"Bad\" again\n</code></pre>\n<ul>\n<li><strong>Generator Function</strong>: <code>extract_text_column(dataset)</code> is like a recipe to grab just the <code>\"text\"</code> values one at a time.</li>\n<li><strong>Fresh Start</strong>: Each time you call <code>column_values_only_ds()</code>, it gives you a brand-new iterator. So, no matter how many times you loop, it works!</li>\n<li><strong>Simple and Reusable</strong>: This makes it super handy if you’re building experiments or pipelines where re-iteration matters.</li>\n</ul>\n<p>I hope this clears things up and helps you with your project. Feel free to reach out if you have more questions. Happy coding! <img src=\"https://emoji.discourse-cdn.com/apple/rocket.png?v=12\" title=\":rocket:\" class=\"emoji\" alt=\":rocket:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-01-14T13:10:11.600Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9,
"reads": 10,
"readers_count": 9,
"score": 67,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "Alan turner",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 76958,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 195471,
"name": "Svyatoslav V. Pchelintsev",
"username": "Innovator2K",
"avatar_template": "/user_avatar/discuss.huggingface.co/innovator2k/{size}/38148_2.png",
"created_at": "2025-01-14T14:07:15.863Z",
"cooked": "<p>Thank you for the answer!</p>\n<p>While this works, it loses the functionality of the <code>IterableDataset</code> (its methods and attributes are no longer accessible), so I hoped for a built in <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=12\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\">Datasets solution, but your answer suggests that there is no such functionality. OK.</p>\n<p>By the way, something like this should also work:</p>\n<pre><code class=\"lang-auto\">class IterableDatasetColumnGetter:\n def __init__(self, dataset: IterableDataset, column_name: str) -> None:\n self.dataset = dataset\n self.column_name = column_name\n\n def __iter__(self) -> Iterator:\n return iter(map(lambda x: x[self.column_name], self.dataset))\n\niterable_column_values_only_ds = IterableDatasetColumnGetter(ds, \"text\")\n\nfor v in iterable_column_values_only_ds:\n print(v) # Prints \"Good\" and \"Bad\"\n\nfor v in iterable_column_values_only_ds:\n print(v) # Prints \"Good\" and \"Bad\" again\n</code></pre>\n<p>but again it looks like it is not a good solution due to the loss of the original functionality.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-01-14T14:11:01.305Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 10,
"readers_count": 9,
"score": 42,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "Svyatoslav V. Pchelintsev",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 35404,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76958,
"username": "Alanturner2",
"name": "Alan turner",
"avatar_template": "/user_avatar/discuss.huggingface.co/alanturner2/{size}/37542_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 195574,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-01-15T02:07:22.561Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-01-15T02:07:22.561Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 8,
"readers_count": 7,
"score": 6.6,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
},
{
"id": 198129,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-01-27T10:42:47.008Z",
"cooked": "<p>Hi ! Could it be interesting to implement a IterableColumn ? What do you think of something like this ?</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">def gen():\n yield {\"text\": \"Good\", \"label\": 0}\n yield {\"text\": \"Bad\", \"label\": 1}\n\nds = IterableDataset.from_generator(gen)\ntexts = ds[\"text\"] # `texts` is an IterableColumn object\n\nfor v in texts:\n print(v)\n</code></pre>\n<p>If you like this API, feel free to suggest it in an issue on <a href=\"https://github.com/huggingface/datasets\">gtihub</a> or open a PR <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=12\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-01-27T10:42:47.008Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 9,
"readers_count": 8,
"score": 46.8,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets",
"internal": false,
"reflection": false,
"title": "GitHub - huggingface/datasets: 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 223121,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-05-20T11:13:15.186Z",
"cooked": "<p>Hi ! it’s now possible to iterate on a column directly, thanks <a class=\"mention\" href=\"/u/innovator2k\">@Innovator2K</a> !</p>\n<p>The PR is here <a href=\"https://github.com/huggingface/datasets/pull/7564\" class=\"inline-onebox\">Implementation of iteration over values of a column in an IterableDataset object by TopCoder2K · Pull Request #7564 · huggingface/datasets · GitHub</a> and the feature will be available in the next release <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">>>> from datasets import load_dataset\n>>> dataset = load_dataset(\"allenai/c4\", \"en\", streaming=True, split=\"train\")\n>>> print(next(iter(dataset[\"text\"])))\nBeginners BBQ Class Taking Place in Missoula!...\n</code></pre>",
"post_number": 6,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-20T11:13:15.186Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 31.2,
"yours": false,
"topic_id": 135649,
"topic_slug": "how-to-iterate-over-values-of-a-column-in-the-iterabledataset",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/pull/7564",
"internal": false,
"reflection": false,
"title": "Implementation of iteration over values of a column in an IterableDataset object by TopCoder2K · Pull Request #7564 · huggingface/datasets · GitHub",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649/6",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Suppose we have a simple iterable dataset from the <a href="https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.from_generator">documentation</a>:</p>
<pre><code class="lang-auto">def gen():
yield {"text": "Good", "label": 0}
yield {"text": "Bad", "label": 1}
ds = IterableDataset.from_generator(gen)
</code></pre>
<p>and suppose I want to iterate over the <code>"text"</code> column values. An obvious solution can be the following:</p>
<pre><code class="lang-auto">column_values_only_ds = map(lambda x: x["text"], ds)
</code></pre>
<p>But the problem with this solution is that <code>map</code> is not an iterable, i.e., it cannot be re-iterated:</p>
<pre><code class="lang-auto">for v in column_values_only_ds:
print(v) # Prints "Good" and "Bad"
for v in column_values_only_ds:
print(v) # Prints nothing
</code></pre>
<p>So, how can I create an <strong>iterable</strong> that returns only column values?</p>
<p>P.S. I’m building a single interface for running experiments with different models and, e.g., FastText requires only lists of strings, not dictionaries.</p>
|
<p>Hi there! <img src="https://emoji.discourse-cdn.com/apple/blush.png?v=12" title=":blush:" class="emoji" alt=":blush:" loading="lazy" width="20" height="20"></p>
<p>If you want to iterate over just the <code>"text"</code> column in your <code>IterableDataset</code> and make sure it can be re-iterated (unlike <code>map</code>), you can use a <strong>generator function</strong>. This way, you’ll always get a fresh iterable whenever you need it.</p>
<p>Here’s how you can do it:</p>
<pre data-code-wrap="python"><code class="lang-python">from datasets import IterableDataset
# Your original dataset generator
def gen():
yield {"text": "Good", "label": 0}
yield {"text": "Bad", "label": 1}
ds = IterableDataset.from_generator(gen)
# A function to pull only the "text" values
def extract_text_column(dataset):
for item in dataset:
yield item["text"]
# A callable that gives you a fresh iterator each time
column_values_only_ds = lambda: extract_text_column(ds)
# Now, let's iterate over the "text" column
for v in column_values_only_ds():
print(v) # Prints "Good" and "Bad"
# You can do it again without issues!
for v in column_values_only_ds():
print(v) # Prints "Good" and "Bad" again
</code></pre>
<ul>
<li><strong>Generator Function</strong>: <code>extract_text_column(dataset)</code> is like a recipe to grab just the <code>"text"</code> values one at a time.</li>
<li><strong>Fresh Start</strong>: Each time you call <code>column_values_only_ds()</code>, it gives you a brand-new iterator. So, no matter how many times you loop, it works!</li>
<li><strong>Simple and Reusable</strong>: This makes it super handy if you’re building experiments or pipelines where re-iteration matters.</li>
</ul>
<p>I hope this clears things up and helps you with your project. Feel free to reach out if you have more questions. Happy coding! <img src="https://emoji.discourse-cdn.com/apple/rocket.png?v=12" title=":rocket:" class="emoji" alt=":rocket:" loading="lazy" width="20" height="20"></p>
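<p>Note: as a later reply in this thread points out, recent versions of 🤗 Datasets support this natively. Indexing an <code>IterableDataset</code> with a column name returns a re-iterable column object (merged via huggingface/datasets PR #7564). A short example, assuming a <code>datasets</code> release that includes that feature:</p>
<pre data-code-wrap="python"><code class="lang-python">from datasets import load_dataset

# Streaming dataset; dataset["text"] is a re-iterable column, no wrapper needed
dataset = load_dataset("allenai/c4", "en", streaming=True, split="train")
texts = dataset["text"]
print(next(iter(texts)))
</code></pre>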
|
Coreference Resolution
|
https://discuss.huggingface.co/t/coreference-resolution/11394
| 11,394
| 5
|
2021-11-05T14:46:36.546000Z
|
[
{
"id": 24583,
"name": "Pierre Snell",
"username": "ierezell",
"avatar_template": "/user_avatar/discuss.huggingface.co/ierezell/{size}/2517_2.png",
"created_at": "2021-11-05T14:46:36.618Z",
"cooked": "<p>Hi,</p>\n<p>I’m quite familiar with the Huggingface ecosystem and I used it a lot.</p>\n<p>However, I cannot find resources/models / tutorials for coreference resolution except for <a href=\"https://github.com/huggingface/neuralcoref\" rel=\"noopener nofollow ugc\">neuralcoref</a> which last commit was years ago…</p>\n<p>I also saw some <a href=\"https://huggingface.co/models?sort=downloads&search=corefe\">models</a> but there is not any clue on how to use them (I guess a TokenClassification Head ?)</p>\n<p>Does anyone have any starting point for implementing a coreference resolution pipeline?<br>\n(I will start will neuralcoref if there is nothing better)</p>\n<p>Thanks in advance for any help,<br>\nHave a great day.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2021-11-05T14:48:20.497Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3913,
"reads": 59,
"readers_count": 58,
"score": 19521.8,
"yours": false,
"topic_id": 11394,
"topic_slug": "coreference-resolution",
"display_username": "Pierre Snell",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/neuralcoref",
"internal": false,
"reflection": false,
"title": "GitHub - huggingface/neuralcoref: ✨Fast Coreference Resolution in spaCy with Neural Networks",
"clicks": 94
},
{
"url": "https://huggingface.co/models?sort=downloads&search=corefe",
"internal": false,
"reflection": false,
"title": "Models - Hugging Face",
"clicks": 55
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 863,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/coreference-resolution/11394/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 24667,
"name": "Niels Rogge",
"username": "nielsr",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png",
"created_at": "2021-11-08T08:36:40.298Z",
"cooked": "<p>Hi,</p>\n<p>I suggest to take a look at this repo: <a href=\"https://github.com/mandarjoshi90/coref\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">GitHub - mandarjoshi90/coref: BERT for Coreference Resolution</a></p>\n<p>It includes multiple models (BERT, SpanBERT) fine-tuned on OntoNotes, an important benchmark for coreference resolution.</p>\n<p>There’s also a <a href=\"https://colab.research.google.com/drive/1SlERO9Uc9541qv6yH26LJz5IM9j7YVra#scrollTo=H0xPknceFORt\" rel=\"noopener nofollow ugc\">demo notebook</a>, showcasing how to run inference for a new piece of text to find all entity clusters.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2021-11-08T08:36:40.298Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 163,
"reads": 53,
"readers_count": 52,
"score": 875.6,
"yours": false,
"topic_id": 11394,
"topic_slug": "coreference-resolution",
"display_username": "Niels Rogge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/mandarjoshi90/coref",
"internal": false,
"reflection": false,
"title": "GitHub - mandarjoshi90/coref: BERT for Coreference Resolution",
"clicks": 632
},
{
"url": "https://colab.research.google.com/drive/1SlERO9Uc9541qv6yH26LJz5IM9j7YVra#scrollTo=H0xPknceFORt",
"internal": false,
"reflection": false,
"title": "Google Colab",
"clicks": 314
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 205,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/coreference-resolution/11394/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
},
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222878,
"name": "Anushka",
"username": "anuyash49",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/4af34b/{size}.png",
"created_at": "2025-05-19T06:05:54.578Z",
"cooked": "<p>not updated. can’t run SpanBERT</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-19T06:05:54.578Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 3,
"readers_count": 2,
"score": 45.6,
"yours": false,
"topic_id": 11394,
"topic_slug": "coreference-resolution",
"display_username": "Anushka",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94410,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/coreference-resolution/11394/3",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 205,
"username": "nielsr",
"name": "Niels Rogge",
"avatar_template": "/user_avatar/discuss.huggingface.co/nielsr/{size}/39617_2.png"
},
"action_code": null,
"via_email": null
}
] |
<p>Hi,</p>
<p>I’m quite familiar with the Hugging Face ecosystem and have used it a lot.</p>
<p>However, I cannot find resources/models/tutorials for coreference resolution except for <a href="https://github.com/huggingface/neuralcoref" rel="noopener nofollow ugc">neuralcoref</a>, whose last commit was years ago…</p>
<p>I also saw some <a href="https://huggingface.co/models?sort=downloads&search=corefe">models</a> but there is no clue on how to use them (I guess a TokenClassification head?)</p>
<p>Does anyone have a starting point for implementing a coreference resolution pipeline?<br>
(I will start with neuralcoref if there is nothing better)</p>
<p>Thanks in advance for any help,<br>
Have a great day.</p>
|
<p>Hi,</p>
<p>I suggest taking a look at this repo: <a href="https://github.com/mandarjoshi90/coref" class="inline-onebox" rel="noopener nofollow ugc">GitHub - mandarjoshi90/coref: BERT for Coreference Resolution</a></p>
<p>It includes multiple models (BERT, SpanBERT) fine-tuned on OntoNotes, an important benchmark for coreference resolution.</p>
<p>There’s also a <a href="https://colab.research.google.com/drive/1SlERO9Uc9541qv6yH26LJz5IM9j7YVra#scrollTo=H0xPknceFORt" rel="noopener nofollow ugc">demo notebook</a>, showcasing how to run inference for a new piece of text to find all entity clusters.</p>
|
Best model to extract text from old Church records written in cursive?
|
https://discuss.huggingface.co/t/best-model-to-extract-text-from-old-church-records-written-in-cursive/155677
| 155,677
| 13
|
2025-05-17T18:07:35.911000Z
|
[
{
"id": 222667,
"name": "Danijel Meglen",
"username": "podtalnica",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/p/65b543/{size}.png",
"created_at": "2025-05-17T18:07:35.963Z",
"cooked": "<p>Hello! I have a bunch of Church records that I got from Matricula Online (a website that stores church registers like books of birth, marriage and death). They are from 16th all the way to early 20th century. I would like to store their contents in a .txt file. Records are written in cursive in a mix between Slovene and German. <a href=\"https://data.matricula-online.eu/en/slovenia/ljubljana/zagradec/04415/?pg=12\" rel=\"noopener nofollow ugc\">Here</a>’s a random page so you can see what I mean. I have a GTX 1060 6GB so naturally I would like a model that I can run on my computer without major performance issues. What would be the best model to do this? Thank you in advance!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-17T18:07:35.963Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 34,
"reads": 7,
"readers_count": 6,
"score": 171.4,
"yours": false,
"topic_id": 155677,
"topic_slug": "best-model-to-extract-text-from-old-church-records-written-in-cursive",
"display_username": "Danijel Meglen",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://data.matricula-online.eu/en/slovenia/ljubljana/zagradec/04415/?pg=12",
"internal": false,
"reflection": false,
"title": "Krstna knjiga / Taufbuch - 04415 | Zagradec | Nadškofijski arhiv Ljubljana | Slovenia | Matricula Online",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 94287,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/best-model-to-extract-text-from-old-church-records-written-in-cursive/155677/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222716,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-18T00:16:26.225Z",
"cooked": "<p>Basically, this task can be performed using VLM, but recognizing actual handwritten characters and text is quite difficult. I recommend trying out various models online and using the ones that work well locally. With VRAM savings through quantization, there are models that can run with 6GB.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/microsoft/trocr-large-handwritten\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/microsoft/trocr-large-handwritten\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/8/58a023be01f4684d1da9cce52148e50c3fe48a91_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5C71A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/microsoft/trocr-large-handwritten\" target=\"_blank\" rel=\"noopener\">microsoft/trocr-large-handwritten · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"39422\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/k/4491bb/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/handwriting-recognition-cant-recognize-multiline-words/39422\">Handwriting recognition. Can't recognize multiline words</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n I expect the model trocr-base-handwritten to extract all the text from the picture. \n <a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/f/6/f6bc6717f6a697facab06af2e09ee4377b4987a6.png\" data-download-href=\"/uploads/short-url/zcJ6JJZMSLrlY4cXkMyXazFra0C.png?dl=1\" title=\"16e9e061da2.9e37232443debf53\" rel=\"noopener nofollow ugc\">[16e9e061da2.9e37232443debf53]</a> \nBut the result is got from it is sentiment. 
\nFull code: \nfrom transformers import TrOCRProcessor, VisionEncoderDecoderModel\nfrom PIL import Image\n\np = 'picture.png'\nprocessor = TrOCRProcessor.from_pretrained(\"trocr-base-handwritten/\")\nmodel = VisionEncoderDecoderModel.from_pretrained(\"trocr-base-handwritten/\")\nimage = Image.open(p)\nimage_rgb = image.convert('RGB')\npixels = proces…\n </blockquote>\n</aside>\n<aside class=\"quote\" data-post=\"1\" data-topic=\"143476\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/riccardodemaria/48/39915_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/handwritten-ocr-w-confidence-scores/143476\">Handwritten OCR w/ confidence scores</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n Hello everyone, \nI am currently looking for suggestions to implement a handwritten unstructured invoice parsing pipeline. \nWhat open-source models do you recommend for handwritten ocr/parsing? \nI have tried EaysOCR, Qwen, Intern-MPO, LayoutLM but they all seem to achieve poor results with handwritten invoices. \nThe idea is to find an open-source alternative to Textract OCR, so that I can fine-tune it when Textract performs poorly. \nThank you!\n </blockquote>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces?sort=trending&search=vl\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces?sort=trending&search=vl\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/3/f/3f219d23b16d4a243a12070474512a6d6730c841.png\" class=\"thumbnail\" data-dominant-color=\"F1F1F1\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces?sort=trending&search=vl\" target=\"_blank\" rel=\"noopener\">Spaces - Hugging Face</a></h3>\n\n <p>Discover amazing ML apps made by the community</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-18T00:16:26.225Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 155677,
"topic_slug": "best-model-to-extract-text-from-old-church-records-written-in-cursive",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/microsoft/trocr-large-handwritten",
"internal": false,
"reflection": false,
"title": "microsoft/trocr-large-handwritten · Hugging Face",
"clicks": 5
},
{
"url": "https://huggingface.co/spaces?sort=trending&search=vl",
"internal": false,
"reflection": false,
"title": "Spaces - Hugging Face",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/handwriting-recognition-cant-recognize-multiline-words/39422",
"internal": true,
"reflection": false,
"title": "Handwriting recognition. Can't recognize multiline words",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/handwritten-ocr-w-confidence-scores/143476",
"internal": true,
"reflection": false,
"title": "Handwritten OCR w/ confidence scores",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/best-model-to-extract-text-from-old-church-records-written-in-cursive/155677/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222778,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-18T12:17:19.657Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-18T12:17:19.657Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 5,
"readers_count": 4,
"score": 6,
"yours": false,
"topic_id": 155677,
"topic_slug": "best-model-to-extract-text-from-old-church-records-written-in-cursive",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/best-model-to-extract-text-from-old-church-records-written-in-cursive/155677/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello! I have a bunch of church records that I got from Matricula Online (a website that stores church registers like books of birth, marriage and death). They are from the 16th all the way to the early 20th century. I would like to store their contents in a .txt file. The records are written in cursive in a mix of Slovene and German. <a href="https://data.matricula-online.eu/en/slovenia/ljubljana/zagradec/04415/?pg=12" rel="noopener nofollow ugc">Here</a>’s a random page so you can see what I mean. I have a GTX 1060 6GB, so naturally I would like a model that I can run on my computer without major performance issues. What would be the best model to do this? Thank you in advance!</p>
|
<p>Basically, this task can be performed using a VLM, but recognizing actual handwritten characters and text is quite difficult. I recommend trying out various models online and then running the ones that work well locally. With the VRAM savings from quantization, some models can run within 6GB.</p><aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/microsoft/trocr-large-handwritten">
<header class="source">
<a href="https://huggingface.co/microsoft/trocr-large-handwritten" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/8/58a023be01f4684d1da9cce52148e50c3fe48a91_2_690x372.png" class="thumbnail" data-dominant-color="5C71A4" width="690" height="372"></div>
<h3><a href="https://huggingface.co/microsoft/trocr-large-handwritten" target="_blank" rel="noopener">microsoft/trocr-large-handwritten · Hugging Face</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<aside class="quote quote-modified" data-post="1" data-topic="39422">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/k/4491bb/48.png" class="avatar">
<a href="https://discuss.huggingface.co/t/handwriting-recognition-cant-recognize-multiline-words/39422">Handwriting recognition. Can't recognize multiline words</a> <a class="badge-category__wrapper " href="/c/beginners/5"><span data-category-id="5" style="--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!"><span class="badge-category__name">Beginners</span></span></a>
</div>
<blockquote>
I expect the model trocr-base-handwritten to extract all the text from the picture.
<a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/f/6/f6bc6717f6a697facab06af2e09ee4377b4987a6.png" data-download-href="/uploads/short-url/zcJ6JJZMSLrlY4cXkMyXazFra0C.png?dl=1" title="16e9e061da2.9e37232443debf53" rel="noopener nofollow ugc">[16e9e061da2.9e37232443debf53]</a>
But the result I got from it is sentiment.
Full code:
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
p = 'picture.png'
processor = TrOCRProcessor.from_pretrained("trocr-base-handwritten/")
model = VisionEncoderDecoderModel.from_pretrained("trocr-base-handwritten/")
image = Image.open(p)
image_rgb = image.convert('RGB')
pixels = proces…
</blockquote>
</aside>
<aside class="quote" data-post="1" data-topic="143476">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/riccardodemaria/48/39915_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/handwritten-ocr-w-confidence-scores/143476">Handwritten OCR w/ confidence scores</a> <a class="badge-category__wrapper " href="/c/beginners/5"><span data-category-id="5" style="--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!"><span class="badge-category__name">Beginners</span></span></a>
</div>
<blockquote>
Hello everyone,
I am currently looking for suggestions to implement a handwritten unstructured invoice parsing pipeline.
What open-source models do you recommend for handwritten ocr/parsing?
I have tried EasyOCR, Qwen, Intern-MPO, and LayoutLM, but they all seem to achieve poor results with handwritten invoices.
The idea is to find an open-source alternative to Textract OCR, so that I can fine-tune it when Textract performs poorly.
Thank you!
</blockquote>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/spaces?sort=trending&search=vl">
<header class="source">
<a href="https://huggingface.co/spaces?sort=trending&search=vl" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/original/3X/3/f/3f219d23b16d4a243a12070474512a6d6730c841.png" class="thumbnail" data-dominant-color="F1F1F1" width="690" height="372"></div>
<h3><a href="https://huggingface.co/spaces?sort=trending&search=vl" target="_blank" rel="noopener">Spaces - Hugging Face</a></h3>
<p>Discover amazing ML apps made by the community</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
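<p>For a concrete starting point, here is a minimal sketch of running the TrOCR checkpoint linked above on a single cropped image. The file name <code>line.png</code> is a placeholder, and real register pages would first need to be segmented into individual text lines, since TrOCR reads one line at a time:</p>
<pre><code class="lang-python">from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

# Load the handwritten-text checkpoint recommended above
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")

# "line.png" is a hypothetical crop containing a single line of handwriting
image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate and decode the transcription
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
</code></pre>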
|
Can I write to the file system?
|
https://discuss.huggingface.co/t/can-i-write-to-the-file-system/155246
| 155,246
| 24
|
2025-05-14T21:45:09.585000Z
|
[
{
"id": 222086,
"name": "Pablo Villanueva Domingo",
"username": "PabloVD",
"avatar_template": "/user_avatar/discuss.huggingface.co/pablovd/{size}/34178_2.png",
"created_at": "2025-05-14T21:45:09.637Z",
"cooked": "<p>I have an app where I need to write files to the file system, like:</p>\n<pre><code class=\"lang-auto\">os.makedirs(work_dir)\n</code></pre>\n<p>Is that possible? I tried with a docker image but I got a <code>PermissionError: [Errno 13] Permission denied</code> in that line. Any way to overcome that?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-14T21:45:31.658Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 44,
"reads": 8,
"readers_count": 7,
"score": 236.6,
"yours": false,
"topic_id": 155246,
"topic_slug": "can-i-write-to-the-file-system",
"display_username": "Pablo Villanueva Domingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 69899,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-write-to-the-file-system/155246/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222116,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-15T02:30:47.801Z",
"cooked": "<p>I think you can basically access the directory under <code>/home/user/</code> (or possibly <code>/home/</code> ?) using that method. There is no way to access a path higher up…</p>\n<p>(This also causes an error in <code>Dockerfile</code>’s <code>WORKDIR</code>, etc.)</p><aside class=\"quote\" data-post=\"1\" data-topic=\"152177\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/p/c6cbf5/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/error-code-137-cache-error/152177\">Error code 137 - cache error</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n build error \nJob failed with exit code: 137 \nthe docker image is FROM <a href=\"http://ghcr.io/open-webui/open-webui:latest\" rel=\"noopener nofollow ugc\">ghcr.io/open-webui/open-webui:latest</a>. \nmy cpu is of Upgrade, persistent storage = small. \nThis was working perfectly and stopped i think since 10 days ago\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-15T02:30:47.801Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 7,
"readers_count": 6,
"score": 21.4,
"yours": false,
"topic_id": 155246,
"topic_slug": "can-i-write-to-the-file-system",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/error-code-137-cache-error/152177",
"internal": true,
"reflection": false,
"title": "Error code 137 - cache error",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-write-to-the-file-system/155246/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222415,
"name": "Pablo Villanueva Domingo",
"username": "PabloVD",
"avatar_template": "/user_avatar/discuss.huggingface.co/pablovd/{size}/34178_2.png",
"created_at": "2025-05-16T08:36:31.656Z",
"cooked": "<p>That was the reason! I needed to create an user and work in the user folder. The steps to follow are explained <a href=\"https://huggingface.co/docs/hub/spaces-sdks-docker\">here</a>.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-16T08:36:31.656Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 155246,
"topic_slug": "can-i-write-to-the-file-system",
"display_username": "Pablo Villanueva Domingo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/spaces-sdks-docker",
"internal": false,
"reflection": false,
"title": "Docker Spaces",
"clicks": 8
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 69899,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-write-to-the-file-system/155246/3",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222553,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-16T20:36:50.624Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-16T20:36:50.624Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 2,
"readers_count": 1,
"score": 10.4,
"yours": false,
"topic_id": 155246,
"topic_slug": "can-i-write-to-the-file-system",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-write-to-the-file-system/155246/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have an app where I need to write files to the file system, like:</p>
<pre><code class="lang-auto">os.makedirs(work_dir)
</code></pre>
<p>Is that possible? I tried with a Docker image but I got a <code>PermissionError: [Errno 13] Permission denied</code> on that line. Any way to overcome that?</p>
|
<p>I think you can basically only access directories under <code>/home/user/</code> (or possibly <code>/home/</code>?) with that method. There is no way to access a path higher up…</p>
<p>(This also causes an error in <code>Dockerfile</code>’s <code>WORKDIR</code>, etc.)</p><aside class="quote" data-post="1" data-topic="152177">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/p/c6cbf5/48.png" class="avatar">
<a href="https://discuss.huggingface.co/t/error-code-137-cache-error/152177">Error code 137 - cache error</a> <a class="badge-category__wrapper " href="/c/beginners/5"><span data-category-id="5" style="--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!"><span class="badge-category__name">Beginners</span></span></a>
</div>
<blockquote>
build error
Job failed with exit code: 137
the docker image is FROM <a href="http://ghcr.io/open-webui/open-webui:latest" rel="noopener nofollow ugc">ghcr.io/open-webui/open-webui:latest</a>.
my cpu is of Upgrade, persistent storage = small.
This was working perfectly and stopped i think since 10 days ago
</blockquote>
</aside>
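<p>As a minimal sketch of the app-side fix (the exact directory layout here is an assumption; the key point is staying under <code>/home/user/</code>):</p>
<pre><code class="lang-python">import os

# In a Docker Space, the default non-root user can write under /home/user/,
# so resolve work directories relative to the home directory instead of
# an absolute path near the filesystem root. "work_dir" is a hypothetical name.
work_dir = os.path.join(os.path.expanduser("~"), "app", "work_dir")
os.makedirs(work_dir, exist_ok=True)  # no PermissionError here
</code></pre>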
|
Model loading in Colab but not Jupyterlab?!
|
https://discuss.huggingface.co/t/model-loading-in-colab-but-not-jupyterlab/154082
| 154,082
| 24
|
2025-05-08T08:37:41.707000Z
|
[
{
"id": 220538,
"name": "David Mathew",
"username": "Dagriffpatchfan",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/d07c76/{size}.png",
"created_at": "2025-05-08T08:37:41.764Z",
"cooked": "<p>Hi,<br>\nI just finetuned Tiny-Llama as tiny-sajar, a little experiment to test finetuning. Running the following code in google colab:</p>\n<pre><code class=\"lang-auto\">from transformers import AutoModelForCausalLM, AutoTokenizer\n\n# Replace with your model's path on the Hub\nmodel = AutoModelForCausalLM.from_pretrained(\"Dagriffpatchfan/tiny-sajar\")\ntokenizer = AutoTokenizer.from_pretrained(\"Dagriffpatchfan/tiny-sajar\")\n\n</code></pre>\n<p>Worked perfectly, loading the model. I was then able to run the following code:</p>\n<pre><code class=\"lang-auto\">questions = [\n \"Questions here\",\n]\n\nfor question in questions:\n prompt = f\"{question}\"\n inputs = tokenizer(prompt, return_tensors=\"pt\")\n outputs = model.generate(\n inputs.input_ids,\n max_length=100, # Maximum number of tokens to generate\n num_return_sequences=1, # Number of separate completions to generate\n temperature=0.7, # Sampling temperature (lower is more focused, higher is more random)\n top_p=0.9, # Nucleus sampling\n do_sample=True # Enable sampling\n )\n\n # Decode the generated text\n generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)\n print(f\"**{question}**\\n{generated_text}\\n\")\n\n</code></pre>\n<p>Which generated text as expected. I went to try this in a jupyterlab space and to my complete surprise I got the following error when I tried to load the model:<br>\n--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 4 1 from transformers import AutoModelForCausalLM, AutoTokenizer 3 # Replace with your model’s path on the Hub ----> 4 model = AutoModelForCausalLM.from_pretrained(“Dagriffpatchfan/tiny-sajar”) 5 tokenizer = AutoTokenizer.from_pretrained(“Dagriffpatchfan/tiny-sajar”) 7 questions = [ 8 “Who are you, and what is your role in the story?”, 9 “How did you come to know David and the Avengers?”, (…) 17 “If you had to pick one person to go on a mission with, who would it be and why?” 18 ] File <a href=\"https://dagriffpatchfan-jupyterlab.hf.space/lab/tree/~/miniconda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py#line=530\" rel=\"noopener nofollow ugc\">~/miniconda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py:531</a>, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 528 if kwargs.get(“quantization_config”, None) is not None: 529 _ = kwargs.pop(“quantization_config”) → 531 config, kwargs = AutoConfig.from_pretrained( 532 pretrained_model_name_or_path, 533 return_unused_kwargs=True, 534 trust_remote_code=trust_remote_code, 535 code_revision=code_revision, 536 _commit_hash=commit_hash, 537 **hub_kwargs, 538 **kwargs, 539 ) 541 # if torch_dtype=auto was passed here, ensure to pass it on 542 if kwargs_orig.get(“torch_dtype”, None) == “auto”: File <a href=\"https://dagriffpatchfan-jupyterlab.hf.space/lab/tree/~/miniconda/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py#line=1150\" rel=\"noopener nofollow ugc\">~/miniconda/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:1151</a>, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 1148 if pattern in str(pretrained_model_name_or_path): 1149 return CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs) → 1151 raise ValueError( 1152 f\"Unrecognized model in {pretrained_model_name_or_path}. 
\" 1153 f\"Should have a <code>model_type</code> key in its {CONFIG_NAME}, or contain one of the following strings \" 1154 f\"in its name: {', '.join(CONFIG_MAPPING.keys())}\" 1155 ) ValueError: Unrecognized model in Dagriffpatchfan/tiny-sajar. Should have a <code>model_type</code> key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, 
vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth</p>\n<p>I found this very confusing…does anyone know what I am experiencing?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-08T08:37:41.764Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 32,
"reads": 4,
"readers_count": 3,
"score": 155.8,
"yours": false,
"topic_id": 154082,
"topic_slug": "model-loading-in-colab-but-not-jupyterlab",
"display_username": "David Mathew",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://dagriffpatchfan-jupyterlab.hf.space/lab/tree/~/miniconda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py#line=530",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
},
{
"url": "https://dagriffpatchfan-jupyterlab.hf.space/lab/tree/~/miniconda/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py#line=1150",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90119,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/model-loading-in-colab-but-not-jupyterlab/154082/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220688,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-08T23:55:50.918Z",
"cooked": "<p>Since it includes models close to the latest ones such as Gemma 3, the Transoformers version is likely to be almost the latest. In fact, even older Transoformers models should work with the Llama architecture. This is indeed a strange error. The cause is probably not the code or the model itself.</p>\n<p>There seems to be a possibility of errors occurring in hf_transfer related to Jupyter. In other words, there may be an error in the download.</p><aside class=\"quote quote-modified\" data-post=\"4\" data-topic=\"153809\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/smostafanejad/48/34306_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/4\">AutoTokenizer.from_pretrained() suddenly raises an error</a> <a class=\"badge-category__wrapper \" href=\"/c/transformers/9\"><span data-category-id=\"9\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Transformers library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Transformers</span></span></a>\n </div>\n <blockquote>\n OK since this was an EnvironmentError I checked everything and I think I have found the culprit. \nIn my bashrc, I had export HF_HUB_ENABLE_HF_TRANSFER=1 set which means the problem might have something to do with an inconsistency with the hf-transfer package. Reading Hugging Face’s <a href=\"https://huggingface.co/docs/huggingface_hub/v0.31.0/package_reference/environment_variables\">Environment Variable documentation</a> gave the clue about the possibility of such incidents and undefined behavior \nHF_HUB_ENABLE_HF_TRANSFER\n\nSet to True to download files from the Hub using hf_transfer. It’s a Rust-bas…\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-08T23:55:50.918Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 5.8,
"yours": false,
"topic_id": 154082,
"topic_slug": "model-loading-in-colab-but-not-jupyterlab",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/4",
"internal": true,
"reflection": false,
"title": "AutoTokenizer.from_pretrained() suddenly raises an error",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/model-loading-in-colab-but-not-jupyterlab/154082/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221277,
"name": "David Mathew",
"username": "Dagriffpatchfan",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/d07c76/{size}.png",
"created_at": "2025-05-11T22:21:32.620Z",
"cooked": "<p>So I should set<br>\n<code>export HF_HUB_ENABLE_HF_TRANSFER=1</code><br>\nto 0 instead of 1?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-11T22:21:44.188Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 25.8,
"yours": false,
"topic_id": 154082,
"topic_slug": "model-loading-in-colab-but-not-jupyterlab",
"display_username": "David Mathew",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90119,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/model-loading-in-colab-but-not-jupyterlab/154082/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221281,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-11T23:28:05.454Z",
"cooked": "<p>Yea. Or maybe try reinstalling <code>hf_transfer</code>. If that’s the cause.</p>\n<pre><code class=\"lang-auto\">pip install -U hf_transfer hf_xet\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-11T23:28:05.454Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 1,
"yours": false,
"topic_id": 154082,
"topic_slug": "model-loading-in-colab-but-not-jupyterlab",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/model-loading-in-colab-but-not-jupyterlab/154082/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222337,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-15T23:33:42.138Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-15T23:33:42.138Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 154082,
"topic_slug": "model-loading-in-colab-but-not-jupyterlab",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/model-loading-in-colab-but-not-jupyterlab/154082/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,<br>
I just fine-tuned Tiny-Llama as tiny-sajar, a little experiment to test fine-tuning. Running the following code in Google Colab:</p>
<pre><code class="lang-auto">from transformers import AutoModelForCausalLM, AutoTokenizer
# Replace with your model's path on the Hub
model = AutoModelForCausalLM.from_pretrained("Dagriffpatchfan/tiny-sajar")
tokenizer = AutoTokenizer.from_pretrained("Dagriffpatchfan/tiny-sajar")
</code></pre>
<p>This worked perfectly and loaded the model. I was then able to run the following code:</p>
<pre><code class="lang-auto">questions = [
"Questions here",
]
for question in questions:
prompt = f"{question}"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
inputs.input_ids,
max_length=100, # Maximum number of tokens to generate
num_return_sequences=1, # Number of separate completions to generate
temperature=0.7, # Sampling temperature (lower is more focused, higher is more random)
top_p=0.9, # Nucleus sampling
do_sample=True # Enable sampling
)
# Decode the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"**{question}**\n{generated_text}\n")
</code></pre>
<p>Which generated text as expected. I went to try this in a jupyterlab space and to my complete surprise I got the following error when I tried to load the model:</p>
<pre><code class="lang-auto">---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[7], line 4
      1 from transformers import AutoModelForCausalLM, AutoTokenizer
      3 # Replace with your model's path on the Hub
----> 4 model = AutoModelForCausalLM.from_pretrained("Dagriffpatchfan/tiny-sajar")
      5 tokenizer = AutoTokenizer.from_pretrained("Dagriffpatchfan/tiny-sajar")
      7 questions = [
      8     "Who are you, and what is your role in the story?",
      9     "How did you come to know David and the Avengers?",
   (...)
     17     "If you had to pick one person to go on a mission with, who would it be and why?"
     18 ]

File ~/miniconda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py:531, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    528 if kwargs.get("quantization_config", None) is not None:
    529     _ = kwargs.pop("quantization_config")
--> 531 config, kwargs = AutoConfig.from_pretrained(
    532     pretrained_model_name_or_path,
    533     return_unused_kwargs=True,
    534     trust_remote_code=trust_remote_code,
    535     code_revision=code_revision,
    536     _commit_hash=commit_hash,
    537     **hub_kwargs,
    538     **kwargs,
    539 )
    541 # if torch_dtype=auto was passed here, ensure to pass it on
    542 if kwargs_orig.get("torch_dtype", None) == "auto":

File ~/miniconda/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:1151, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1148 if pattern in str(pretrained_model_name_or_path):
   1149     return CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs)
--> 1151 raise ValueError(
   1152     f"Unrecognized model in {pretrained_model_name_or_path}. "
   1153     f"Should have a `model_type` key in its {CONFIG_NAME}, or contain one of the following strings "
   1154     f"in its name: {', '.join(CONFIG_MAPPING.keys())}"
   1155 )

ValueError: Unrecognized model in Dagriffpatchfan/tiny-sajar. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, colpali, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v3, deformable_detr, deit, depth_anything, depth_pro, deta, detr, diffllama, dinat, dinov2, dinov2_with_registers, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, emu3, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, git, glm, glm4, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hiera, hubert, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mistral3, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_vl, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam_vision_model, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, textnet, time_series_transformer, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zamba2, zoedepth
</code></pre>
<p>I found this very confusing…does anyone know what I am experiencing?</p>
|
<p>Yeah. Or maybe try reinstalling <code>hf_transfer</code>, if that’s the cause.</p>
<pre><code class="lang-auto">pip install -U hf_transfer hf_xet
</code></pre>
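<p>A minimal sketch combining both suggestions from the thread — disabling <code>hf_transfer</code> via the environment variable, then retrying the load. <code>force_download=True</code> is an extra assumption here, used to re-fetch the files in case the earlier download left a corrupt cache:</p>
<pre><code class="lang-python">import os

# Must be set before huggingface_hub is imported (directly or via transformers)
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"

from transformers import AutoModelForCausalLM, AutoTokenizer

# Re-download the repo files, bypassing any possibly corrupted cache entries
model = AutoModelForCausalLM.from_pretrained(
    "Dagriffpatchfan/tiny-sajar", force_download=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "Dagriffpatchfan/tiny-sajar", force_download=True
)
</code></pre>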
|
Load a COCO format database from disk for DETR
|
https://discuss.huggingface.co/t/load-a-coco-format-database-from-disk-for-detr/153752
| 153,752
| 10
|
2025-05-06T12:13:56.072000Z
|
[
{
"id": 220090,
"name": "RAOUNAK LOUDAD",
"username": "Godouche",
"avatar_template": "/user_avatar/discuss.huggingface.co/godouche/{size}/46990_2.png",
"created_at": "2025-05-06T12:13:56.138Z",
"cooked": "<p>I have a COCO database in my disk (with a JSON in the annotations folder that contains image directions) and I would like to load it in HF dataset in orther to use CV models.</p>\n<p>Is there a function that allows that?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T12:13:56.138Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 283,
"reads": 9,
"readers_count": 8,
"score": 1381.8,
"yours": false,
"topic_id": 153752,
"topic_slug": "load-a-coco-format-database-from-disk-for-detr",
"display_username": "RAOUNAK LOUDAD",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/what-bounding-boxes-format-does-grounding-dino-use/161851/2",
"internal": true,
"reflection": true,
"title": "What bounding boxes format does Grounding DINO use?",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93025,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/load-a-coco-format-database-from-disk-for-detr/153752/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220222,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-07T01:56:39.463Z",
"cooked": "<p>Hmm… This?</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/datasets/issues/2526\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/datasets/issues/2526\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/datasets</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/datasets/issues/2526\" target=\"_blank\" rel=\"noopener\">Add COCO datasets</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2021-06-21\" data-time=\"07:48:32\" data-timezone=\"UTC\">07:48AM - 21 Jun 21 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/NielsRogge\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/d/2d192fe183e1cec5bb0c49111fce79b2203c0804.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"7B6C60\">\n NielsRogge\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n dataset request\n </span>\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n vision\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">## Adding a Dataset\n- **Name:** COCO\n- **Description:** COCO is a large-scale <span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">object detection, segmentation, and captioning dataset.\n- **Paper + website:** https://cocodataset.org/#home\n- **Data:** https://cocodataset.org/#download\n- **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API.\n\nInstructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-07T01:56:39.463Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 9,
"readers_count": 8,
"score": 56.8,
"yours": false,
"topic_id": 153752,
"topic_slug": "load-a-coco-format-database-from-disk-for-detr",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/issues/2526",
"internal": false,
"reflection": false,
"title": "Add COCO datasets · Issue #2526 · huggingface/datasets · GitHub",
"clicks": 34
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/load-a-coco-format-database-from-disk-for-detr/153752/2",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220344,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2025-05-07T12:45:42.759Z",
"cooked": "<aside class=\"quote no-group\" data-username=\"Godouche\" data-post=\"1\" data-topic=\"153752\" data-full=\"true\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/godouche/48/46990_2.png\" class=\"avatar\"> Godouche:</div>\n<blockquote>\n<p>I have a COCO database in my disk (with a JSON in the annotations folder that contains image directions) and I would like to load it in HF dataset in orther to use CV models.</p>\n<p>Is there a function that allows that?</p>\n</blockquote>\n</aside>\n<p>There is no COCO loader in the <code>datasets</code> library, but it would be a welcomed contribution in my opinion.</p>\n<p>All the existing data modules are listed <a href=\"https://github.com/huggingface/datasets/tree/main/src/datasets/packaged_modules\">here</a></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-07T12:45:42.759Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 1,
"incoming_link_count": 11,
"reads": 6,
"readers_count": 5,
"score": 86.2,
"yours": false,
"topic_id": 153752,
"topic_slug": "load-a-coco-format-database-from-disk-for-detr",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/datasets/tree/main/src/datasets/packaged_modules",
"internal": false,
"reflection": false,
"title": "datasets/src/datasets/packaged_modules at main · huggingface/datasets · GitHub",
"clicks": 14
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/load-a-coco-format-database-from-disk-for-detr/153752/3",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221922,
"name": "RAOUNAK LOUDAD",
"username": "Godouche",
"avatar_template": "/user_avatar/discuss.huggingface.co/godouche/{size}/46990_2.png",
"created_at": "2025-05-14T12:48:46.156Z",
"cooked": "<p>I wrote this code for loading COCO datasets in hugging face datasets that works with DETR,</p>\n<p>Adaptations:</p>\n<ul>\n<li>features of your COCO JSON file</li>\n<li>path to COCO folder in local</li>\n</ul>\n<pre><code class=\"lang-auto\">import json\nimport os\nimport subprocess\nfrom datasets import DatasetDict, Dataset, Features, Value, Sequence, ClassLabel, Image\n\n# Ensure the datasets module is installed\nsubprocess.check_call([\"pip\", \"install\", \"datasets\"])\n\nclass CocoDatasetLoader:\n def __init__(self, coco_folder):\n self.coco_folder = coco_folder\n\n def group_by_key_id(self, data, key_id, category_id_to_index):\n \"\"\"\n Groups data by a specified key and maps category IDs to indices.\n \n Args:\n data (list): List of dictionaries containing the data.\n key_id (str): The key to group by.\n category_id_to_index (dict): Mapping from category IDs to indices.\n \n Returns:\n dict: Grouped data.\n \"\"\"\n grouped_data = {}\n for item in data:\n key_value = item[key_id]\n if key_value not in grouped_data:\n grouped_data[key_value] = {k: [] for k in item.keys() if k != key_id}\n for k, v in item.items():\n if k != key_id:\n grouped_data[key_value][k].append(v)\n grouped_data[key_value]['category'] = [category_id_to_index[x] for x in grouped_data[key_value]['category_id']]\n return grouped_data\n \n def load_coco_hf_dataset(self, split):\n \"\"\"\n Loads COCO dataset and processes it into a format suitable for Hugging Face datasets.\n \n Args:\n split (str): Dataset split (e.g., 'Train', 'Test', 'Validation').\n \n Returns:\n Dataset: HuggingFace Dataset of the split of COCO dataset.\n \"\"\"\n # Load the JSON file\n json_file_path = os.path.join(self.coco_folder, f'annotations/instances_{split}.json')\n try:\n with open(json_file_path, 'r') as f:\n coco_data = json.load(f)\n except FileNotFoundError:\n print(f\"File not found: {json_file_path}\")\n return []\n\n # Extract category names and create a mapping from category IDs to indices\n category_names = [cat['name'] for cat in coco_data['categories']]\n category_id_to_index = {cat['id']: idx for idx, cat in enumerate(coco_data['categories'])}\n\n # Group annotations by 'image_id'\n grouped_annotations = self.group_by_key_id(coco_data['annotations'], 'image_id', category_id_to_index)\n\n # Create a dictionary of images\n grouped_images = {item['id']: item for item in coco_data['images']}\n\n # Initialize 'objects' field in grouped_images\n annotations_keys = list(grouped_annotations.values())[0].keys()\n for k, v in grouped_images.items():\n grouped_images[k]['objects'] = {key: [] for key in annotations_keys}\n\n # Populate 'objects' field with annotations\n for k, v in grouped_annotations.items():\n grouped_images[k]['objects'] = v\n\n # Add image paths and IDs\n for k, v in grouped_images.items():\n v['image'] = os.path.join(self.coco_folder, 'images', split, v['file_name'])\n v['image_id'] = v['id']\n\n # Create a Hugging Face dataset from the custom data using from_list for efficiency\n hf_dataset = Dataset.from_list(list(grouped_images.values()))\n\n # Define the features for the main dataset\n features = Features({\n 'id': Value('int64'),\n 'image_id': Value('int64'),\n 'image': Image(),\n 'file_name': Value('string'),\n 'license': Value('string'),\n 'flickr_url': Value('string'),\n 'coco_url': Value('string'),\n 'date_captured': Value('string'),\n 'width': Value('int64'),\n 'height': Value('int64'),\n 'objects': Sequence({\n 'id': Value('int64'),\n 'area': Value('float32'),\n 'bbox': 
Sequence(Value('float32')),\n 'category': ClassLabel(names=category_names),\n 'attributes': {'occluded': Value('bool')},\n 'category_id': Value('int64'),\n 'iscrowd': Value('int64'),\n 'segmentation': {\n 'counts': Sequence(Value('int64')),\n 'size': Sequence(Value('int64'))\n }\n })\n })\n\n # Cast the features for the Hugging Face dataset\n hf_dataset = hf_dataset.cast(features)\n\n return hf_dataset\n\n# Initialize the CocoDatasetLoader class\ncoco_loader = CocoDatasetLoader('/path/to/coco/folder/')\n\nhf_dataset_dict = DatasetDict()\nfor split in ['Train', 'Test', 'Validation']:\n # Load the COCO dataset for each split\n hf_dataset = coco_loader.load_coco_hf_dataset(split)\n \n # Print the dataset\n print(f\"Dataset for {split} split:\")\n print(hf_dataset)\n \n # Create a DatasetDict with the split\n hf_dataset_dict[split.lower()] = hf_dataset\n\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-14T12:48:46.156Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 23,
"reads": 5,
"readers_count": 4,
"score": 126,
"yours": false,
"topic_id": 153752,
"topic_slug": "load-a-coco-format-database-from-disk-for-detr",
"display_username": "RAOUNAK LOUDAD",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93025,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/load-a-coco-format-database-from-disk-for-detr/153752/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 222100,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-15T00:48:58.730Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-15T00:48:58.730Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 10.8,
"yours": false,
"topic_id": 153752,
"topic_slug": "load-a-coco-format-database-from-disk-for-detr",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/load-a-coco-format-database-from-disk-for-detr/153752/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have a COCO database on my disk (with a JSON file in the annotations folder that contains the image paths) and I would like to load it into an HF dataset in order to use CV models.</p>
<p>Is there a function that allows that?</p>
|
<p>I wrote this code for loading COCO datasets into Hugging Face <code>datasets</code>; it works with DETR.</p>
<p>Adapt the following to your setup:</p>
<ul>
<li>the features of your COCO JSON file</li>
<li>the local path to your COCO folder</li>
</ul>
<pre><code class="lang-auto">import json
import os
import subprocess
from datasets import DatasetDict, Dataset, Features, Value, Sequence, ClassLabel, Image
# Ensure the datasets module is installed
subprocess.check_call(["pip", "install", "datasets"])
class CocoDatasetLoader:
def __init__(self, coco_folder):
self.coco_folder = coco_folder
def group_by_key_id(self, data, key_id, category_id_to_index):
"""
Groups data by a specified key and maps category IDs to indices.
Args:
data (list): List of dictionaries containing the data.
key_id (str): The key to group by.
category_id_to_index (dict): Mapping from category IDs to indices.
Returns:
dict: Grouped data.
"""
grouped_data = {}
for item in data:
key_value = item[key_id]
if key_value not in grouped_data:
grouped_data[key_value] = {k: [] for k in item.keys() if k != key_id}
for k, v in item.items():
if k != key_id:
grouped_data[key_value][k].append(v)
grouped_data[key_value]['category'] = [category_id_to_index[x] for x in grouped_data[key_value]['category_id']]
return grouped_data
def load_coco_hf_dataset(self, split):
"""
Loads COCO dataset and processes it into a format suitable for Hugging Face datasets.
Args:
split (str): Dataset split (e.g., 'Train', 'Test', 'Validation').
Returns:
Dataset: HuggingFace Dataset of the split of COCO dataset.
"""
# Load the JSON file
json_file_path = os.path.join(self.coco_folder, f'annotations/instances_{split}.json')
try:
with open(json_file_path, 'r') as f:
coco_data = json.load(f)
except FileNotFoundError:
print(f"File not found: {json_file_path}")
return []
# Extract category names and create a mapping from category IDs to indices
category_names = [cat['name'] for cat in coco_data['categories']]
category_id_to_index = {cat['id']: idx for idx, cat in enumerate(coco_data['categories'])}
# Group annotations by 'image_id'
grouped_annotations = self.group_by_key_id(coco_data['annotations'], 'image_id', category_id_to_index)
# Create a dictionary of images
grouped_images = {item['id']: item for item in coco_data['images']}
# Initialize 'objects' field in grouped_images
annotations_keys = list(grouped_annotations.values())[0].keys()
for k, v in grouped_images.items():
grouped_images[k]['objects'] = {key: [] for key in annotations_keys}
# Populate 'objects' field with annotations
for k, v in grouped_annotations.items():
grouped_images[k]['objects'] = v
# Add image paths and IDs
for k, v in grouped_images.items():
v['image'] = os.path.join(self.coco_folder, 'images', split, v['file_name'])
v['image_id'] = v['id']
# Create a Hugging Face dataset from the custom data using from_list for efficiency
hf_dataset = Dataset.from_list(list(grouped_images.values()))
# Define the features for the main dataset
features = Features({
'id': Value('int64'),
'image_id': Value('int64'),
'image': Image(),
'file_name': Value('string'),
'license': Value('string'),
'flickr_url': Value('string'),
'coco_url': Value('string'),
'date_captured': Value('string'),
'width': Value('int64'),
'height': Value('int64'),
'objects': Sequence({
'id': Value('int64'),
'area': Value('float32'),
'bbox': Sequence(Value('float32')),
'category': ClassLabel(names=category_names),
'attributes': {'occluded': Value('bool')},
'category_id': Value('int64'),
'iscrowd': Value('int64'),
'segmentation': {
'counts': Sequence(Value('int64')),
'size': Sequence(Value('int64'))
}
})
})
# Cast the features for the Hugging Face dataset
hf_dataset = hf_dataset.cast(features)
return hf_dataset
# Initialize the CocoDatasetLoader class
coco_loader = CocoDatasetLoader('/path/to/coco/folder/')
hf_dataset_dict = DatasetDict()
for split in ['Train', 'Test', 'Validation']:
# Load the COCO dataset for each split
hf_dataset = coco_loader.load_coco_hf_dataset(split)
# Print the dataset
print(f"Dataset for {split} split:")
print(hf_dataset)
    # Add the split to the DatasetDict (skip splits whose annotation file was missing)
    if hf_dataset:
        hf_dataset_dict[split.lower()] = hf_dataset
</code></pre>
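<p>Not part of the original answer: a hypothetical next step, assuming the <code>facebook/detr-resnet-50</code> checkpoint and the <code>objects</code> field names defined in the features above, sketching how a loaded example might be handed to a DETR image processor:</p>
<pre><code class="lang-auto">from transformers import AutoImageProcessor

# Hypothetical sketch: preprocess one example for DETR.
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")

example = hf_dataset_dict["train"][0]

# DETR image processors expect COCO-style annotations:
# {"image_id": ..., "annotations": [{"bbox": ..., "category_id": ..., ...}, ...]}
coco_annotations = {
    "image_id": example["image_id"],
    "annotations": [
        {"bbox": bbox, "category_id": cat, "area": area, "iscrowd": crowd}
        for bbox, cat, area, crowd in zip(
            example["objects"]["bbox"],
            example["objects"]["category_id"],
            example["objects"]["area"],
            example["objects"]["iscrowd"],
        )
    ],
}

# Resizes/normalizes the image and converts the boxes to model-ready tensors.
inputs = processor(
    images=example["image"],
    annotations=coco_annotations,
    return_tensors="pt",
)
</code></pre>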
|
Potential issue with spaces analytics not working
|
https://discuss.huggingface.co/t/potential-issue-with-spaces-analytics-not-working/154627
| 154,627
| 24
|
2025-05-12T04:43:13.552000Z
|
[
{
"id": 221314,
"name": "Nolan Zandi",
"username": "nolanzandi",
"avatar_template": "/user_avatar/discuss.huggingface.co/nolanzandi/{size}/45859_2.png",
"created_at": "2025-05-12T04:43:13.613Z",
"cooked": "<p>I have been averaging about 300-400 visits per week for a few months, but about a week ago new visits stopped registering and it shows no visits in the last week:<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8.png\" data-download-href=\"/uploads/short-url/kj9msD530FM0M7mCmuuDDNyjHf2.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_690x327.png\" alt=\"image\" data-base62-sha1=\"kj9msD530FM0M7mCmuuDDNyjHf2\" width=\"690\" height=\"327\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_690x327.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_1035x490.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_1380x654.png 2x\" data-dominant-color=\"FDFDFD\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1920×911 61.8 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>However, my logs still show plenty of visitors using the space and I’ve had colleagues etc visit the site during the time frame without their visit being tracked and so it seems to be an issue with the tracking itself.</p>\n<p>Has anyone else been noticing this issue? Relatively minor issue in the grand scheme of things but I have seen my place on the trending list completely fall off so it does seem to have some sort of effect that I’d like to fix if possible.</p>\n<p>Thanks!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-12T04:43:13.613Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 43,
"reads": 7,
"readers_count": 6,
"score": 231.4,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "Nolan Zandi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91249,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221325,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-12T06:36:34.442Z",
"cooked": "<p>This seems like a bug… <a class=\"mention\" href=\"/u/pierric\">@pierric</a> <a class=\"mention\" href=\"/u/wauplin\">@Wauplin</a><br>\nIt seems that bug reports for Hub and Spaces can be submitted here.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/huggingface/hub-docs/issues\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/huggingface/hub-docs/issues\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/8/78177a161ae913cd4757fff65d40f0b0b4b2e0a0_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"F4F2EB\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/huggingface/hub-docs/issues\" target=\"_blank\" rel=\"noopener\">Issues · huggingface/hub-docs</a></h3>\n\n <p>Docs of the Hugging Face Hub. Contribute to huggingface/hub-docs development by creating an account on GitHub.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-12T06:36:34.442Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 7,
"readers_count": 6,
"score": 46.4,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/hub-docs/issues",
"internal": false,
"reflection": false,
"title": "GitHub · Where software is built",
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221689,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-05-13T15:17:37.522Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/nolanzandi\">@nolanzandi</a> thanks for reporting! We’re looking into it and I’ll update you soon.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-13T15:17:37.522Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/3",
"reactions": [
{
"id": "clap",
"type": "emoji",
"count": 1
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221703,
"name": "Nolan Zandi",
"username": "nolanzandi",
"avatar_template": "/user_avatar/discuss.huggingface.co/nolanzandi/{size}/45859_2.png",
"created_at": "2025-05-13T16:11:19.467Z",
"cooked": "<p>Thank you so much <a class=\"mention\" href=\"/u/meganariley\">@meganariley</a>. I appreciate it!</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-13T16:11:19.467Z",
"reply_count": 1,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 20.8,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "Nolan Zandi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91249,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 31941,
"username": "meganariley",
"name": "Megan Riley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 221864,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-05-14T09:38:49.608Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/nolanzandi\">@nolanzandi</a> thanks for waiting! This is now fixed. Let us know if you continue running into issues.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-14T09:38:49.608Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 30.8,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/5",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 91249,
"username": "nolanzandi",
"name": "Nolan Zandi",
"avatar_template": "/user_avatar/discuss.huggingface.co/nolanzandi/{size}/45859_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 222085,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-14T21:39:45.766Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-05-14T21:39:45.766Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 1,
"readers_count": 0,
"score": 0.2,
"yours": false,
"topic_id": 154627,
"topic_slug": "potential-issue-with-spaces-analytics-not-working",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/potential-issue-with-spaces-analytics-not-working/154627/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have been averaging about 300-400 visits per week for a few months, but about a week ago new visits stopped registering and it shows no visits in the last week:<br>
<div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8.png" data-download-href="/uploads/short-url/kj9msD530FM0M7mCmuuDDNyjHf2.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_690x327.png" alt="image" data-base62-sha1="kj9msD530FM0M7mCmuuDDNyjHf2" width="690" height="327" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_690x327.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_1035x490.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/8/e/8e55ad3c34bf42b46a0e4a1db3c101e0f4cc21f8_2_1380x654.png 2x" data-dominant-color="FDFDFD"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1920×911 61.8 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<p>However, my logs still show plenty of visitors using the Space, and I’ve had colleagues and others visit the site during that time frame without their visits being tracked, so it seems to be an issue with the tracking itself.</p>
<p>Has anyone else noticed this issue? It’s relatively minor in the grand scheme of things, but my Space has completely fallen off the trending list, so it does seem to have a real effect that I’d like to fix if possible.</p>
<p>Thanks!</p>
|
<p>Hi <a class="mention" href="/u/nolanzandi">@nolanzandi</a> thanks for waiting! This is now fixed. Let us know if you continue running into issues.</p>
|
Is there any agent that can search google
|
https://discuss.huggingface.co/t/is-there-any-agent-that-can-search-google/141016
| 141,016
| 25
|
2025-02-15T18:22:08.966000Z
|
[
{
"id": 202756,
"name": "elkahtib",
"username": "Abdelkareem",
"avatar_template": "/user_avatar/discuss.huggingface.co/abdelkareem/{size}/30422_2.png",
"created_at": "2025-02-15T18:22:09.024Z",
"cooked": "<p>I want to build a smolagent that can search the results of google search ?<br>\nthere is the google API search but i don’t want to use it’s limit is very bad to me.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-02-15T18:22:09.024Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 254,
"reads": 53,
"readers_count": 52,
"score": 1290.6,
"yours": false,
"topic_id": 141016,
"topic_slug": "is-there-any-agent-that-can-search-google",
"display_username": "elkahtib",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 19484,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/is-there-any-agent-that-can-search-google/141016/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 204566,
"name": "Michael Joiner",
"username": "Saxanth",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/ce73a5/{size}.png",
"created_at": "2025-02-22T12:35:22.936Z",
"cooked": "<p>Setting up your own search engine for this task is more rewarding, and costs less.</p>\n<p>This is what I use for web search:</p><aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/searxng/searxng\">\n <header class=\"source\">\n\n <a href=\"https://github.com/searxng/searxng\" target=\"_blank\" rel=\"noopener nofollow ugc\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/b/9beecdfdf8f3be6609446a05179d77a6238f4e22_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"EFF1F4\">\n\n <h3><a href=\"https://github.com/searxng/searxng\" target=\"_blank\" rel=\"noopener nofollow ugc\">GitHub - searxng/searxng: SearXNG is a free internet metasearch engine which...</a></h3>\n\n <p><span class=\"github-repo-description\">SearXNG is a free internet metasearch engine which aggregates results from various search services and databases. Users are neither tracked nor profiled.</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-02-22T12:35:22.936Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 11,
"reads": 40,
"readers_count": 39,
"score": 93,
"yours": false,
"topic_id": 141016,
"topic_slug": "is-there-any-agent-that-can-search-google",
"display_username": "Michael Joiner",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/searxng/searxng",
"internal": false,
"reflection": false,
"title": "GitHub - searxng/searxng: SearXNG is a free internet metasearch engine which aggregates results from various search services and databases. Users are neither tracked nor profiled.",
"clicks": 41
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 81771,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/is-there-any-agent-that-can-search-google/141016/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 205862,
"name": "gael",
"username": "gael1130",
"avatar_template": "/user_avatar/discuss.huggingface.co/gael1130/{size}/42164_2.png",
"created_at": "2025-02-28T10:40:19.048Z",
"cooked": "<p>Yes, you can use the GoogleSearchTool, which is one of the default tools of smolagents.</p>\n<pre><code class=\"lang-auto\">import os\nfrom smolagents import GoogleSearchTool, HfApiModel\nos.environ[\"SERPAPI_API_KEY\"] = userdata.get('SERPAPI_API_KEY')\n\nmodel = HfApiModel(model_id=\"Qwen/Qwen2.5-Coder-32B-Instruct\", provider=\"together\")\n\nagent = CodeAgent(\n model=model,\n tools=[GoogleSearchTool()]\n)\n</code></pre>\n<p><a href=\"https://serpapi.com/\" rel=\"noopener nofollow ugc\">The link to get your Serp API key</a>.</p>\n<p>And if you want to go beyond, you can use the <code>DuckDuckGoSearchTool</code>. It also has limits but maybe a combination of both can help?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-02-28T10:40:19.048Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 29,
"readers_count": 28,
"score": 85.8,
"yours": false,
"topic_id": 141016,
"topic_slug": "is-there-any-agent-that-can-search-google",
"display_username": "gael",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://serpapi.com/",
"internal": false,
"reflection": false,
"title": "SerpApi: Google Search API",
"clicks": 18
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 85367,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/is-there-any-agent-that-can-search-google/141016/3",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
},
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221651,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-13T12:09:37.100Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-13T12:09:37.100Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 11,
"readers_count": 10,
"score": 7.2,
"yours": false,
"topic_id": 141016,
"topic_slug": "is-there-any-agent-that-can-search-google",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/is-there-any-agent-that-can-search-google/141016/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I want to build a smolagent that can search Google search results.<br>
There is the Google Search API, but I don’t want to use it; its rate limit is too restrictive for me.</p>
|
<p>Yes, you can use the GoogleSearchTool, which is one of the default tools of smolagents.</p>
<pre><code class="lang-auto">import os
from smolagents import GoogleSearchTool, HfApiModel
os.environ["SERPAPI_API_KEY"] = userdata.get('SERPAPI_API_KEY')
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct", provider="together")
agent = CodeAgent(
model=model,
tools=[GoogleSearchTool()]
)
</code></pre>
<p><a href="https://serpapi.com/" rel="noopener nofollow ugc">The link to get your Serp API key</a>.</p>
<p>And if you want to go beyond, you can use the <code>DuckDuckGoSearchTool</code>. It also has rate limits, but a combination of both may help; see the sketch below.</p>
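<p>A minimal sketch of that DuckDuckGo alternative, assuming the same smolagents tool and model names used above (the query string is just an example):</p>
<pre><code class="lang-auto">from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# DuckDuckGoSearchTool needs no API key, unlike GoogleSearchTool.
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct", provider="together")
agent = CodeAgent(model=model, tools=[DuckDuckGoSearchTool()])

agent.run("What is the latest stable release of Python?")
</code></pre>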
|
Facing issue using a model hosted on HuggingFace Server and talking to it using API_KEY
|
https://discuss.huggingface.co/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529
| 154,529
| 5
|
2025-05-11T09:15:16.256000Z
|
[
{
"id": 221171,
"name": "S",
"username": "Shaleensr",
"avatar_template": "/user_avatar/discuss.huggingface.co/shaleensr/{size}/47299_2.png",
"created_at": "2025-05-11T09:15:16.322Z",
"cooked": "<p>I am trying to create a simple langchain app on text-generation using API to communicate with models on HuggingFace servers.</p>\n<p>I created a “.env” file and stored by KEY in the variable: “HUGGINGFACEHUB_API_TOKEN”<br>\nI also checked it, API token is valid.</p>\n<p>Post that, I tried running this code snippet:</p>\n<pre><code class=\"lang-auto\"> from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint\n from dotenv import load_dotenv\n\n load_dotenv()\n\n llm = HuggingFaceEndpoint(\n repo_id=\"TinyLlama/TinyLlama-1.1B-Chat-v1.0\",\n task=\"text-generation\"\n )\n\n model = ChatHuggingFace(llm=llm)\n result = model.invoke(\"What is the capital of India\")\n print(result.content)\n</code></pre>\n<p>This is giving an error. I tried multiple things around it, but nothing worked.</p>\n<p>Here is the error log:<br>\nTraceback (most recent call last):<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\2.ChatModels\\2_chatmodel_hf_api.py”, line 13, in <br>\nresult = model.invoke(“What is the capital of India”)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py”, line 370, in invoke<br>\nself.generate_prompt(<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py”, line 947, in generate_prompt<br>\nreturn self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py”, line 766, in generate<br>\nself._generate_with_cache(<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py”, line 1012, in _generate_with_cache<br>\nresult = self._generate(<br>\n^^^^^^^^^^^^^^^<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\langchain_huggingface\\chat_models\\huggingface.py”, line 574, in <em>generate<br>\nanswer = self.llm.client.chat_completion(messages=message_dicts, **params)<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nFile “C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\huggingface_hub\\inference_client.py”, line 886, in chat_completion<br>\nprovider_helper = get_provider_helper(<br>\n^^^^^^^^^^^^^^^^^^^^<br>\nFile \"C:\\Users\\SS\\Desktop\\Camp_langchain_models\\venv\\Lib\\site-packages\\huggingface_hub\\inference_providers_<em>init</em></em>.py\", line 165, in get_provider_helper<br>\nprovider = next(iter(provider_mapping))<br>\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^<br>\nStopIteration</p>\n<p>I am new to it. Any guidance around this is much appreciated. Thank you.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T09:15:16.322Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 420,
"reads": 37,
"readers_count": 36,
"score": 2107.4,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "S",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/stopiteration-error/155463/2",
"internal": true,
"reflection": true,
"title": "Stopiteration error",
"clicks": 7
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93574,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221179,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-11T10:04:01.158Z",
"cooked": "<p>I think LangChain has not yet caught up with the changes in Hugging Face’s specifications.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/huggingface_hub/issues/2966\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/huggingface_hub/issues/2966\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/huggingface_hub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/huggingface_hub/issues/2966\" target=\"_blank\" rel=\"noopener\">API Request issue</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2025-03-31\" data-time=\"12:54:25\" data-timezone=\"UTC\">12:54PM - 31 Mar 25 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/surya7N\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/d/5/d533d2405b2e9653457ccfec70dafe17cf25e41a.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"D5C3E8\">\n surya7N\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\redi<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">s\\connection.py:77: UserWarning: redis-py works best with hiredis. Please consider installing\n warnings.warn(msg)\nWrite Query Here: describe\nC:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\huggingface_hub\\utils\\_deprecation.py:131: FutureWarning: 'post' (from 'huggingface_hub.inference._client') is deprecated and will be removed from version '0.31.0'. Making direct POST requests to the inference server is not supported anymore. Please use task methods instead (e.g. `InferenceClient.chat_completion`). 
If your use case is not supported, please open an issue in https://github.com/huggingface/huggingface_hub.\n warnings.warn(warning_message, FutureWarning)\nTraceback (most recent call last):\n File \"C:\\Users\\Public\\CHATBOT\\llm_memory_with_Model.py\", line 59, in <module>\n response=qa_chain.invoke({'query': user_query})\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 170, in invoke\n raise e\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 160, in invoke\n self._call(inputs, run_manager=run_manager)\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\retrieval_qa\\base.py\", line 154, in _call\n answer = self.combine_documents_chain.run(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py\", line 181, in warning_emitting_wrapper\n return wrapped(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 611, in run \n return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py\", line 181, in warning_emitting_wrapper\n return wrapped(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 389, in __call__\n return self.invoke(\n ^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 170, in invoke\n raise e\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 160, in invoke\n self._call(inputs, run_manager=run_manager)\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\combine_documents\\base.py\", line 138, in _call\n output, extra_return_dict = self.combine_docs(\n ^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\combine_documents\\stuff.py\", line 259, in combine_docs\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\llm.py\", line 318, in predict\n return self(kwargs, callbacks=callbacks)[self.output_key]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\_api\\deprecation.py\", line 181, in warning_emitting_wrapper\n return wrapped(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 389, in __call__\n return self.invoke(\n ^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 170, 
in invoke\n raise e\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\base.py\", line 160, in invoke\n self._call(inputs, run_manager=run_manager)\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\llm.py\", line 126, in _call \n response = self.generate([inputs], run_manager=run_manager)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain\\chains\\llm.py\", line 138, in generate\n return self.llm.generate_prompt(\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\language_models\\llms.py\", line 763, in generate_prompt\n return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\language_models\\llms.py\", line 966, in generate\n output = self._generate_helper(\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\language_models\\llms.py\", line 787, in _generate_helper\n self._generate(\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_core\\language_models\\llms.py\", line 1526, in _generate\n self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\langchain_huggingface\\llms\\huggingface_endpoint.py\", line 312, in _call\n response = self.client.post(\n ^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\huggingface_hub\\utils\\_deprecation.py\", line 132, in inner_f\n return f(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\huggingface_hub\\inference\\_client.py\", line 302, in post\n mapped_model = provider_helper._prepare_mapped_model(model or self.model)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\huggingface_hub\\inference\\_providers\\hf_inference.py\", line 35, in _prepare_mapped_model\n _check_supported_task(model_id, self.task)\n File \"C:\\Users\\suboyina\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\huggingface_hub\\inference\\_providers\\hf_inference.py\", line 156, in _check_supported_task\n raise ValueError(\nValueError: Model 'mistralai/Mistral-7B-Instruct-v0.3' doesn't support task 'unknown'. Supported tasks: 'text-generation', got: 'unknown'</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p>Meanwhile, one possible solution would be to downgrade your <code>huggingface-hub</code> version to 0.27.1 or below.</p>\n</blockquote>\n<pre><code class=\"lang-auto\">pip install huggingface_hub<=0.27.1\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T10:04:01.158Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 35,
"readers_count": 34,
"score": 62,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/huggingface_hub/issues/2966",
"internal": false,
"reflection": false,
"title": "API Request issue · Issue #2966 · huggingface/huggingface_hub · GitHub",
"clicks": 18
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221213,
"name": "NITESH KUMAR",
"username": "niteshburnwal",
"avatar_template": "/user_avatar/discuss.huggingface.co/niteshburnwal/{size}/47260_2.png",
"created_at": "2025-05-11T15:13:25.742Z",
"cooked": "<p>I am also facing similar issue<br>\nplease let me know if you found any solution</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T15:13:25.742Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15,
"reads": 32,
"readers_count": 31,
"score": 101.4,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "NITESH KUMAR",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93503,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221218,
"name": "Mahmut C",
"username": "mahmutc",
"avatar_template": "/user_avatar/discuss.huggingface.co/mahmutc/{size}/52583_2.png",
"created_at": "2025-05-11T16:04:11.421Z",
"cooked": "<p><code>pip install langchain-huggingface langchain</code></p>\n<pre><code class=\"lang-auto\">from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint\nllm = HuggingFaceEndpoint(\n repo_id=\"deepseek-ai/DeepSeek-R1\",\n provider=\"together\"\n)\nmodel = ChatHuggingFace(llm=llm)\nresult = model.invoke(\"What is the capital of India\")\n</code></pre>\n<p>This works for me with the following setup:</p>\n<pre><code class=\"lang-auto\">$ pip freeze | grep huggingface\nhuggingface-hub==0.31.1\nlangchain-huggingface==0.2.0\n$ pip freeze | grep langchain\nlangchain==0.3.25\nlangchain-core==0.3.59\nlangchain-huggingface==0.2.0\nlangchain-text-splitters==0.3.8\n</code></pre>",
"post_number": 4,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T16:05:29.747Z",
"reply_count": 2,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 15,
"reads": 31,
"readers_count": 30,
"score": 121.2,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "Mahmut C",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/huggingface-hub-utils-errors-hfhubhttperror-404-client-error-not-found-for-url/161277/2",
"internal": true,
"reflection": true,
"title": "huggingface_hub.utils._errors.HfHubHTTPError: 404 Client Error: Not Found for url:",
"clicks": 0
}
],
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61570,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221219,
"name": "Mahmut C",
"username": "mahmutc",
"avatar_template": "/user_avatar/discuss.huggingface.co/mahmutc/{size}/52583_2.png",
"created_at": "2025-05-11T16:11:55.644Z",
"cooked": "<p>Please note the following regarding <a href=\"https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0\">TinyLlama/TinyLlama-1.1B-Chat-v1.0</a>:</p>\n<blockquote>\n<p>This model isn’t deployed by any Inference Provider.</p>\n</blockquote>",
"post_number": 5,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T16:12:40.609Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 32,
"readers_count": 31,
"score": 61.4,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "Mahmut C",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"internal": false,
"reflection": false,
"title": "TinyLlama/TinyLlama-1.1B-Chat-v1.0 · Hugging Face",
"clicks": 20
}
],
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 61570,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 221221,
"name": "S",
"username": "Shaleensr",
"avatar_template": "/user_avatar/discuss.huggingface.co/shaleensr/{size}/47299_2.png",
"created_at": "2025-05-11T16:25:46.336Z",
"cooked": "<p>Thank you <a class=\"mention\" href=\"/u/mahmutc\">@mahmutc</a>. This code snippet worked for me.</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T16:25:46.336Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 29,
"readers_count": 28,
"score": 25.8,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "S",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93574,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/6",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 61570,
"username": "mahmutc",
"name": "Mahmut C",
"avatar_template": "/user_avatar/discuss.huggingface.co/mahmutc/{size}/52583_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 221222,
"name": "S",
"username": "Shaleensr",
"avatar_template": "/user_avatar/discuss.huggingface.co/shaleensr/{size}/47299_2.png",
"created_at": "2025-05-11T16:28:01.145Z",
"cooked": "<p>The below snippet by mahmutc worked for me:</p>\n<aside class=\"quote no-group quote-modified\" data-username=\"mahmutc\" data-post=\"4\" data-topic=\"154529\">\n<div class=\"title\">\n<div class=\"quote-controls\"></div>\n<img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/mahmutc/48/52583_2.png\" class=\"avatar\"> mahmutc:</div>\n<blockquote>\n<pre><code class=\"lang-auto\">from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint\nllm = HuggingFaceEndpoint(\n repo_id=\"deepseek-ai/DeepSeek-R1\",\n provider=\"together\"\n)\nmodel = ChatHuggingFace(llm=llm)\nresult = model.invoke(\"What is the capital of India\")\n```from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint\nllm = HuggingFaceEndpoint(\n repo_id=\"deepseek-ai/DeepSeek-R1\",\n provider=\"together\"\n)\nmodel = ChatHuggingFace(llm=llm)\nresult = model.invoke(\"What is the capital of India\")\n</code></pre>\n</blockquote>\n</aside>",
"post_number": 7,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-05-11T16:28:01.145Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 1,
"incoming_link_count": 5,
"reads": 29,
"readers_count": 28,
"score": 45.8,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "S",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93574,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/7",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 93503,
"username": "niteshburnwal",
"name": "NITESH KUMAR",
"avatar_template": "/user_avatar/discuss.huggingface.co/niteshburnwal/{size}/47260_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 221312,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-12T04:28:01.352Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 8,
"post_type": 3,
"posts_count": 8,
"updated_at": "2025-05-12T04:28:01.352Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 20,
"readers_count": 19,
"score": 29,
"yours": false,
"topic_id": 154529,
"topic_slug": "facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/facing-issue-using-a-model-hosted-on-huggingface-server-and-talking-to-it-using-api-key/154529/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am trying to create a simple langchain app on text-generation using API to communicate with models on HuggingFace servers.</p>
<p>I created a “.env” file and stored my KEY in the variable: “HUGGINGFACEHUB_API_TOKEN”<br>
I also checked it; the API token is valid.</p>
<p>Post that, I tried running this code snippet:</p>
<pre><code class="lang-auto"> from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from dotenv import load_dotenv
load_dotenv()
llm = HuggingFaceEndpoint(
repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
task="text-generation"
)
model = ChatHuggingFace(llm=llm)
result = model.invoke("What is the capital of India")
print(result.content)
</code></pre>
<p>This is giving an error. I tried multiple things around it, but nothing worked.</p>
<p>Here is the error log:</p>
<pre><code class="lang-auto">Traceback (most recent call last):
  File "C:\Users\SS\Desktop\Camp_langchain_models\2.ChatModels\2_chatmodel_hf_api.py", line 13, in &lt;module&gt;
    result = model.invoke("What is the capital of India")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 370, in invoke
    self.generate_prompt(
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 947, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 766, in generate
    self._generate_with_cache(
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1012, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py", line 574, in _generate
    answer = self.llm.client.chat_completion(messages=message_dicts, **params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 886, in chat_completion
    provider_helper = get_provider_helper(
                      ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference\_providers\__init__.py", line 165, in get_provider_helper
    provider = next(iter(provider_mapping))
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
</code></pre>
<p>I am new to this. Any guidance around this is much appreciated. Thank you.</p>
|
<p><code>pip install langchain-huggingface langchain</code></p>
<pre><code class="lang-auto">from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
repo_id="deepseek-ai/DeepSeek-R1",
provider="together"
)
model = ChatHuggingFace(llm=llm)
result = model.invoke("What is the capital of India")
</code></pre>
<p>This works for me with the following setup:</p>
<pre><code class="lang-auto">$ pip freeze | grep huggingface
huggingface-hub==0.31.1
langchain-huggingface==0.2.0
$ pip freeze | grep langchain
langchain==0.3.25
langchain-core==0.3.59
langchain-huggingface==0.2.0
langchain-text-splitters==0.3.8
</code></pre>
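<p>For reference, the <code>StopIteration</code> in the original traceback is raised by <code>get_provider_helper</code> when the repo has no inference providers mapped to it. A sketch for inspecting that mapping, assuming a recent <code>huggingface_hub</code> in which <code>model_info</code> accepts the <code>expand</code> parameter and exposes <code>inference_provider_mapping</code>:</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import model_info

info = model_info(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    expand=["inferenceProviderMapping"],  # ask the Hub which providers serve it
)
# An empty mapping means no provider serves the repo, hence the StopIteration.
print(info.inference_provider_mapping)
</code></pre>
<p>And to rule LangChain out entirely, the same request can be made directly with <code>huggingface_hub</code>; a minimal sketch, assuming <code>HF_TOKEN</code> is set in the environment and a <code>huggingface_hub</code> version new enough to take a <code>provider</code> argument:</p>
<pre data-code-wrap="python"><code class="lang-python">from huggingface_hub import InferenceClient

client = InferenceClient(provider="together")  # token is read from HF_TOKEN
resp = client.chat_completion(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "What is the capital of India?"}],
)
print(resp.choices[0].message.content)
</code></pre>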
|
Inquiry Regarding Out of Memory Issue During LoRA Fine-Tuning
|
https://discuss.huggingface.co/t/inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning/153432
| 153,432
| 13
|
2025-05-04T17:04:54.737000Z
|
[
{
"id": 219683,
"name": "HSU Chin wei",
"username": "bensonbbn",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/b/f475e1/{size}.png",
"created_at": "2025-05-04T17:04:54.813Z",
"cooked": "<p>I am a student currently working on training the LLAMA-4-Scout-17B-16E-Instruct model using LoRA, running on an H100 GPU with 80GB VRAM (on Lambda Labs). However, I have encountered an out of memory error during the training process. I understand that this might fall slightly outside the scope of the course, but despite extensive research and reviewing various community discussions, I have not been able to resolve the issue.</p>\n<p>Here is a brief outline of my setup:</p>\n<p>Hardware: H100 (80GB VRAM)</p>\n<p>Model: LLAMA-4-Scout-17B-16E-Instruct (download on unsloth hugging face)</p>\n<p>Training Method: LoRA</p>\n<p>Error: CUDA out of memory</p>\n<p>Code snippet:<br>\nimport torch<br>\nfrom transformers import AutoTokenizer,TrainingArguments,Trainer,DataCollatorForLanguageModeling,AutoModelForCausalLM<br>\nfrom peft import LoraConfig, get_peft_model, TaskType<br>\nfrom datasets import load_dataset<br>\nfrom accelerate import dispatch_model<br>\nfrom accelerate import Accelerator<br>\nfrom accelerate.utils import get_balanced_memory, infer_auto_device_map<br>\nimport os<br>\nos.environ[“PYTORCH_CUDA_ALLOC_CONF”] = “expandable_segments:True”</p>\n<p>model_path = “/home/ubuntu/llama4”<br>\ndataset_path = “llama_nc_instruction_train.jsonl”<br>\noutput_dir = “./merged_llama4_nccode”</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/brain.png?v=14\" title=\":brain:\" class=\"emoji\" alt=\":brain:\" loading=\"lazy\" width=\"20\" height=\"20\"> loading tokenizer…”)<br>\ntokenizer = AutoTokenizer.from_pretrained(model_path)</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/package.png?v=14\" title=\":package:\" class=\"emoji\" alt=\":package:\" loading=\"lazy\" width=\"20\" height=\"20\"> loading model…(使用 safetensors)”)<br>\nmodel = AutoModelForCausalLM.from_pretrained(<br>\nmodel_path,<br>\ntorch_dtype=torch.bfloat16,<br>\nlow_cpu_mem_usage=True,<br>\ntrust_remote_code=True<br>\n)</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/wrench.png?v=14\" title=\":wrench:\" class=\"emoji\" alt=\":wrench:\" loading=\"lazy\" width=\"20\" height=\"20\"> applying LoRA setting…”)<br>\nlora_config = LoraConfig(<br>\nr=8,<br>\nlora_alpha=32, <span class=\"hashtag-raw\">#有人用8</span><br>\ntarget_modules=[“q_proj”, “v_proj”],<br>\nlora_dropout=0.05,<br>\nbias=“none”,<br>\ntask_type=TaskType.CAUSAL_LM,<br>\n)</p>\n<p>model = get_peft_model(model, lora_config)</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/page_facing_up.png?v=14\" title=\":page_facing_up:\" class=\"emoji\" alt=\":page_facing_up:\" loading=\"lazy\" width=\"20\" height=\"20\"> loading data…”)<br>\ndataset = load_dataset(“json”, data_files=dataset_path, split=“train”)</p>\n<p>def tokenize(example):<br>\ntokenized_inputs = tokenizer(<br>\nexample[“text”],<br>\ntruncation=True,<br>\npadding=“max_length”,<br>\nmax_length=4196<br>\n)<br>\nreturn tokenized_inputs</p>\n<p>tokenized_dataset = dataset.map(tokenize, batched=True, remove_columns=[“text”])</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/bullseye.png?v=14\" title=\":bullseye:\" class=\"emoji\" alt=\":bullseye:\" loading=\"lazy\" width=\"20\" height=\"20\"> establish Trainer…”)<br>\ntraining_args = TrainingArguments(<br>\noutput_dir=“./lora_tmp”,<br>\nnum_train_epochs=3,<br>\nper_device_train_batch_size=1, <span class=\"hashtag-raw\">#有人用64</span><br>\ngradient_accumulation_steps=512,<br>\nlearning_rate=2e-4,<br>\nlogging_steps=10,<br>\nsave_strategy=“no”,<br>\n)</p>\n<p>trainer = 
Trainer(<br>\nmodel=model,<br>\nargs=training_args,<br>\ntrain_dataset=tokenized_dataset,<br>\ntokenizer=tokenizer,<br>\ndata_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),<br>\n)</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/rocket.png?v=14\" title=\":rocket:\" class=\"emoji\" alt=\":rocket:\" loading=\"lazy\" width=\"20\" height=\"20\"> training…”)<br>\ntrainer.train()</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/floppy_disk.png?v=14\" title=\":floppy_disk:\" class=\"emoji\" alt=\":floppy_disk:\" loading=\"lazy\" width=\"20\" height=\"20\"> merge LoRA weight…”)<br>\nmodel = model.merge_and_unload()</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/package.png?v=14\" title=\":package:\" class=\"emoji\" alt=\":package:\" loading=\"lazy\" width=\"20\" height=\"20\"> save model to:”, output_dir)<br>\nmodel.save_pretrained(output_dir)<br>\ntokenizer.save_pretrained(output_dir)</p>\n<p>print(“<img src=\"https://emoji.discourse-cdn.com/apple/white_check_mark.png?v=14\" title=\":white_check_mark:\" class=\"emoji\" alt=\":white_check_mark:\" loading=\"lazy\" width=\"20\" height=\"20\"> finish!”)</p>\n<p>and this is the error:</p>\n<p><img src=\"https://emoji.discourse-cdn.com/apple/brain.png?v=14\" title=\":brain:\" class=\"emoji\" alt=\":brain:\" loading=\"lazy\" width=\"20\" height=\"20\"> 載入 tokenizer…<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/package.png?v=14\" title=\":package:\" class=\"emoji\" alt=\":package:\" loading=\"lazy\" width=\"20\" height=\"20\"> 載入模型…(使用 safetensors)<br>\nLoading checkpoint shards: 100%|███████████████████████████████████████████████████████| 50/50 [00:00<00:00, 457.56it/s]<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/wrench.png?v=14\" title=\":wrench:\" class=\"emoji\" alt=\":wrench:\" loading=\"lazy\" width=\"20\" height=\"20\"> 套用 LoRA 設定…<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/page_facing_up.png?v=14\" title=\":page_facing_up:\" class=\"emoji\" alt=\":page_facing_up:\" loading=\"lazy\" width=\"20\" height=\"20\"> 載入資料中…<br>\n<img src=\"https://emoji.discourse-cdn.com/apple/bullseye.png?v=14\" title=\":bullseye:\" class=\"emoji\" alt=\":bullseye:\" loading=\"lazy\" width=\"20\" height=\"20\"> 建立 Trainer…<br>\n/home/ubuntu/CNC代碼定義訓練黨TEST.py:68: FutureWarning: tokenizer is deprecated and will be removed in version 5.0.0 for Trainer.<strong>init</strong>. 
Use processing_class instead.<br>\ntrainer = Trainer(<br>\nTraceback (most recent call last):<br>\nFile “/home/ubuntu/CNC代碼定義訓練黨TEST.py”, line 68, in<br>\ntrainer = Trainer(<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/utils/deprecation.py”, line 172, in wrapped_func<br>\nreturn func(*args, **kwargs)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/trainer.py”, line 614, in init<br>\nself._move_model_to_device(model, args.device)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/trainer.py”, line 901, in _move_model_to_device<br>\nmodel = model.to(device)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 1355, in to<br>\nreturn self._apply(convert)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 915, in _apply<br>\nmodule._apply(fn)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 915, in _apply<br>\nmodule._apply(fn)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 915, in _apply<br>\nmodule._apply(fn)<br>\n[Previous line repeated 4 more times]<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 942, in _apply<br>\nparam_applied = fn(param)<br>\nFile “/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py”, line 1341, in convert<br>\nreturn t.to(<br>\ntorch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacity of 79.19 GiB of which 359.06 MiB is free. Including non-PyTorch memory, this process has 78.83 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 8.21 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (<a href=\"https://pytorch.org/docs/stable/notes/cuda.html#environment-variables\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">CUDA semantics — PyTorch 2.7 documentation</a>)</p>\n<p>Would anyone kindly offer any suggestions or best practices to address this issue? Are there specific parameters I should consider adjusting (e.g., batch size, gradient checkpointing, LoRA rank, etc.) to make it fit within the memory constraints?<br>\nOr is this simply a case of hardware limitation, and even 80GB VRAM is not enough for this model.And i have tried the QLORA method,encountering the same question.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-04T17:28:21.682Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 373,
"reads": 11,
"readers_count": 10,
"score": 1782,
"yours": false,
"topic_id": 153432,
"topic_slug": "inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning",
"display_username": "HSU Chin wei",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://pytorch.org/docs/stable/notes/cuda.html#environment-variables",
"internal": false,
"reflection": false,
"title": "CUDA semantics — PyTorch 2.7 documentation",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 92799,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning/153432/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219762,
"name": "Andrew J tokar",
"username": "Zelgodiz",
"avatar_template": "/user_avatar/discuss.huggingface.co/zelgodiz/{size}/45662_2.png",
"created_at": "2025-05-05T04:06:43.896Z",
"cooked": "<p>It looks like you’re running into a <strong>CUDA out of memory</strong> issue while fine-tuning <strong>LLAMA-4-Scout-17B-16E-Instruct</strong> using LoRA on an <strong>H100 GPU with 80GB VRAM</strong>. Even though 80GB is a lot, large models like this can still exceed memory limits, especially with high batch sizes and gradient accumulation steps.</p>\n<h3><a name=\"p-219762-possible-causes-1\" class=\"anchor\" href=\"#p-219762-possible-causes-1\"></a><strong>Possible Causes</strong></h3>\n<ol>\n<li><strong>Batch Size Too Large</strong> – Even though you set <code>per_device_train_batch_size=1</code>, your <code>gradient_accumulation_steps=512</code> might be causing excessive memory usage.</li>\n<li><strong>LoRA Rank & Target Modules</strong> – The LoRA rank (<code>r=8</code>) and target modules (<code>q_proj</code>, <code>v_proj</code>) might be consuming more memory than expected.</li>\n<li><strong>Token Length Too High</strong> – Your <code>max_length=4196</code> is quite large, leading to high memory consumption per sample.</li>\n<li><strong>Memory Fragmentation</strong> – Even though you set <code>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True</code>, fragmentation might still be an issue.</li>\n</ol>\n<h3><a name=\"p-219762-potential-fixes-2\" class=\"anchor\" href=\"#p-219762-potential-fixes-2\"></a><strong>Potential Fixes</strong></h3>\n<h4><a name=\"p-219762-h-1-reduce-gradient-accumulation-steps-3\" class=\"anchor\" href=\"#p-219762-h-1-reduce-gradient-accumulation-steps-3\"></a><strong>1. Reduce Gradient Accumulation Steps</strong></h4>\n<p>Try lowering <code>gradient_accumulation_steps</code> to <strong>128 or 64</strong> instead of 512:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">training_args = TrainingArguments(\n output_dir=\"./lora_tmp\",\n num_train_epochs=3,\n per_device_train_batch_size=1,\n gradient_accumulation_steps=64, # Reduce from 512\n learning_rate=2e-4,\n logging_steps=10,\n save_strategy=\"no\",\n)\n</code></pre>\n<p>This will reduce memory usage significantly.</p>\n<h4><a name=\"p-219762-h-2-lower-token-length-4\" class=\"anchor\" href=\"#p-219762-h-2-lower-token-length-4\"></a><strong>2. Lower Token Length</strong></h4>\n<p>Try reducing <code>max_length</code> from <strong>4196</strong> to <strong>2048</strong>:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">tokenized_inputs = tokenizer(\n example[\"text\"],\n truncation=True,\n padding=\"max_length\",\n max_length=2048 # Reduce from 4196\n)\n</code></pre>\n<p>This will cut memory usage per sample in half.</p>\n<h4><a name=\"p-219762-h-3-enable-gradient-checkpointing-5\" class=\"anchor\" href=\"#p-219762-h-3-enable-gradient-checkpointing-5\"></a><strong>3. Enable Gradient Checkpointing</strong></h4>\n<p>This helps reduce memory usage by recomputing activations instead of storing them:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">model.gradient_checkpointing_enable()\n</code></pre>\n<h4><a name=\"p-219762-h-4-use-torchcompile-for-optimization-6\" class=\"anchor\" href=\"#p-219762-h-4-use-torchcompile-for-optimization-6\"></a><strong>4. Use <code>torch.compile()</code> for Optimization</strong></h4>\n<p>If you’re using PyTorch 2.0+, try compiling the model for better memory efficiency:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">model = torch.compile(model)\n</code></pre>\n<h4><a name=\"p-219762-h-5-offload-model-to-cpu-7\" class=\"anchor\" href=\"#p-219762-h-5-offload-model-to-cpu-7\"></a><strong>5. 
Offload Model to CPU</strong></h4>\n<p>If memory is still an issue, offload parts of the model to CPU using <code>accelerate</code>:</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from accelerate import infer_auto_device_map, dispatch_model\n\ndevice_map = infer_auto_device_map(model, max_memory={\"cuda\": \"75GB\", \"cpu\": \"20GB\"})\nmodel = dispatch_model(model, device_map=device_map)\n</code></pre>\n<p>This ensures that only essential parts stay on the GPU.</p>\n<h3><a name=\"p-219762-next-steps-8\" class=\"anchor\" href=\"#p-219762-next-steps-8\"></a><strong>Next Steps</strong></h3>\n<p>Try these adjustments one by one and monitor memory usage. If the issue persists, consider switching to <strong>QLoRA</strong> with <strong>4-bit quantization</strong>, which significantly reduces VRAM usage.</p>\n<p>Let me know if you need help implementing these fixes! <img src=\"https://emoji.discourse-cdn.com/apple/rocket.png?v=14\" title=\":rocket:\" class=\"emoji\" alt=\":rocket:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-05T04:06:43.896Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 24,
"reads": 10,
"readers_count": 9,
"score": 141.8,
"yours": false,
"topic_id": 153432,
"topic_slug": "inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning",
"display_username": "Andrew J tokar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90984,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning/153432/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220836,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-09T15:08:51.365Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-09T15:08:51.365Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 15.6,
"yours": false,
"topic_id": 153432,
"topic_slug": "inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/inquiry-regarding-out-of-memory-issue-during-lora-fine-tuning/153432/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am a student currently working on training the LLAMA-4-Scout-17B-16E-Instruct model using LoRA, running on an H100 GPU with 80GB VRAM (on Lambda Labs). However, I have encountered an out of memory error during the training process. I understand that this might fall slightly outside the scope of the course, but despite extensive research and reviewing various community discussions, I have not been able to resolve the issue.</p>
<p>Here is a brief outline of my setup:</p>
<p>Hardware: H100 (80GB VRAM)</p>
<p>Model: LLAMA-4-Scout-17B-16E-Instruct (downloaded from the unsloth repo on Hugging Face)</p>
<p>Training Method: LoRA</p>
<p>Error: CUDA out of memory</p>
<p>Code snippet:</p>
<pre data-code-wrap="python"><code class="lang-python">import torch
from transformers import AutoTokenizer, TrainingArguments, Trainer, DataCollatorForLanguageModeling, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset
from accelerate import dispatch_model
from accelerate import Accelerator
from accelerate.utils import get_balanced_memory, infer_auto_device_map
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

model_path = "/home/ubuntu/llama4"
dataset_path = "llama_nc_instruction_train.jsonl"
output_dir = "./merged_llama4_nccode"

print("🧠 loading tokenizer…")
tokenizer = AutoTokenizer.from_pretrained(model_path)

print("📦 loading model… (using safetensors)")
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
)

print("🔧 applying LoRA settings…")
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,  # some people use 8
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

model = get_peft_model(model, lora_config)

print("📄 loading data…")
dataset = load_dataset("json", data_files=dataset_path, split="train")

def tokenize(example):
    tokenized_inputs = tokenizer(
        example["text"],
        truncation=True,
        padding="max_length",
        max_length=4196
    )
    return tokenized_inputs

tokenized_dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

print("🎯 setting up Trainer…")
training_args = TrainingArguments(
    output_dir="./lora_tmp",
    num_train_epochs=3,
    per_device_train_batch_size=1,  # some people use 64
    gradient_accumulation_steps=512,
    learning_rate=2e-4,
    logging_steps=10,
    save_strategy="no",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)

print("🚀 training…")
trainer.train()

print("💾 merging LoRA weights…")
model = model.merge_and_unload()

print("📦 saving model to:", output_dir)
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)

print("✅ done!")
</code></pre>
<p>And this is the error:</p>
<pre><code class="lang-auto">🧠 loading tokenizer…
📦 loading model… (using safetensors)
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 50/50 [00:00&lt;00:00, 457.56it/s]
🔧 applying LoRA settings…
📄 loading data…
🎯 setting up Trainer…
/home/ubuntu/CNC代碼定義訓練黨TEST.py:68: FutureWarning: tokenizer is deprecated and will be removed in version 5.0.0 for Trainer.__init__. Use processing_class instead.
  trainer = Trainer(
Traceback (most recent call last):
  File "/home/ubuntu/CNC代碼定義訓練黨TEST.py", line 68, in &lt;module&gt;
    trainer = Trainer(
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
    return func(*args, **kwargs)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/trainer.py", line 614, in __init__
    self._move_model_to_device(model, args.device)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/transformers/trainer.py", line 901, in _move_model_to_device
    model = model.to(device)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1355, in to
    return self._apply(convert)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 942, in _apply
    param_applied = fn(param)
  File "/home/ubuntu/llama_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1341, in convert
    return t.to(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacity of 79.19 GiB of which 359.06 MiB is free. Including non-PyTorch memory, this process has 78.83 GiB memory in use. Of the allocated memory 78.38 GiB is allocated by PyTorch, and 8.21 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
</code></pre>
<p>Would anyone kindly offer any suggestions or best practices to address this issue? Are there specific parameters I should consider adjusting (e.g., batch size, gradient checkpointing, LoRA rank, etc.) to make it fit within the memory constraints?<br>
Or is this simply a hardware limitation, where even 80GB of VRAM is not enough for this model? I have also tried the QLoRA method and ran into the same problem.</p>
|
<p>It looks like you’re running into a <strong>CUDA out of memory</strong> issue while fine-tuning <strong>LLAMA-4-Scout-17B-16E-Instruct</strong> using LoRA on an <strong>H100 GPU with 80GB VRAM</strong>. Even though 80GB is a lot, large models like this can still exceed memory limits, especially with high batch sizes and gradient accumulation steps.</p>
<h3><a name="p-219762-possible-causes-1" class="anchor" href="#p-219762-possible-causes-1"></a><strong>Possible Causes</strong></h3>
<ol>
<li><strong>Batch Size Too Large</strong> – Even though you set <code>per_device_train_batch_size=1</code>, your <code>gradient_accumulation_steps=512</code> might be causing excessive memory usage.</li>
<li><strong>LoRA Rank & Target Modules</strong> – The LoRA rank (<code>r=8</code>) and target modules (<code>q_proj</code>, <code>v_proj</code>) might be consuming more memory than expected.</li>
<li><strong>Token Length Too High</strong> – Your <code>max_length=4196</code> is quite large, leading to high memory consumption per sample.</li>
<li><strong>Memory Fragmentation</strong> – Even though you set <code>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True</code>, fragmentation might still be an issue (see the diagnostic snippet after this list).</li>
</ol>
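<p>To tell genuine over-allocation apart from fragmentation, PyTorch's allocator counters can be printed right before the failing step (a debugging aid added here, not part of the original fix list):</p>
<pre data-code-wrap="python"><code class="lang-python">import torch

# A large gap between "reserved" and "allocated" points at fragmentation
# rather than tensors actually filling the card.
allocated = torch.cuda.memory_allocated() / 1024**3
reserved = torch.cuda.memory_reserved() / 1024**3
print(f"allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB")
print(torch.cuda.memory_summary())  # detailed per-pool breakdown
</code></pre>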
<h3><a name="p-219762-potential-fixes-2" class="anchor" href="#p-219762-potential-fixes-2"></a><strong>Potential Fixes</strong></h3>
<h4><a name="p-219762-h-1-reduce-gradient-accumulation-steps-3" class="anchor" href="#p-219762-h-1-reduce-gradient-accumulation-steps-3"></a><strong>1. Reduce Gradient Accumulation Steps</strong></h4>
<p>Try lowering <code>gradient_accumulation_steps</code> to <strong>128 or 64</strong> instead of 512:</p>
<pre data-code-wrap="python"><code class="lang-python">training_args = TrainingArguments(
output_dir="./lora_tmp",
num_train_epochs=3,
per_device_train_batch_size=1,
gradient_accumulation_steps=64, # Reduce from 512
learning_rate=2e-4,
logging_steps=10,
save_strategy="no",
)
</code></pre>
<p>This will reduce memory usage significantly.</p>
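<p>For context, gradient accumulation trades optimizer updates for memory headroom; the effective batch size per update is the simple product below (plain arithmetic added here, not part of the original answer), so dropping from 512 to 64 also changes the update dynamics:</p>
<pre data-code-wrap="python"><code class="lang-python"># Effective batch size seen by the optimizer per update step
per_device_train_batch_size = 1
gradient_accumulation_steps = 64
num_gpus = 1  # single H100 in this setup
print(per_device_train_batch_size * gradient_accumulation_steps * num_gpus)  # 64
</code></pre>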
<h4><a name="p-219762-h-2-lower-token-length-4" class="anchor" href="#p-219762-h-2-lower-token-length-4"></a><strong>2. Lower Token Length</strong></h4>
<p>Try reducing <code>max_length</code> from <strong>4196</strong> to <strong>2048</strong>:</p>
<pre data-code-wrap="python"><code class="lang-python">tokenized_inputs = tokenizer(
example["text"],
truncation=True,
padding="max_length",
max_length=2048 # Reduce from 4196
)
</code></pre>
<p>This will cut memory usage per sample in half.</p>
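<p>Since the script already passes a <code>DataCollatorForLanguageModeling</code>, a further variant (an option added here, assuming variable-length batches are acceptable) is to truncate only and let the collator pad each batch to its longest sample:</p>
<pre data-code-wrap="python"><code class="lang-python">def tokenize(example):
    # No fixed padding: DataCollatorForLanguageModeling pads per batch,
    # so short samples stop costing a full max_length of activations.
    return tokenizer(example["text"], truncation=True, max_length=2048)
</code></pre>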
<h4><a name="p-219762-h-3-enable-gradient-checkpointing-5" class="anchor" href="#p-219762-h-3-enable-gradient-checkpointing-5"></a><strong>3. Enable Gradient Checkpointing</strong></h4>
<p>This helps reduce memory usage by recomputing activations instead of storing them:</p>
<pre data-code-wrap="python"><code class="lang-python">model.gradient_checkpointing_enable()
</code></pre>
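<p>When training through <code>Trainer</code>, the same behavior can also be requested declaratively in <code>TrainingArguments</code>, which should be equivalent here (flag shown in isolation; keep the other arguments as before):</p>
<pre data-code-wrap="python"><code class="lang-python">training_args = TrainingArguments(
    output_dir="./lora_tmp",
    gradient_checkpointing=True,  # recompute activations in the backward pass
    # ...remaining arguments unchanged...
)
</code></pre>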
<h4><a name="p-219762-h-4-use-torchcompile-for-optimization-6" class="anchor" href="#p-219762-h-4-use-torchcompile-for-optimization-6"></a><strong>4. Use <code>torch.compile()</code> for Optimization</strong></h4>
<p>If you’re using PyTorch 2.0+, try compiling the model for better memory efficiency:</p>
<pre data-code-wrap="python"><code class="lang-python">model = torch.compile(model)
</code></pre>
<h4><a name="p-219762-h-5-offload-model-to-cpu-7" class="anchor" href="#p-219762-h-5-offload-model-to-cpu-7"></a><strong>5. Offload Model to CPU</strong></h4>
<p>If memory is still an issue, offload parts of the model to CPU using <code>accelerate</code>:</p>
<pre data-code-wrap="python"><code class="lang-python">from accelerate import infer_auto_device_map, dispatch_model
device_map = infer_auto_device_map(model, max_memory={"cuda": "75GB", "cpu": "20GB"})
model = dispatch_model(model, device_map=device_map)
</code></pre>
<p>This ensures that only essential parts stay on the GPU.</p>
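<p>The same budget can also be applied once at load time, which avoids first materializing the full model and then dispatching it; a sketch using the loading call from the question:</p>
<pre data-code-wrap="python"><code class="lang-python">model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",                        # let accelerate place the layers
    max_memory={0: "75GiB", "cpu": "20GiB"},  # leave headroom on the GPU
)
</code></pre>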
<h3><a name="p-219762-next-steps-8" class="anchor" href="#p-219762-next-steps-8"></a><strong>Next Steps</strong></h3>
<p>Try these adjustments one by one and monitor memory usage. If the issue persists, consider switching to <strong>QLoRA</strong> with <strong>4-bit quantization</strong>, which significantly reduces VRAM usage.</p>
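<p>For reference, a minimal QLoRA sketch for this setup (assuming <code>bitsandbytes</code> is installed; <code>model_path</code> and <code>lora_config</code> are the ones from the question):</p>
<pre data-code-wrap="python"><code class="lang-python">import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads
model = get_peft_model(model, lora_config)
</code></pre>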
<p>Let me know if you need help implementing these fixes! <img src="https://emoji.discourse-cdn.com/apple/rocket.png?v=14" title=":rocket:" class="emoji" alt=":rocket:" loading="lazy" width="20" height="20"></p>
|
Error in Autotrain Training
|
https://discuss.huggingface.co/t/error-in-autotrain-training/154069
| 154,069
| 5
|
2025-05-08T07:41:32.858000Z
|
[
{
"id": 220520,
"name": "Lukas",
"username": "LuuWee",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/l/4af34b/{size}.png",
"created_at": "2025-05-08T07:41:32.922Z",
"cooked": "<p>Hello everyone I am very new and im experimenting with the Huggingface Autotrain UI but im having a little trouble getting the training started. I am trying to train a meta-llama/Llama-3.1-8b-Instruct Model with an example dataset that i found<br>\nalpaca1k.csv<br>\nwhich i uploaded as a local file.<br>\nI have not made any changes to any other parameters. When i then click start training i get an error.</p>\n<p>ERROR | 2025-05-08 07:39:20 | autotrain.trainers.common:wrapper:215 - train has failed due to an exception: Traceback (most recent call last):<br>\nFile “/app/env/lib/python3.10/site-packages/autotrain/trainers/common.py”, line 212, in wrapper<br>\nreturn func(*args, **kwargs)<br>\nFile “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/<strong>main</strong>.py”, line 28, in train<br>\ntrain_sft(config)<br>\nFile “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/train_clm_sft.py”, line 27, in train<br>\nmodel = utils.get_model(config, tokenizer)<br>\nFile “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/utils.py”, line 943, in get_model<br>\nmodel = AutoModelForCausalLM.from_pretrained(<br>\nFile “/app/env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py”, line 564, in from_pretrained<br>\nreturn model_class.from_pretrained(<br>\nFile “/app/env/lib/python3.10/site-packages/transformers/modeling_utils.py”, line 3620, in from_pretrained<br>\nhf_quantizer.validate_environment(<br>\nFile “/app/env/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py”, line 83, in validate_environment<br>\nvalidate_bnb_backend_availability(raise_exception=True)<br>\nFile “/app/env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py”, line 559, in validate_bnb_backend_availability<br>\nreturn _validate_bnb_cuda_backend_availability(raise_exception)<br>\nFile “/app/env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py”, line 537, in _validate_bnb_cuda_backend_availability<br>\nraise RuntimeError(log_msg)<br>\nRuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at <a href=\"https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend\" class=\"inline-onebox\">Installation Guide</a></p>\n<p>ERROR | 2025-05-08 07:39:20 | autotrain.trainers.common:wrapper:216 - CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at <a href=\"https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend\" class=\"inline-onebox\">Installation Guide</a><br>\nINFO | 2025-05-08 07:39:20 | autotrain.trainers.common:pause_space:156 - Pausing space…</p>\n<p>I not sure how i can fix this. Any help is appreciated</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-08T07:41:32.922Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 231,
"reads": 11,
"readers_count": 10,
"score": 1147.2,
"yours": false,
"topic_id": 154069,
"topic_slug": "error-in-autotrain-training",
"display_username": "Lukas",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend",
"internal": false,
"reflection": false,
"title": "Installation Guide",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93248,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/error-in-autotrain-training/154069/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220527,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-08T08:06:56.954Z",
"cooked": "<p>In some cases, the problem can be resolved by installing bitsandbytes as indicated in the error message. However, in other cases, reinstalling PyTorch and the CUDA Toolkit may be necessary.</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1384\">\n <header class=\"source\">\n\n <a href=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1384\" target=\"_blank\" rel=\"noopener\">github.com/bitsandbytes-foundation/bitsandbytes</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1384\" target=\"_blank\" rel=\"noopener\">An error occurred: CUDA is required but not available for bitsandbytes.</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-10-09\" data-time=\"17:11:44\" data-timezone=\"UTC\">05:11PM - 09 Oct 24 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/GaoDalie\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/c/e/ceade68577cb05ab525d6eaba2ddbd652720f7e1.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"4E9769\">\n GaoDalie\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n CUDA Setup\n </span>\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n Proposing to Close\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">### System Info\n\nplease I have tried many ways but I couldn't address the issues<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">, could anyone please give me a hint or help me to solve this bug because I couldn't figure it where the problem coming from \n\nnote: I have installed Cuda in my env, but I am still getting an error \n\nhere is the error : \n\nAn error occurred: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. 
Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend\n\nthank you so much\n\n \n\n### Reproduction\n\nimport torch\nfrom transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig\n \n# Hugging Face model id\ntry:\n model_id = \"Qwen/Qwen2-VL-7B-Instruct\" \n \n # BitsAndBytesConfig int-4 config\n bnb_config = BitsAndBytesConfig(\n load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type=\"nf4\", bnb_4bit_compute_dtype=torch.bfloat16\n )\n \n # Load model and tokenizer\n model = AutoModelForVision2Seq.from_pretrained(\n model_id,\n device_map=\"auto\",\n torch_dtype=torch.bfloat16,\n quantization_config=bnb_config\n )\nexcept Exception as e:\n print(f\"An error occurred: {e}\")\n\n### Expected behavior\n\nplease I have tried many ways but I couldn't address the issues, could anyone please give me a hint or help me to solve this bug because I couldn't figure it where the problem coming from \n\nnote: I have installed Cuda in my env, but I am still getting an error \n\nhere is the error : \n\nAn error occurred: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend\n\nthank you so much</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1093\">\n <header class=\"source\">\n\n <a href=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1093\" target=\"_blank\" rel=\"noopener\">github.com/bitsandbytes-foundation/bitsandbytes</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1093\" target=\"_blank\" rel=\"noopener\">RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-02-27\" data-time=\"14:22:41\" data-timezone=\"UTC\">02:22PM - 27 Feb 24 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-08-07\" data-time=\"09:53:03\" data-timezone=\"UTC\">09:53AM - 07 Aug 24 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/SumaiyaSultan2002\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/a/f/af26d1401e3888796c7949620d0535f2831416a5.png\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"D3CAE6\">\n 
SumaiyaSultan2002\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">### System Info\n\n```\nThe `load_in_4bit` and `load_in_8bit` arguments are depr<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">ecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.\nTraceback (most recent call last):\n File \"c:\\SQl coder\\app.py\", line 22, in <module>\n model = AutoModelForCausalLM.from_pretrained(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 563, in from_pretrained\n return model_class.from_pretrained(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 3026, in from_pretrained\n hf_quantizer.validate_environment(\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\quantizers\\quantizer_bnb_8bit.py\", line 62, in validate_environment\n raise ImportError(\nImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`\n(sqlenv) PS C:\\SQl coder> pip install -i https://pypi.org/simple/ bitsandbytes\nLooking in indexes: https://pypi.org/simple/, https://pypi.ngc.nvidia.com\nCollecting bitsandbytes\n Downloading bitsandbytes-0.42.0-py3-none-any.whl.metadata (9.9 kB)\nRequirement already satisfied: scipy in c:\\sql coder\\sqlenv\\lib\\site-packages (from bitsandbytes) (1.12.0)\nRequirement already satisfied: numpy<1.29.0,>=1.22.4 in c:\\sql coder\\sqlenv\\lib\\site-packages (from scipy->bitsandbytes) (1.26.4)\nDownloading bitsandbytes-0.42.0-py3-none-any.whl (105.0 MB)\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 MB 6.2 MB/s eta 0:00:00\nInstalling collected packages: bitsandbytes\nSuccessfully installed bitsandbytes-0.42.0\n(sqlenv) PS C:\\SQl coder> & \"c:/SQl coder/sqlenv/Scripts/python.exe\" \"c:/SQl coder/app.py\"\nThe `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.\nFalse\n\n===================================BUG REPORT===================================\nC:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\cuda_setup\\main.py:167: UserWarning: Welcome to bitsandbytes. For bug reports, please run\n\npython -m bitsandbytes\n\n\n warn(msg)\n================================================================================\nCUDA_SETUP: WARNING! libcudart.so not found in any environmental path. 
Searching in backup paths...\nThe following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}\nDEBUG: Possible options found for libcudart.so: set()\nCUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.\nCUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md\nCUDA SETUP: Loading binary C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\libbitsandbytes_cuda118.so...\nargument of type 'WindowsPath' is not iterable\nCUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.\nCUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable \nCUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null\nCUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a\nCUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc \nCUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.\nCUDA SETUP: Solution 2a): Download CUDA install script: wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_install.sh\nCUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.\nCUDA SETUP: Solution 2b): For example, \"bash cuda_install.sh 113 ~/local/\" will download CUDA 11.3 and install into the folder ~/local\nTraceback (most recent call last):\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\utils\\import_utils.py\", line 1383, in _get_module\n return importlib.import_module(\".\" + module_name, self.__name__)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\sumai\\anaconda\\Lib\\importlib\\__init__.py\", line 126, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"<frozen importlib._bootstrap>\", line 1204, in _gcd_import\n File \"<frozen importlib._bootstrap>\", line 1176, in _find_and_load\n File \"<frozen importlib._bootstrap>\", line 1147, in _find_and_load_unlocked\n File \"<frozen importlib._bootstrap>\", line 690, in _load_unlocked\n File \"<frozen importlib._bootstrap_external>\", line 940, in exec_module\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\integrations\\bitsandbytes.py\", line 11, in <module>\n import bitsandbytes as bnb\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\__init__.py\", line 6, in <module>\n from . import cuda_setup, utils, research\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\research\\__init__.py\", line 1, in <module>\n from . 
import nn\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\research\\nn\\__init__.py\", line 1, in <module>\n from .modules import LinearFP8Mixed, LinearFP8Global\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\research\\nn\\modules.py\", line 8, in <module>\n from bitsandbytes.optim import GlobalOptimManager\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\optim\\__init__.py\", line 6, in <module>\n from bitsandbytes.cextension import COMPILED_WITH_CUDA\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\bitsandbytes\\cextension.py\", line 20, in <module>\n raise RuntimeError('''\nRuntimeError:\n CUDA Setup failed despite GPU being available. Please run the following command to get more information:\n\n python -m bitsandbytes\n\n Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them\n to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes\n and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"c:\\SQl coder\\app.py\", line 22, in <module>\n model = AutoModelForCausalLM.from_pretrained(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 563, in from_pretrained\n return model_class.from_pretrained(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 3391, in from_pretrained\n hf_quantizer.preprocess_model(\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\quantizers\\base.py\", line 166, in preprocess_model\n return self._process_model_before_weight_loading(model, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\quantizers\\quantizer_bnb_8bit.py\", line 219, in _process_model_before_weight_loading\n from ..integrations import get_keys_to_not_convert, replace_with_bnb_linear\n File \"<frozen importlib._bootstrap>\", line 1229, in _handle_fromlist\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\utils\\import_utils.py\", line 1373, in __getattr__\n module = self._get_module(self._class_to_module[name])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\SQl coder\\sqlenv\\Lib\\site-packages\\transformers\\utils\\import_utils.py\", line 1385, in _get_module\n raise RuntimeError(\nRuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):\n\n CUDA Setup failed despite GPU being available. Please run the following command to get more information:\n\n python -m bitsandbytes\n\n Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them\n to your LD_LIBRARY_PATH. 
If you suspect a bug, please take the information from python -m bitsandbytes\n and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues\n```\n\n### Reproduction\n\nhttps://github.com/defog-ai/sqlcoder/blob/main/defog_sqlcoder_colab.ipynb\n\n### Expected behavior\n\ni want to run defog.ai SQLCoder-7b-2`import streamlit as st\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport sqlparse\nimport sqlite3\n\n# Model loading and configuration\nmodel_name = \"defog/sqlcoder-7b-2\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n\nif torch.cuda.is_available():\n available_memory = torch.cuda.memory_allocated()\n if available_memory > 15e9:\n model = AutoModelForCausalLM.from_pretrained(\n model_name,\n trust_remote_code=True,\n torch_dtype=torch.float16,\n device_map=\"auto\",\n use_cache=True,\n )\n else:\n model = AutoModelForCausalLM.from_pretrained(\n model_name,\n trust_remote_code=True,\n load_in_8bit=True,\n device_map=\"auto\",\n torch_dtype=torch.float16,\n use_cache=True,\n )\nelse:\n model = AutoModelForCausalLM.from_pretrained(\n model_name, trust_remote_code=True, use_cache=True\n )\n\nprompt = \"\"\"### Task\nGenerate a SQL query to answer [QUESTION]{question}[/QUESTION]\n\n###Instructions\n- if the question cannot be answered given the database schema, return \"I do not know\"\n- Every helpdesk ticket is associated to a space or an equipment mandatorily.\n- Every equipment or space is related to a Block in a site\n\n\n\n### Database Schema\nCREATE TABLE website_support_ticket(id INTEGER PRIMARY KEY,\nsla_active BOOLEAN, --SLA is active if it is true else it is inactive\nasset_id INTEGER, --Space for which the ticket is created\nequipment_id INTEGER, --Equipment for which the ticket is created\nequipment_location_id INTEGER, --Space where the Equipment is located\nmaintenance_team_id INTEGER, --Maintenance Team that is responsible for the ticket actions\nat_start_mro BOOLEAN, --Photo is required to start a work order\nat_done_mro BOOLEAN, --Photo is required to close a work order\nat_review_mro BOOLEAN, --Photo is required to review a work order\nmro_order_id INTEGER, --Order related to the ticket\nemployee_id INTEGER, --Employee related to the ticket\npause_reason_id INTEGER, --Reason for Pause\nequip_block_id INTEGER, --Block of an equipment for which the ticket is created\nspace_block_id INTEGER, --Block of an space for which the ticket is created\nrequestee_id INTEGER, --Requestor of the ticket\nregion_id INTEGER, --Region of the ticket\nis_reopen BOOLEAN, --Ticket was reopned if this is set to True\nreopen_count INTEGER, --Number of times this ticket was reopened\non_hold_date TIMESTAMP WITHOUT TIME ZONE, --Date on which the ticket was moved to On-Hold\ndoc_count INTEGER, --Count of Attachments\nsla_end_date TIMESTAMP WITHOUT TIME ZONE, --Planned End date for SLA\npriority_id INTEGER, --Priority of the Ticket\ncategory_id INTEGER, --Category of the Problem\nsub_category_id INTEGER, --Sub Category of the Problem\nstate_id INTEGER, --Status of the ticket (Open, InProgress, Closed, Paused)\ncompany_id INTEGER, --Company of the ticket\nclose_time TIMESTAMP WITHOUT TIME ZONE, --Ticket Closed Date time\nclosed_by_id INTEGER, --Technician who closed the ticet\nticket_type CHARACTER VARYING, --Proactive or Reactive\nsla_status CHARACTER VARYING, --To show within SLA or SLA elapsed\nstate_category_id CHARACTER VARYING, --Category to which the Status belongs to\nsubject CHARACTER VARYING, --Subject line of the Problem\nissue_type 
CHARACTER VARYING, --Issue Type of the Ticket\nclose_comment CHARACTER VARYING, --Comments that was enetered while closing the ticket\ncurrent_escalation_level CHARACTER VARYING, --To show the current escalationlevel\ntype_category CHARACTER VARYING, --Type category of the ticket\nstate_name CHARACTER VARYING, --State to which the site belongs to\ncity_name CHARACTER VARYING, --City to which the site belongs to\nlast_commented_by CHARACTER VARYING, --Comment\nregion CHARACTER VARYING, --Region of the ticket\nmro_state CHARACTER VARYING, --Status of the Work order\n\n);\n\nCREATE TABLE res_company (\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(20)\n);\n\nCREATE TABLE mro_maintenance_team(\n\tid INTEGER INTEGER PRIMARY KEY,\n\tname VARCHAR VARCHAR(20)\n);\n\nCREATE TABLE mro_equipment_location(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50)\n);\n\nCREATE TABLE mro_equipment(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50)\n);\n\nCREATE TABLE website_support_ticket_state(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50));\n\nCREATE TABLE mro_order(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50));\n\nCREATE TABLE website_support_ticket_category(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50))\n\t;\n\nCREATE TABLE website_support_ticket_subcategory(\n\tid INTEGER PRIMARY KEY,\n\tname VARCHAR(50));\n\nCREATE TABLE website_support_ticket_priority(\n id INTEGER PRIMARY KEY,\n name VARCHAR(50));\n\n\n\n-website_support_ticket.company_id can be joined with res_company.id\n-website_support_ticket.maintenance_team_id can be joined with mro_maintenance_team.id\n-website_support_ticket.asset_id can be joined with mro_equipment_location.id\n-website_support_ticket.equipment_id can be joined with mro_equipment.id\n-website_support_ticket.state_id can be joined with website_support_ticket_state.id\n-website_support_ticket.mro_order_id can be joined with mro_order.id\n-website_support_ticket.category_id can be joined with website_support_ticket_category.id\n-website_support_ticket.sub_category_id can be joined with website_support_ticket_subcategory.id\n-website_support_ticket.priority_id can be joined with website_support_ticket_priority.id\n\n\n\n\n### Answer\nGiven the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]\n[SQL]\n\"\"\"\ndef generate_query(question):\n updated_prompt = prompt.format(question=question)\n inputs = tokenizer(updated_prompt, return_tensors=\"pt\").to(\"cuda\")\n generated_ids = model.generate(\n **inputs,\n num_return_sequences=1,\n eos_token_id=tokenizer.eos_token_id,\n pad_token_id=tokenizer.eos_token_id,\n max_new_tokens=400,\n do_sample=False,\n num_beams=1,\n )\n outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\n\n torch.cuda.empty_cache()\n torch.cuda.synchronize()\n return sqlparse.format(outputs[0].split(\"[SQL]\")[-1], reindent=True)\n\n\ndef execute_sql(question, db_file):\n query = generate_query(question)\n\n conn = sqlite3.connect(db_file)\n cursor = conn.cursor()\n\n try:\n cursor.execute(query)\n\n # Fetch column names\n columns = [col[0] for col in cursor.description]\n\n # Fetch results into a pandas DataFrame\n df = pd.DataFrame(cursor.fetchall(), columns=columns)\n\n # Print the result as a table\n return df.to_markdown(index=False)\n except sqlite3.OperationalError as e:\n if \"ILIKE\" in str(e):\n query = query.replace(\"ILIKE\", \"LIKE\")\n return execute_query(query, db_file)\n except sqlite3.Error as e:\n print(\"Error executing query:\", e)\n return None\n\n finally:\n cursor.close()\n 
conn.close()\n\n\n# Streamlit app\nst.title(\"SQL Code Generator\")\n\n# Input field for the question\nuser_question = st.text_input(\"Enter your question about the database:\")\n\n# Button to generate the SQL query\nif st.button(\"Generate SQL\"):\n if user_question:\n # Generate SQL query and display it\n generated_sql = generate_query(user_question)\n st.write(\"Generated SQL Query:\")\n st.code(generated_sql)\n\n # Connect to the database (replace with your database file path)\n db_file = \"your_database.db\"\n if db_file:\n # Execute the query and display the results\n result = execute_sql(user_question, db_file)\n if result:\n st.write(\"Results:\")\n st.markdown(result)\n else:\n st.write(\"No results found.\")\n else:\n st.warning(\"Please enter a question.\")\n\n`\n\nthis is the code i am trying to run</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-08T08:06:56.954Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 154069,
"topic_slug": "error-in-autotrain-training",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1093",
"internal": false,
"reflection": false,
"title": "RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback): · Issue #1093 · bitsandbytes-foundation/bitsandbytes · GitHub",
"clicks": 8
},
{
"url": "https://github.com/bitsandbytes-foundation/bitsandbytes/issues/1384",
"internal": false,
"reflection": false,
"title": "An error occurred: CUDA is required but not available for bitsandbytes. · Issue #1384 · bitsandbytes-foundation/bitsandbytes · GitHub",
"clicks": 6
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/error-in-autotrain-training/154069/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
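<p>To make the two failure modes above concrete (bitsandbytes simply missing, versus a CPU-only PyTorch build that needs a CUDA-enabled reinstall), a short diagnostic along these lines can help. This is an illustrative sketch added by the editor, not code from the thread:</p>
<pre data-code-wrap="python"><code class="lang-python">import importlib.util

import torch

# A CUDA-enabled PyTorch build reports a CUDA version; CPU-only builds report None.
print("torch version:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("CUDA available at runtime:", torch.cuda.is_available())
# If torch.version.cuda is None, reinstall a CUDA build of PyTorch, for example:
#   pip install torch --index-url https://download.pytorch.org/whl/cu121

# Check that bitsandbytes is installed before transformers tries to import it.
if importlib.util.find_spec("bitsandbytes") is None:
    print("bitsandbytes is not installed; try: pip install bitsandbytes")
else:
    import bitsandbytes as bnb
    print("bitsandbytes version:", bnb.__version__)
</code></pre>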
{
"id": 220532,
"name": "Lukas",
"username": "LuuWee",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/l/4af34b/{size}.png",
"created_at": "2025-05-08T08:17:02.201Z",
"cooked": "<p>I found a solution by myself. Im using the free plan to there is only cpu to use and no gpu. I had to change some of the parameters. This is what i did for anyone who is wondering<br>\nDistributed Backend from ddp to deepspeed<br>\nMixed precision from fp16 to none<br>\nPEFT/LoRA from true to false</p>\n<p>Im not exactly sure what did the trick but its training now</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-08T08:17:02.201Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 7,
"readers_count": 6,
"score": 41.4,
"yours": false,
"topic_id": 154069,
"topic_slug": "error-in-autotrain-training",
"display_username": "Lukas",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93248,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/error-in-autotrain-training/154069/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220669,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-08T20:17:56.235Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-05-08T20:17:56.235Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 5,
"readers_count": 4,
"score": 36,
"yours": false,
"topic_id": 154069,
"topic_slug": "error-in-autotrain-training",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/error-in-autotrain-training/154069/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello everyone, I am very new and I’m experimenting with the Hugging Face AutoTrain UI, but I’m having a little trouble getting the training started. I am trying to train a meta-llama/Llama-3.1-8b-Instruct model with an example dataset that I found,<br>
alpaca1k.csv,<br>
which I uploaded as a local file.<br>
I have not made any changes to any other parameters. When I then click Start Training, I get an error.</p>
<p>ERROR | 2025-05-08 07:39:20 | autotrain.trainers.common:wrapper:215 - train has failed due to an exception: Traceback (most recent call last):<br>
File “/app/env/lib/python3.10/site-packages/autotrain/trainers/common.py”, line 212, in wrapper<br>
return func(*args, **kwargs)<br>
File “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/__main__.py”, line 28, in train<br>
train_sft(config)<br>
File “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/train_clm_sft.py”, line 27, in train<br>
model = utils.get_model(config, tokenizer)<br>
File “/app/env/lib/python3.10/site-packages/autotrain/trainers/clm/utils.py”, line 943, in get_model<br>
model = AutoModelForCausalLM.from_pretrained(<br>
File “/app/env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py”, line 564, in from_pretrained<br>
return model_class.from_pretrained(<br>
File “/app/env/lib/python3.10/site-packages/transformers/modeling_utils.py”, line 3620, in from_pretrained<br>
hf_quantizer.validate_environment(<br>
File “/app/env/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py”, line 83, in validate_environment<br>
validate_bnb_backend_availability(raise_exception=True)<br>
File “/app/env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py”, line 559, in validate_bnb_backend_availability<br>
return _validate_bnb_cuda_backend_availability(raise_exception)<br>
File “/app/env/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py”, line 537, in _validate_bnb_cuda_backend_availability<br>
raise RuntimeError(log_msg)<br>
RuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at <a href="https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend" class="inline-onebox">Installation Guide</a></p>
<p>ERROR | 2025-05-08 07:39:20 | autotrain.trainers.common:wrapper:216 - CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at <a href="https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend" class="inline-onebox">Installation Guide</a><br>
INFO | 2025-05-08 07:39:20 | autotrain.trainers.common:pause_space:156 - Pausing space…</p>
<p>I’m not sure how I can fix this. Any help is appreciated.</p>
|
<p>I found a solution myself. I’m using the free plan, so there is only a CPU to use and no GPU. I had to change some of the parameters. This is what I did, for anyone who is wondering:<br>
Distributed Backend from ddp to deepspeed<br>
Mixed precision from fp16 to none<br>
PEFT/LoRA from true to false</p>
<p>I’m not exactly sure what did the trick, but it’s training now</p>
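<p>The likely common thread in these changes is that fp16 mixed precision and bitsandbytes quantization assume a GPU, so on the free CPU-only tier they have to be switched off. As a minimal, illustrative sketch (an editor's addition, not the AutoTrain internals), the same idea in plain <code>transformers</code> code could look like this, using the model from the question (Hub ID <code>meta-llama/Llama-3.1-8B-Instruct</code>, which is gated and requires access approval):</p>
<pre data-code-wrap="python"><code class="lang-python">import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # gated model from the question

if torch.cuda.is_available():
    # GPU available: 4-bit bitsandbytes quantization is fine.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", quantization_config=bnb_config
    )
else:
    # CPU-only (e.g. the free tier): load without quantization or fp16,
    # mirroring "Mixed precision: none" in the workaround above.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float32
    )
</code></pre>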
|
Join the Hugging Face Discord!
|
https://discuss.huggingface.co/t/join-the-hugging-face-discord/11263
| 11,263
| 12
|
2021-11-01T15:54:32.137000Z
|
[
{
"id": 24338,
"name": "Nate Raw",
"username": "nateraw",
"avatar_template": "/user_avatar/discuss.huggingface.co/nateraw/{size}/2556_2.png",
"created_at": "2021-11-01T15:54:32.206Z",
"cooked": "<p>We’re excited to announce our official community discord server! <img src=\"https://emoji.discourse-cdn.com/apple/space_invader.png?v=12\" title=\":space_invader:\" class=\"emoji\" alt=\":space_invader:\" loading=\"lazy\" width=\"20\" height=\"20\"> We will have community events, sprints, reading clubs and more! Here’s the link to join: <a href=\"https://t.co/1n75wi976V?amp=1\" rel=\"noopener nofollow ugc\">http://hf.co/join/discord</a></p>\n<h4>\n<a name=\"once-you-join-i-highly-encourage-you-to-1\" class=\"anchor\" href=\"#once-you-join-i-highly-encourage-you-to-1\"></a>Once you join, I highly encourage you to:</h4>\n<ul>\n<li>Introduce yourself in the <span class=\"hashtag\">#introduce-yourself</span> channel</li>\n<li>Verify your Hugging Face account at the <span class=\"hashtag\">#verification</span> channel (cool stuff coming from this in the future!!)</li>\n<li>Share a picture of your pet to spread some joy in the <span class=\"hashtag\">#pets</span> channel (this one is my personal fav <img src=\"https://emoji.discourse-cdn.com/apple/heart_eyes.png?v=12\" title=\":heart_eyes:\" class=\"emoji\" alt=\":heart_eyes:\" loading=\"lazy\" width=\"20\" height=\"20\">)</li>\n</ul>\n<h4>\n<a name=\"whats-the-difference-between-the-forum-and-the-discord-2\" class=\"anchor\" href=\"#whats-the-difference-between-the-forum-and-the-discord-2\"></a>Whats the difference between the forum and the Discord?</h4>\n<ul>\n<li>The forum is meant to be a place to ask questions and get answers</li>\n<li>The Discord is meant to be a place to connect with people in the community, collaborate, host events, etc.</li>\n</ul>\n<p>So, any questions should still be directed here. <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=12\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<hr>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53.png\" data-download-href=\"/uploads/short-url/mU5XAa2PZ4rWxWfVfSRHPBHy83V.png?dl=1\" title=\"JOIN OUR DISCORD! (3)\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_690x388.png\" alt=\"JOIN OUR DISCORD! (3)\" data-base62-sha1=\"mU5XAa2PZ4rWxWfVfSRHPBHy83V\" width=\"690\" height=\"388\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_690x388.png, https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_1035x582.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_1380x776.png 2x\" data-dominant-color=\"E5CA92\"><div class=\"meta\">\n<svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">JOIN OUR DISCORD! (3)</span><span class=\"informations\">1920×1080 338 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg>\n</div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 41,
"updated_at": "2021-11-01T17:49:36.261Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 16955,
"reads": 741,
"readers_count": 740,
"score": 84843.2,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Nate Raw",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://t.co/1n75wi976V?amp=1",
"internal": false,
"reflection": false,
"title": "http://hf.co/join/discord",
"clicks": 7668
},
{
"url": "https://us1.discourse-cdn.com/hellohellohello/original/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53.png",
"internal": false,
"reflection": false,
"title": "a08727617fb64e7e043a4b0c15d375c9632c0c53.png",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/where-does-someone-go-if-they-need-help/141264/2",
"internal": true,
"reflection": true,
"title": "Where does someone go if they need help?",
"clicks": 3
},
{
"url": "https://discuss.huggingface.co/t/seeking-advice-on-fine-tuning-llms-for-generating-documents/140996/2",
"internal": true,
"reflection": true,
"title": "Seeking Advice on Fine-Tuning LLMs for Generating Documents",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/error-agent-course/147345/9",
"internal": true,
"reflection": true,
"title": "Error: agent course",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/collaborating-with-huggingface-on-python-integration/138583/2",
"internal": true,
"reflection": true,
"title": "Collaborating with HuggingFace on Python Integration?",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/how-can-i-contact-with-the-hugging-face-team/75427/5",
"internal": true,
"reflection": true,
"title": "How can I contact with the Hugging Face team?",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 198,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/1",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 3
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 24341,
"name": "Bram Vanroy",
"username": "BramVanroy",
"avatar_template": "/user_avatar/discuss.huggingface.co/bramvanroy/{size}/47360_2.png",
"created_at": "2021-11-01T17:31:27.348Z",
"cooked": "<p>From looking at the HTML, it seems that that is an empty link. I know it’s November 1st, but aren’t jokes for April 1st? <img src=\"https://emoji.discourse-cdn.com/apple/wink.png?v=12\" title=\":wink:\" class=\"emoji\" alt=\":wink:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<p>For future visitors who like to click instead of type, <a href=\"http://hf.co/join/discord\">here you go</a>.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-04-08T07:23:29.676Z",
"reply_count": 2,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 11,
"reads": 369,
"readers_count": 368,
"score": 183.8,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Bram Vanroy",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "http://hf.co/join/discord",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 478
}
],
"read": true,
"user_title": "",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 23,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 3
},
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 24344,
"name": "Nate Raw",
"username": "nateraw",
"avatar_template": "/user_avatar/discuss.huggingface.co/nateraw/{size}/2556_2.png",
"created_at": "2021-11-01T17:51:02.459Z",
"cooked": "<p>whoops, nice catch! I used markdown syntax to add the link, but it didn’t go through <img src=\"https://emoji.discourse-cdn.com/apple/thinking.png?v=10\" title=\":thinking:\" class=\"emoji\" alt=\":thinking:\"> not sure what’s up with that. Anyways, I fixed the link in the original post too. Thanks, Bram <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=10\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 41,
"updated_at": "2021-11-01T17:51:02.459Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 327,
"readers_count": 326,
"score": 110.4,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Nate Raw",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 198,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/3",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 3
},
{
"id": "clap",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 23,
"username": "BramVanroy",
"name": "Bram Vanroy",
"avatar_template": "/user_avatar/discuss.huggingface.co/bramvanroy/{size}/47360_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 30227,
"name": "Mohamed BEN ALI",
"username": "mohamed1ai",
"avatar_template": "/user_avatar/discuss.huggingface.co/mohamed1ai/{size}/3928_2.png",
"created_at": "2022-02-02T08:52:38.879Z",
"cooked": "<p>hello everyone,<br>\nI present my self, I’m Mohamed BEN ALI research engineer.<br>\nI want to join hugging face community via Discord.<br>\nThanks <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=12\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\"></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-02-02T08:53:31.534Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 28,
"reads": 259,
"readers_count": 258,
"score": 191.8,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Mohamed BEN ALI",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6139,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/4",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 34052,
"name": "Teoh Sin Yee",
"username": "teohsinyee-cs",
"avatar_template": "/user_avatar/discuss.huggingface.co/teohsinyee-cs/{size}/4445_2.png",
"created_at": "2022-04-08T02:29:43.263Z",
"cooked": "<p>The link has expired. Mind sharing a new one? thanks!</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-04-08T02:29:43.263Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 23,
"reads": 181,
"readers_count": 180,
"score": 156.2,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Teoh Sin Yee",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 7117,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 23,
"username": "BramVanroy",
"name": "Bram Vanroy",
"avatar_template": "/user_avatar/discuss.huggingface.co/bramvanroy/{size}/47360_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 34053,
"name": "Nate Raw",
"username": "nateraw",
"avatar_template": "/user_avatar/discuss.huggingface.co/nateraw/{size}/2556_2.png",
"created_at": "2022-04-08T02:54:17.808Z",
"cooked": "<p>The link in the original post should still be working <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=12\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://discord.com/invite/JfAtkvEtRb\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/3/369034b9091cfeb7b7a2072074a29ac8dd03cb8a.png\" class=\"site-icon\" width=\"256\" height=\"256\">\n\n <a href=\"https://discord.com/invite/JfAtkvEtRb\" target=\"_blank\" rel=\"noopener nofollow ugc\">Discord</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:512/170;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/2/23fb7e946a7fd6d6cfae3ff9e43dfbdb6f40a0bb.jpeg\" class=\"thumbnail\" width=\"512\" height=\"170\"></div>\n\n<h3><a href=\"https://discord.com/invite/JfAtkvEtRb\" target=\"_blank\" rel=\"noopener nofollow ugc\">Join the Hugging Face Discord Server!</a></h3>\n\n <p>Check out the Hugging Face community on Discord - hang out with 13,053 other members and enjoy free voice and text chat.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 6,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-04-08T02:54:17.808Z",
"reply_count": 0,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 14,
"reads": 165,
"readers_count": 164,
"score": 103,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Nate Raw",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discord.com/invite/JfAtkvEtRb",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 223
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 198,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/6",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 7117,
"username": "teohsinyee-cs",
"name": "Teoh Sin Yee",
"avatar_template": "/user_avatar/discuss.huggingface.co/teohsinyee-cs/{size}/4445_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 45689,
"name": "Fred Guth",
"username": "fredguth",
"avatar_template": "/user_avatar/discuss.huggingface.co/fredguth/{size}/2843_2.png",
"created_at": "2022-09-29T12:40:12.921Z",
"cooked": "<p>The discord invite here and in HF website is invalid. At least it is the message that appear for me.</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-09-29T12:40:12.921Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 17,
"reads": 119,
"readers_count": 118,
"score": 108.8,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Fred Guth",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 4558,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 48823,
"name": "Nate Raw",
"username": "nateraw",
"avatar_template": "/user_avatar/discuss.huggingface.co/nateraw/{size}/2556_2.png",
"created_at": "2022-11-07T18:39:30.512Z",
"cooked": "<p>I know this response is very late, but <a href=\"https://huggingface.co/join/discord\">this link</a> still works as far as I can tell <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=12\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"> may have been out temporarily when you replied <a class=\"mention\" href=\"/u/fredguth\">@fredguth</a></p>",
"post_number": 8,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-11-07T18:39:49.776Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 9,
"reads": 109,
"readers_count": 108,
"score": 66.8,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Nate Raw",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/join/discord",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 77
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 198,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 51100,
"name": "Aaron C Wacker",
"username": "awacke1",
"avatar_template": "/user_avatar/discuss.huggingface.co/awacke1/{size}/40934_2.png",
"created_at": "2022-12-03T12:40:50.288Z",
"cooked": "<p>I finally did my post for all three. Cool HF space on Discord <a class=\"mention\" href=\"/u/nateraw\">@nateraw</a> is there any way or future where I can integrate a space and allow AI input/output onto a Discord chat channel or server? I’ve been infatuated with Mid Journey interface on Discord lately as a neat jam session way to multiplayer access to AI in real time. Super excited to see what you are cooking up. --Aaron</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 41,
"updated_at": "2022-12-03T12:40:50.288Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 26,
"reads": 107,
"readers_count": 106,
"score": 151.4,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Aaron C Wacker",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6987,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/9",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 84953,
"name": "Carlos",
"username": "nbalive",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/n/e68b1a/{size}.png",
"created_at": "2023-08-19T02:05:40.166Z",
"cooked": "<p>The invite is invalid for me <img src=\"https://emoji.discourse-cdn.com/apple/frowning.png?v=12\" title=\":frowning:\" class=\"emoji\" alt=\":frowning:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 10,
"post_type": 1,
"posts_count": 41,
"updated_at": "2023-08-19T02:05:40.166Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 71,
"readers_count": 70,
"score": 29.2,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Carlos",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 26779,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/10",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 91118,
"name": "Pat Patterson",
"username": "metadaddy",
"avatar_template": "/user_avatar/discuss.huggingface.co/metadaddy/{size}/52440_2.png",
"created_at": "2023-09-22T19:57:43.823Z",
"cooked": "<p>The invite link (<a href=\"https://huggingface.co/join/discord\" class=\"inline-onebox\">Hugging Face</a>) doesn’t work for me - I just see ‘Unable to accept invite’.</p>",
"post_number": 11,
"post_type": 1,
"posts_count": 41,
"updated_at": "2023-09-22T19:57:43.823Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 63,
"readers_count": 62,
"score": 47.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Pat Patterson",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/join/discord",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 12
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 29597,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/11",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 91128,
"name": "Radamés Ajna",
"username": "radames",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png",
"created_at": "2023-09-22T22:11:00.940Z",
"cooked": "<p>hi <a class=\"mention\" href=\"/u/metadaddy\">@metadaddy</a>, I jus tested the link <a href=\"https://discord.com/invite/JfAtkvEtRb\" class=\"inline-onebox\">Hugging Face</a> and seems to be working. <a class=\"mention\" href=\"/u/lunarflu\">@lunarflu</a> could you please check?</p>",
"post_number": 12,
"post_type": 1,
"posts_count": 41,
"updated_at": "2023-09-22T22:11:00.940Z",
"reply_count": 1,
"reply_to_post_number": 11,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 63,
"readers_count": 62,
"score": 37.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Radamés Ajna",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discord.com/invite/JfAtkvEtRb",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 20
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 6306,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/12",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 29597,
"username": "metadaddy",
"name": "Pat Patterson",
"avatar_template": "/user_avatar/discuss.huggingface.co/metadaddy/{size}/52440_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 91130,
"name": "Pat Patterson",
"username": "metadaddy",
"avatar_template": "/user_avatar/discuss.huggingface.co/metadaddy/{size}/52440_2.png",
"created_at": "2023-09-22T22:49:34.239Z",
"cooked": "<p>Hi <a class=\"mention\" href=\"/u/radames\">@radames</a> - I figured it out - Discord needs to be running for the invitation process to work correctly. If it’s not, then you get the ‘unable to accept invite’ message, rather than any advice to start Discord.</p>\n<p>Thanks!</p>",
"post_number": 13,
"post_type": 1,
"posts_count": 41,
"updated_at": "2023-09-22T22:49:34.239Z",
"reply_count": 1,
"reply_to_post_number": 12,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 63,
"readers_count": 62,
"score": 87.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Pat Patterson",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 29597,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/13",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 6306,
"username": "radames",
"name": "Radamés Ajna",
"avatar_template": "/user_avatar/discuss.huggingface.co/radames/{size}/28246_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 91234,
"name": "Adam Molnar",
"username": "lunarflu",
"avatar_template": "/user_avatar/discuss.huggingface.co/lunarflu/{size}/29357_2.png",
"created_at": "2023-09-23T17:29:24.291Z",
"cooked": "<p>Happy to hear that. Enjoy, and share your thoughts with the world! <img src=\"https://emoji.discourse-cdn.com/apple/earth_africa.png?v=12\" title=\":earth_africa:\" class=\"emoji\" alt=\":earth_africa:\" loading=\"lazy\" width=\"20\" height=\"20\"> <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=12\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 14,
"post_type": 1,
"posts_count": 41,
"updated_at": "2023-09-23T17:29:24.291Z",
"reply_count": 0,
"reply_to_post_number": 13,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 58,
"readers_count": 57,
"score": 51.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Adam Molnar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 15783,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/14",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 29597,
"username": "metadaddy",
"name": "Pat Patterson",
"avatar_template": "/user_avatar/discuss.huggingface.co/metadaddy/{size}/52440_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 156209,
"name": "mamat mamation",
"username": "mmty",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dfb087/{size}.png",
"created_at": "2024-09-19T10:45:48.832Z",
"cooked": "<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/b/5/b582f90beab314508a400be2c06b51e0676d8758.jpeg\" data-download-href=\"/uploads/short-url/pTJ1RWCAKzxo5WSMnaSM3nJr41O.jpeg?dl=1\" title=\"1000118262\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/5/b582f90beab314508a400be2c06b51e0676d8758_2_225x500.jpeg\" alt=\"1000118262\" data-base62-sha1=\"pTJ1RWCAKzxo5WSMnaSM3nJr41O\" width=\"225\" height=\"500\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/5/b582f90beab314508a400be2c06b51e0676d8758_2_225x500.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/5/b582f90beab314508a400be2c06b51e0676d8758_2_337x750.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/5/b582f90beab314508a400be2c06b51e0676d8758_2_450x1000.jpeg 2x\" data-dominant-color=\"2D458C\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">1000118262</span><span class=\"informations\">1080×2400 54.3 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>I can’t join, why?</p>",
"post_number": 16,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-09-19T10:45:48.832Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 57,
"readers_count": 56,
"score": 41.4,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "mamat mamation",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 64844,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/16",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 156210,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-19T10:51:38.322Z",
"cooked": "<p><a class=\"mention\" href=\"/u/nateraw\">@nateraw</a> The HF Discord key posted on the HF Forum appears to have expired.</p>",
"post_number": 17,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-09-19T10:51:38.322Z",
"reply_count": 1,
"reply_to_post_number": 16,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 68,
"readers_count": 67,
"score": 63.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/17",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 64844,
"username": "mmty",
"name": "mamat mamation",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/m/dfb087/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 159113,
"name": "Adam Molnar",
"username": "lunarflu",
"avatar_template": "/user_avatar/discuss.huggingface.co/lunarflu/{size}/29357_2.png",
"created_at": "2024-09-30T10:26:31.510Z",
"cooked": "<p>Hey <a class=\"mention\" href=\"/u/john6666\">@John6666</a> <a class=\"mention\" href=\"/u/mmty\">@mmty</a> ! <img src=\"https://emoji.discourse-cdn.com/apple/hugs.png?v=12\" title=\":hugs:\" class=\"emoji\" alt=\":hugs:\" loading=\"lazy\" width=\"20\" height=\"20\"> Feel free to try <a href=\"https://discord.gg/hugging-face-879548962464493619\">this link</a>, or alternatively, you can try searching hugging face within Discord. Let me know if it works!<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/7/6/76e6a6033fd031fbf2759abd33baa2566772d3d2.png\" data-download-href=\"/uploads/short-url/gXQvdU9tRhlyU4gx10sQ9c01bRo.png?dl=1\" title=\"image\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/6/76e6a6033fd031fbf2759abd33baa2566772d3d2_2_690x230.png\" alt=\"image\" data-base62-sha1=\"gXQvdU9tRhlyU4gx10sQ9c01bRo\" width=\"690\" height=\"230\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/6/76e6a6033fd031fbf2759abd33baa2566772d3d2_2_690x230.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/6/76e6a6033fd031fbf2759abd33baa2566772d3d2_2_1035x345.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/7/6/76e6a6033fd031fbf2759abd33baa2566772d3d2_2_1380x460.png 2x\" data-dominant-color=\"434446\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">2970×991 273 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 19,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-09-30T10:26:31.510Z",
"reply_count": 1,
"reply_to_post_number": 17,
"quote_count": 0,
"incoming_link_count": 95,
"reads": 73,
"readers_count": 72,
"score": 539.6,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Adam Molnar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discord.gg/hugging-face-879548962464493619",
"internal": false,
"reflection": false,
"title": "Hugging Face",
"clicks": 84
},
{
"url": "https://discuss.huggingface.co/t/delete-a-repository-with-doi/111515/2",
"internal": true,
"reflection": true,
"title": "Delete a repository with DOI",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/is-there-a-way-to-delete-hide-a-published-dataset-with-assigned-doi/109787/4",
"internal": true,
"reflection": true,
"title": "Is there a way to delete/hide a published Dataset with assigned DOI?",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/issues-with-sadtalker-zerogpu-spaces-inquiry-about-community-grant/110625/11",
"internal": true,
"reflection": true,
"title": "Issues with SadTalker ZeroGPU Spaces + Inquiry About Community Grant",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/not-able-to-upload-or-download-custom-datasets/110001/2",
"internal": true,
"reflection": true,
"title": "Not able to upload or download custom datasets",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/attn-hf-staff-space-stuck-building-indefinitely/111415/12",
"internal": true,
"reflection": true,
"title": "ATTN HF STAFF: Space stuck building indefinitely",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/got-http-500-among-all-links-in-an-organization/112724/2",
"internal": true,
"reflection": true,
"title": "Got HTTP 500 among all links in an organization",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/build-error-for-spaces-model/52882/7",
"internal": true,
"reflection": true,
"title": "Build Error for Spaces model",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/how-to-rebuild-the-library-of-alexandria/115415/2",
"internal": true,
"reflection": true,
"title": "How to rebuild the Library of Alexandria?",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/build-error-error-while-cloning-repository/113801/4",
"internal": true,
"reflection": true,
"title": "Build error: Error while cloning repository",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/hf-hub-cdn-urls-changes-notifications/114653/2",
"internal": true,
"reflection": true,
"title": "HF Hub CDN URLs changes notifications",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/allow-navigation-outside-iframe/114755/6",
"internal": true,
"reflection": true,
"title": "Allow navigation outside iframe",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/best-way-to-do-multi-to-univariate-time-series-prediction/115858/2",
"internal": true,
"reflection": true,
"title": "Best way to do multi- to univariate time series prediction",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/issues-connecting-to-model-mistralai-mixtral-8x7b-instruct-v0-1-via-websocket-since-october-14th/112911/4",
"internal": true,
"reflection": true,
"title": "Issues Connecting to Model mistralai/Mixtral-8x7B-Instruct-v0.1 via WebSocket since October 14th",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/python-gradio-web-pages-suddenly-dont-render-properly-on-ipad-browsers/126669/6",
"internal": true,
"reflection": true,
"title": "Python gradio web pages suddenly don't render properly on iPad browsers",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/the-discord-verification-process-does-not-work/131992/2",
"internal": true,
"reflection": true,
"title": "The discord verification process does not work",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/ocr-confidence-score-extraction-for-opengvlab-internvl2-5-8b-mpo/139189/3",
"internal": true,
"reflection": true,
"title": "OCR Confidence score extraction for OpenGVLab/InternVL2_5-8B-MPO",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/best-model-for-music-generation/133604/2",
"internal": true,
"reflection": true,
"title": "Best model for music generation",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/seeking-specialist-for-finetuning-ai-model/137385/2",
"internal": true,
"reflection": true,
"title": "Seeking Specialist for FineTuning AI Model",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/smollm-or-othe-slms-example-uses-andmfeedback-for-getting-the-most-of-of-them/110108/4",
"internal": true,
"reflection": true,
"title": "Smollm or othe SLM's example uses andmfeedback for getting the most of of them",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/request-for-additional-storage-space-for-dataset-repository/111308/4",
"internal": true,
"reflection": true,
"title": "Request for Additional Storage Space for Dataset Repository",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 3
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 15783,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/19",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
},
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 3,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 159114,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-09-30T10:28:31.134Z",
"cooked": "<p>Thanks for the update. But I don’t have a Discord account so I’ll leave it to someone else! <img src=\"https://emoji.discourse-cdn.com/apple/roll_eyes.png?v=12\" title=\":roll_eyes:\" class=\"emoji\" alt=\":roll_eyes:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 20,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-10-15T22:30:06.208Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 67,
"readers_count": 66,
"score": 23.4,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/20",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 165921,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2024-10-29T04:41:13.879Z",
"cooked": "<p>I was able to unearth an ancient, unused Discord account, so I joined!</p>",
"post_number": 21,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-10-29T04:41:13.879Z",
"reply_count": 1,
"reply_to_post_number": 19,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 47,
"readers_count": 46,
"score": 59.4,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/21",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": {
"id": 15783,
"username": "lunarflu",
"name": "Adam Molnar",
"avatar_template": "/user_avatar/discuss.huggingface.co/lunarflu/{size}/29357_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 168305,
"name": "Edward Surridge",
"username": "EdSurridge",
"avatar_template": "/user_avatar/discuss.huggingface.co/edsurridge/{size}/34137_2.png",
"created_at": "2024-11-07T11:40:21.424Z",
"cooked": "<p>I am interested to join what you found . Thanks if you can share it<br>\nEd</p>",
"post_number": 22,
"post_type": 1,
"posts_count": 41,
"updated_at": "2024-11-07T11:40:21.424Z",
"reply_count": 0,
"reply_to_post_number": 21,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 49,
"readers_count": 48,
"score": 24.8,
"yours": false,
"topic_id": 11263,
"topic_slug": "join-the-hugging-face-discord",
"display_username": "Edward Surridge",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 69843,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/join-the-hugging-face-discord/11263/22",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
}
] |
<p>We’re excited to announce our official community discord server! <img src="https://emoji.discourse-cdn.com/apple/space_invader.png?v=12" title=":space_invader:" class="emoji" alt=":space_invader:" loading="lazy" width="20" height="20"> We will have community events, sprints, reading clubs and more! Here’s the link to join: <a href="https://t.co/1n75wi976V?amp=1" rel="noopener nofollow ugc">http://hf.co/join/discord</a></p>
<h4>
<a name="once-you-join-i-highly-encourage-you-to-1" class="anchor" href="#once-you-join-i-highly-encourage-you-to-1"></a>Once you join, I highly encourage you to:</h4>
<ul>
<li>Introduce yourself in the <span class="hashtag">#introduce-yourself</span> channel</li>
<li>Verify your Hugging Face account at the <span class="hashtag">#verification</span> channel (cool stuff coming from this in the future!!)</li>
<li>Share a picture of your pet to spread some joy in the <span class="hashtag">#pets</span> channel (this one is my personal fav <img src="https://emoji.discourse-cdn.com/apple/heart_eyes.png?v=12" title=":heart_eyes:" class="emoji" alt=":heart_eyes:" loading="lazy" width="20" height="20">)</li>
</ul>
<h4>
<a name="whats-the-difference-between-the-forum-and-the-discord-2" class="anchor" href="#whats-the-difference-between-the-forum-and-the-discord-2"></a>Whats the difference between the forum and the Discord?</h4>
<ul>
<li>The forum is meant to be a place to ask questions and get answers</li>
<li>The Discord is meant to be a place to connect with people in the community, collaborate, host events, etc.</li>
</ul>
<p>So, any questions should still be directed here. <img src="https://emoji.discourse-cdn.com/apple/hugs.png?v=12" title=":hugs:" class="emoji" alt=":hugs:" loading="lazy" width="20" height="20"></p>
<hr>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53.png" data-download-href="/uploads/short-url/mU5XAa2PZ4rWxWfVfSRHPBHy83V.png?dl=1" title="JOIN OUR DISCORD! (3)" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_690x388.png" alt="JOIN OUR DISCORD! (3)" data-base62-sha1="mU5XAa2PZ4rWxWfVfSRHPBHy83V" width="690" height="388" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_690x388.png, https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_1035x582.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/2X/a/a08727617fb64e7e043a4b0c15d375c9632c0c53_2_1380x776.png 2x" data-dominant-color="E5CA92"><div class="meta">
<svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">JOIN OUR DISCORD! (3)</span><span class="informations">1920×1080 338 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg>
</div></a></div></p>
|
<p>I am interested in joining what you found. Thanks if you can share it.<br>
Ed</p>
|
AutoTokenizer.from_pretrained() suddenly raises an error
|
https://discuss.huggingface.co/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809
| 153,809
| 9
|
2025-05-06T19:41:08.470000Z
|
[
{
"id": 220162,
"name": "Sina Mostafanejad",
"username": "smostafanejad",
"avatar_template": "/user_avatar/discuss.huggingface.co/smostafanejad/{size}/34306_2.png",
"created_at": "2025-05-06T19:41:08.528Z",
"cooked": "<p>Hi,</p>\n<p>IThe following code snippet for pulling a pretrained custom tokenizer from the Hugging Face Hub</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import os\nfrom transformers import AutoTokenizer\n\n# load the tokenizer\ntokenizer = AutoTokenizer.from_pretrained(\"smostafanejad/gen-mlm-cismi-bert-wordpiece\",\n token=os.environ['HF_TOKEN'],\n cache_dir=\"./cache\"\n )\n</code></pre>\n<p>suddenly started raising the following runtime error since yesterday (05/05/2025).</p>\n<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">Cell In[4], line 5\n 2 from transformers import AutoTokenizer\n 4 # load the tokenizer\n----> 5 tokenizer = AutoTokenizer.from_pretrained(\"smostafanejad/gen-mlm-cismi-bert-wordpiece\",\n 6 token=os.environ['HF_TOKEN'],\n 7 cache_dir=\"./cache\"\n 8 )\n\nFile ~/Packages/miniconda3/envs/bertchemai/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:992, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\n 989 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]\n 991 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):\n--> 992 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\n 993 else:\n 994 if tokenizer_class_py is not None:\n\nFile ~/Packages/miniconda3/envs/bertchemai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2046, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, trust_remote_code, *init_inputs, **kwargs)\n 2043 # If one passes a GGUF file path to `gguf_file` there is no need for this check as the tokenizer will be\n 2044 # loaded directly from the GGUF file.\n 2045 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()) and not gguf_file:\n-> 2046 raise EnvironmentError(\n 2047 f\"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from \"\n 2048 \"'https://huggingface.co/models', make sure you don't have a local directory with the same name. \"\n 2049 f\"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory \"\n 2050 f\"containing all relevant files for a {cls.__name__} tokenizer.\"\n 2051 )\n 2053 for file_id, file_path in vocab_files.items():\n 2054 if file_id not in resolved_vocab_files:\n\nOSError: Can't load tokenizer for 'smostafanejad/gen-mlm-cismi-bert-wordpiece'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'smostafanejad/gen-mlm-cismi-bert-wordpiece' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.\n</code></pre>\n<p>I have followed the suggestions in the error message (directory is clean and the address on the Hub is available) but they do not help.</p>\n<p>I appreciate any assistance on this matter as the same function call used to work until yesterday.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T19:41:08.528Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 164,
"reads": 12,
"readers_count": 11,
"score": 822.4,
"yours": false,
"topic_id": 153809,
"topic_slug": "autotokenizer-from-pretrained-suddenly-raises-an-error",
"display_username": "Sina Mostafanejad",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 70171,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220194,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-06T23:18:34.825Z",
"cooked": "<p>Hmm, it seems to be working. Maybe it’s a problem specific to ipython or Jupyter, or maybe it was a bug that occurred when you upgraded Transoformers. Or maybe it’s a network problem?</p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">import os\nfrom transformers import AutoTokenizer\n\n# load the tokenizer\ntokenizer = AutoTokenizer.from_pretrained(\"smostafanejad/gen-mlm-cismi-bert-wordpiece\",\n #token=os.environ['HF_TOKEN'],\n #cache_dir=\"./cache\"\n )\nprint(tokenizer)\n\"\"\"\nPreTrainedTokenizerFast(name_or_path='smostafanejad/gen-mlm-cismi-bert-wordpiece', vocab_size=30522, model_max_length=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}, clean_up_tokenization_spaces=False, added_tokens_decoder={\n 0: AddedToken(\"[PAD]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n 1: AddedToken(\"[UNK]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n 2: AddedToken(\"[CLS]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n 3: AddedToken(\"[SEP]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n 4: AddedToken(\"[MASK]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n}\n)\n\"\"\"\n</code></pre>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T23:18:34.825Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 7,
"readers_count": 6,
"score": 11.4,
"yours": false,
"topic_id": 153809,
"topic_slug": "autotokenizer-from-pretrained-suddenly-raises-an-error",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220237,
"name": "Sina Mostafanejad",
"username": "smostafanejad",
"avatar_template": "/user_avatar/discuss.huggingface.co/smostafanejad/{size}/34306_2.png",
"created_at": "2025-05-07T03:02:04.783Z",
"cooked": "<p>You are right and the problem does not seem to be related to Jupyter or ipython either.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/9/9/99c59ec59c87eecf2974f98cc3a773da3f5473a1.png\" data-download-href=\"/uploads/short-url/lWks2GgoqQdA3hg9Ocfvkg6WZzz.png?dl=1\" title=\"Screenshot from 2025-05-06 22-52-10\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/9/99c59ec59c87eecf2974f98cc3a773da3f5473a1_2_690x279.png\" alt=\"Screenshot from 2025-05-06 22-52-10\" data-base62-sha1=\"lWks2GgoqQdA3hg9Ocfvkg6WZzz\" width=\"690\" height=\"279\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/9/99c59ec59c87eecf2974f98cc3a773da3f5473a1_2_690x279.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/9/99c59ec59c87eecf2974f98cc3a773da3f5473a1_2_1035x418.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/9/99c59ec59c87eecf2974f98cc3a773da3f5473a1_2_1380x558.png 2x\" data-dominant-color=\"F0EDED\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Screenshot from 2025-05-06 22-52-10</span><span class=\"informations\">1752×710 111 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>I have now two machines with conda environments that suddenly started generating errors without doing anything to them. My personal laptop with a fresh conda environment seem to be fine (as you can see in the screenshot). So, I exported the problematic and OK conda environments and uploaded them to the repo to see if I can find out what’s causing the issue:</p>\n<ul>\n<li>Bad environment:\n<ul>\n<li><a href=\"https://huggingface.co/smostafanejad/gen-mlm-cismi-bert-wordpiece/blob/main/bad_env.yml\" class=\"inline-onebox\">bad_env.yml · smostafanejad/gen-mlm-cismi-bert-wordpiece at main</a></li>\n</ul>\n</li>\n<li>Good environment:\n<ul>\n<li><a href=\"https://huggingface.co/smostafanejad/gen-mlm-cismi-bert-wordpiece/blob/main/good_env.yml\" class=\"inline-onebox\">good_env.yml · smostafanejad/gen-mlm-cismi-bert-wordpiece at main</a></li>\n</ul>\n</li>\n</ul>\n<p>Thanks for the time you’ve taken and tested the function call, <a class=\"mention\" href=\"/u/john6666\">@John6666</a>.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-07T03:02:04.783Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 6,
"readers_count": 5,
"score": 36.2,
"yours": false,
"topic_id": 153809,
"topic_slug": "autotokenizer-from-pretrained-suddenly-raises-an-error",
"display_username": "Sina Mostafanejad",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/smostafanejad/gen-mlm-cismi-bert-wordpiece/blob/main/good_env.yml",
"internal": false,
"reflection": false,
"title": "good_env.yml · smostafanejad/gen-mlm-cismi-bert-wordpiece at main",
"clicks": 2
},
{
"url": "https://huggingface.co/smostafanejad/gen-mlm-cismi-bert-wordpiece/blob/main/bad_env.yml",
"internal": false,
"reflection": false,
"title": "bad_env.yml · smostafanejad/gen-mlm-cismi-bert-wordpiece at main",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 70171,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 220377,
"name": "Sina Mostafanejad",
"username": "smostafanejad",
"avatar_template": "/user_avatar/discuss.huggingface.co/smostafanejad/{size}/34306_2.png",
"created_at": "2025-05-07T14:39:35.439Z",
"cooked": "<p>OK since this was an <code>EnvironmentError</code> I checked everything and I think I have found the culprit.<br>\nIn my bashrc, I had <code>export HF_HUB_ENABLE_HF_TRANSFER=1</code> set which means the problem might have something to do with an inconsistency with the <strong>hf-transfer</strong> package. Reading Hugging Face’s <a href=\"https://huggingface.co/docs/huggingface_hub/v0.31.0/package_reference/environment_variables\">Environment Variable documentation</a> gave the clue about the possibility of such incidents and undefined behavior</p>\n<pre><code class=\"lang-plaintext\">HF_HUB_ENABLE_HF_TRANSFER\n\nSet to True to download files from the Hub using hf_transfer. It’s a Rust-based package that enables faster download (up to x2 speed-up). Be aware that this is still experimental so it might cause issues in your workflow. In particular, it does not support features such as progress bars, resume download, proxies or error handling.\n\nNote: hf_transfer has to be installed separately from Pypi.\n</code></pre>\n<p>I have forced a reinstall and upgrade through pip and apparently that resolved the issues with both supercomputer and data center machines which had problems calling the <code>AutoTokenizer.from_pretrained()</code>.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-07T14:41:19.078Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 16,
"reads": 5,
"readers_count": 4,
"score": 86,
"yours": false,
"topic_id": 153809,
"topic_slug": "autotokenizer-from-pretrained-suddenly-raises-an-error",
"display_username": "Sina Mostafanejad",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/huggingface_hub/v0.31.0/package_reference/environment_variables",
"internal": false,
"reflection": false,
"title": "Environment variables",
"clicks": 1
},
{
"url": "https://discuss.huggingface.co/t/model-loading-in-colab-but-not-jupyterlab/154082/2",
"internal": true,
"reflection": true,
"title": "Model loading in Colab but not Jupyterlab?!",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 70171,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 70171,
"username": "smostafanejad",
"name": "Sina Mostafanejad",
"avatar_template": "/user_avatar/discuss.huggingface.co/smostafanejad/{size}/34306_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 220471,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-08T02:40:20.217Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-08T02:40:20.217Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 1,
"yours": false,
"topic_id": 153809,
"topic_slug": "autotokenizer-from-pretrained-suddenly-raises-an-error",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/autotokenizer-from-pretrained-suddenly-raises-an-error/153809/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,</p>
<p>The following code snippet for pulling a pretrained custom tokenizer from the Hugging Face Hub</p>
<pre data-code-wrap="python"><code class="lang-python">import os
from transformers import AutoTokenizer
# load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("smostafanejad/gen-mlm-cismi-bert-wordpiece",
token=os.environ['HF_TOKEN'],
cache_dir="./cache"
)
</code></pre>
<p>suddenly started raising the following runtime error since yesterday (05/05/2025).</p>
<pre data-code-wrap="bash"><code class="lang-bash">Cell In[4], line 5
2 from transformers import AutoTokenizer
4 # load the tokenizer
----> 5 tokenizer = AutoTokenizer.from_pretrained("smostafanejad/gen-mlm-cismi-bert-wordpiece",
6 token=os.environ['HF_TOKEN'],
7 cache_dir="./cache"
8 )
File ~/Packages/miniconda3/envs/bertchemai/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:992, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
989 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
991 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
--> 992 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
993 else:
994 if tokenizer_class_py is not None:
File ~/Packages/miniconda3/envs/bertchemai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2046, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, trust_remote_code, *init_inputs, **kwargs)
2043 # If one passes a GGUF file path to `gguf_file` there is no need for this check as the tokenizer will be
2044 # loaded directly from the GGUF file.
2045 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()) and not gguf_file:
-> 2046 raise EnvironmentError(
2047 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
2048 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
2049 f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory "
2050 f"containing all relevant files for a {cls.__name__} tokenizer."
2051 )
2053 for file_id, file_path in vocab_files.items():
2054 if file_id not in resolved_vocab_files:
OSError: Can't load tokenizer for 'smostafanejad/gen-mlm-cismi-bert-wordpiece'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'smostafanejad/gen-mlm-cismi-bert-wordpiece' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.
</code></pre>
<p>I have followed the suggestions in the error message (directory is clean and the address on the Hub is available) but they do not help.</p>
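<p>One way to narrow this down further (a minimal diagnostic sketch of my own, assuming the repo contains a <code>tokenizer.json</code> file) is to resolve a single file directly through <code>huggingface_hub</code>, bypassing <code>transformers</code> entirely; if this call fails too, the problem sits in the download layer rather than in the tokenizer-loading code:</p>
<pre data-code-wrap="python"><code class="lang-python">import os
from huggingface_hub import hf_hub_download

# Fetch one tokenizer file straight from the Hub, using the same
# token and cache directory as the failing AutoTokenizer call.
path = hf_hub_download(
    repo_id="smostafanejad/gen-mlm-cismi-bert-wordpiece",
    filename="tokenizer.json",  # assumed filename; adjust if the repo differs
    token=os.environ["HF_TOKEN"],
    cache_dir="./cache",
)
print(path)  # a failure here points below transformers, at the download layer
</code></pre>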
<p>I appreciate any assistance on this matter as the same function call used to work until yesterday.</p>
|
<p>OK, since this was an <code>EnvironmentError</code>, I checked everything and I think I have found the culprit.<br>
In my bashrc, I had <code>export HF_HUB_ENABLE_HF_TRANSFER=1</code> set, which means the problem might have something to do with an inconsistency in the <strong>hf-transfer</strong> package. Reading Hugging Face’s <a href="https://huggingface.co/docs/huggingface_hub/v0.31.0/package_reference/environment_variables">Environment Variable documentation</a> gave a clue about the possibility of such incidents and undefined behavior:</p>
<pre><code class="lang-plaintext">HF_HUB_ENABLE_HF_TRANSFER
Set to True to download files from the Hub using hf_transfer. It’s a Rust-based package that enables faster download (up to x2 speed-up). Be aware that this is still experimental so it might cause issues in your workflow. In particular, it does not support features such as progress bars, resume download, proxies or error handling.
Note: hf_transfer has to be installed separately from Pypi.
</code></pre>
<p>I forced a reinstall and upgrade through pip, and apparently that resolved the issues on both the supercomputer and the data center machines that had problems calling <code>AutoTokenizer.from_pretrained()</code>.</p>
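<p>For anyone hitting the same thing, here is a minimal sketch of the workaround (my own reconstruction, not a verbatim log of what I ran): either force-reinstall the package, e.g. <code>pip install --upgrade --force-reinstall hf_transfer</code>, or simply opt out of <strong>hf-transfer</strong> for the current process before anything touches the Hub:</p>
<pre data-code-wrap="python"><code class="lang-python">import os

# Disable hf_transfer before transformers/huggingface_hub are imported,
# because the flag is read when huggingface_hub is first imported.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"

from transformers import AutoTokenizer

# The same call now goes through the default pure-Python downloader.
tokenizer = AutoTokenizer.from_pretrained(
    "smostafanejad/gen-mlm-cismi-bert-wordpiece",
    token=os.environ["HF_TOKEN"],
    cache_dir="./cache",
)
</code></pre>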
|
Can I get clarification on what exactly transformers does vs what the model does?
|
https://discuss.huggingface.co/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365
| 152,365
| 13
|
2025-04-26T02:21:47.051000Z
|
[
{
"id": 218287,
"name": "Sven Voigt",
"username": "svenpvoigt",
"avatar_template": "/user_avatar/discuss.huggingface.co/svenpvoigt/{size}/46353_2.png",
"created_at": "2025-04-26T02:21:47.120Z",
"cooked": "<p>Hi there,</p>\n<p>I am trying to figure out where documentation for models exists. For example, I am looking at the <a href=\"https://huggingface.co/docs/transformers/v4.51.3/en/main_classes/pipelines#transformers.Pipeline\">pipeline documentation</a> which says that <code>task</code> is some id. But it is not a user defined id because passing “foo” as the task to the model <a href=\"https://huggingface.co/google/gemma-3-27b-it\">gemma-3-27b-it</a> gives me an error that lists all the tasks. Is there a function to call that lists the tasks ahead of time without having to get an error message? It is not clear from the documentation that the tasks are implemented by each model not the pipeline api - and it would be nice to know exactly what a model does for implementation purposes rather than some generic description of tasks in the tutorial. I would rather have some way of figuring out what a particular model does so I can implement it. Are there any tools that help me figure this out? Maybe it’s possible to parse it from the config files or the model file?</p>\n<p>Also, how can I get information on message formatting for each task? Is there a way to figure this out or is it dependent on the information provided on the model card? So if the tasks and message formats are not listed on the model card, is there a way to determine these? Especially because I am also not seeing any source code implementing a model class that lists tasks and message parsers. Maybe there is a way to parse these from the config or model files as well?</p>\n<p>Thanks</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-26T02:21:47.120Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 22,
"reads": 12,
"readers_count": 11,
"score": 122.4,
"yours": false,
"topic_id": 152365,
"topic_slug": "can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does",
"display_username": "Sven Voigt",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/transformers/v4.51.3/en/main_classes/pipelines#transformers.Pipeline",
"internal": false,
"reflection": false,
"title": "Pipelines",
"clicks": 1
},
{
"url": "https://huggingface.co/google/gemma-3-27b-it",
"internal": false,
"reflection": false,
"title": "google/gemma-3-27b-it · Hugging Face",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91985,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218318,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-26T08:44:58.165Z",
"cooked": "<p>It seems that tasks are being retrieved from classes registered in AutoModel, so you should be able to identify the problem by checking whether the class corresponding to the task is defined in the code.</p>\n<p>I’m not sure if there is a simple method (a dedicated function) for this…</p><aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L877\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L877\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/transformers</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L877\" target=\"_blank\" rel=\"noopener\">src/transformers/pipelines/__init__.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L877\" rel=\"noopener\"><code>main</code></a>\n</div>\n\n\n\n <pre class=\"onebox\"><code class=\"lang-py\">\n <ol class=\"start lines\" start=\"867\" style=\"counter-reset: li-counter 866 ;\">\n <li></li>\n <li>if task is None and model is not None:</li>\n <li> if not isinstance(model, str):</li>\n <li> raise RuntimeError(</li>\n <li> \"Inferring the task automatically requires to check the hub with a model_id defined as a `str`. \"</li>\n <li> f\"{model} is not a valid model_id.\"</li>\n <li> )</li>\n <li> task = get_task(model, token)</li>\n <li></li>\n <li># Retrieve the task</li>\n <li class=\"selected\">if task in custom_tasks:</li>\n <li> normalized_task = task</li>\n <li> targeted_task, task_options = clean_custom_task(custom_tasks[task])</li>\n <li> if pipeline_class is None:</li>\n <li> if not trust_remote_code:</li>\n <li> raise ValueError(</li>\n <li> \"Loading this pipeline requires you to execute the code in the pipeline file in that\"</li>\n <li> \" repo on your local machine. Make sure you have read the code there to avoid malicious use, then\"</li>\n <li> \" set the option `trust_remote_code=True` to remove this error.\"</li>\n <li> )</li>\n <li> class_ref = targeted_task[\"impl\"]</li>\n </ol>\n </code></pre>\n\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubblob\" data-onebox-src=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/transformers</a>\n </header>\n\n <article class=\"onebox-body\">\n <h4><a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py\" target=\"_blank\" rel=\"noopener\">src/transformers/models/auto/modeling_auto.py</a></h4>\n\n<div class=\"git-blob-info\">\n <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py\" rel=\"noopener\"><code>main</code></a>\n</div>\n\n\n <pre><code class=\"lang-py\"># coding=utf-8\n# Copyright 2018 The HuggingFace Inc. 
team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Auto Model class.\"\"\"\n\nimport warnings\nfrom collections import OrderedDict\n\nfrom ...utils import logging\n</code></pre>\n\n\n\n This file has been truncated. <a href=\"https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py\" target=\"_blank\" rel=\"noopener\">show original</a>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-26T08:44:58.165Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 11,
"readers_count": 10,
"score": 2.2,
"yours": false,
"topic_id": 152365,
"topic_slug": "can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L877",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/pipelines/__init__.py at main · huggingface/transformers · GitHub",
"clicks": 0
},
{
"url": "https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/modeling_auto.py",
"internal": false,
"reflection": false,
"title": "transformers/src/transformers/models/auto/modeling_auto.py at main · huggingface/transformers · GitHub",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218524,
"name": "Sven Voigt",
"username": "svenpvoigt",
"avatar_template": "/user_avatar/discuss.huggingface.co/svenpvoigt/{size}/46353_2.png",
"created_at": "2025-04-27T18:32:02.143Z",
"cooked": "<p><a class=\"mention\" href=\"/u/john6666\">@John6666</a> Thanks that’s a good place to start looking!</p>\n<p>Also, to add an example to the original post, the <a href=\"https://huggingface.co/jinaai/jina-embeddings-v3\">jinaai-embeddings</a> model implements all custom tasks but lists them on the model card (e.g., retrieval.query, text-matching). However, it is unclear what the input format should be for each task just from the model card. It looks like lists of strings, but would need to see the model implementation to be sure there aren’t other options.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-27T18:32:24.674Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 152365,
"topic_slug": "can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does",
"display_username": "Sven Voigt",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/jinaai/jina-embeddings-v3",
"internal": false,
"reflection": false,
"title": "jinaai/jina-embeddings-v3 · Hugging Face",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91985,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220179,
"name": "Sven Voigt",
"username": "svenpvoigt",
"avatar_template": "/user_avatar/discuss.huggingface.co/svenpvoigt/{size}/46353_2.png",
"created_at": "2025-05-06T22:42:54.575Z",
"cooked": "<p>I think I have an answer:</p>\n<p>the message format is always a list of strings for the tokenizer, unless the tokenizer includes a template. In that case the template can be printed out with <code>tokenizer.chat_template</code> and usually includes system and user roles as well as some keywords like add_generation_prompt.</p>\n<p>However, it doesn’t seem to be overall standardized and there is a lot of custom code for models.</p>\n<p>So final answer: most everything has to be explained in the model card and you have to kind of figure out how to make it work from a couple examples.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T22:42:54.575Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 20.8,
"yours": false,
"topic_id": 152365,
"topic_slug": "can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does",
"display_username": "Sven Voigt",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91985,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220314,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-07T10:43:41.493Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-07T10:43:41.493Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 152365,
"topic_slug": "can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-i-get-clarification-on-what-exactly-transformers-does-vs-what-the-model-does/152365/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi there,</p>
<p>I am trying to figure out where documentation for models exists. For example, I am looking at the <a href="https://huggingface.co/docs/transformers/v4.51.3/en/main_classes/pipelines#transformers.Pipeline">pipeline documentation</a> which says that <code>task</code> is some id. But it is not a user-defined id, because passing “foo” as the task to the model <a href="https://huggingface.co/google/gemma-3-27b-it">gemma-3-27b-it</a> gives me an error that lists all the tasks. Is there a function to call that lists the tasks ahead of time without having to get an error message? It is not clear from the documentation that the tasks are implemented by each model, not the pipeline API; it would be nice to know exactly what a model does for implementation purposes rather than some generic description of tasks in the tutorial. I would rather have some way of figuring out what a particular model does so I can implement it. Are there any tools that help me figure this out? Maybe it’s possible to parse it from the config files or the model file?</p>
<p>Also, how can I get information on message formatting for each task? Is there a way to figure this out or is it dependent on the information provided on the model card? So if the tasks and message formats are not listed on the model card, is there a way to determine these? Especially because I am also not seeing any source code implementing a model class that lists tasks and message parsers. Maybe there is a way to parse these from the config or model files as well?</p>
<p>Thanks</p>
|
<p>I think I have an answer:</p>
<p>the message format is always a list of strings for the tokenizer, unless the tokenizer includes a template. In that case the template can be printed out with <code>tokenizer.chat_template</code> and usually includes system and user roles as well as some keywords like add_generation_prompt.</p>
<p>However, it doesn’t seem to be overall standardized and there is a lot of custom code for models.</p>
<p>So final answer: most everything has to be explained in the model card and you have to kind of figure out how to make it work from a couple of examples.</p>
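<p>For anyone landing here later, a minimal sketch (mine, not from the thread) of how to list the built-in pipeline task ids and check what a specific checkpoint advertises. <code>get_supported_tasks</code> and <code>model_info</code> exist in current <code>transformers</code>/<code>huggingface_hub</code>; note that gated models such as Gemma may additionally require authentication:</p>
<pre><code class="lang-python"># Sketch, not the thread's own code. Assumes transformers and
# huggingface_hub are installed and you are logged in for gated models.
from huggingface_hub import model_info
from transformers import AutoTokenizer
from transformers.pipelines import get_supported_tasks

# Built-in pipeline task ids, without triggering an error first
print(get_supported_tasks())

# The task a checkpoint advertises on the Hub (what pipeline() uses to infer it)
print(model_info("google/gemma-3-27b-it").pipeline_tag)

# Whether the tokenizer ships a chat template (None if it does not)
tok = AutoTokenizer.from_pretrained("google/gemma-3-27b-it")
print(tok.chat_template)
</code></pre>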
|
403 Error: “Private repository storage limit reached” — quota shows space remaining
|
https://discuss.huggingface.co/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121
| 153,121
| 23
|
2025-05-01T12:19:13.054000Z
|
[
{
"id": 219303,
"name": "Théo Boyer",
"username": "Theob",
"avatar_template": "/user_avatar/discuss.huggingface.co/theob/{size}/30775_2.png",
"created_at": "2025-05-01T12:19:13.110Z",
"cooked": "<p>Hi,<br>\nI’m getting the following error when trying to push to my private dataset repo using <code>huggingface_hub</code>:</p>\n<pre><code class=\"lang-auto\">403 Forbidden: Private repository storage limit reached, please upgrade your plan...\n</code></pre>\n<p>However, when I check my organization quota on the Hugging Face UI, it shows we’re only using ~66 GB out of the 100 GB available.</p>\n<p>Any advice on how to find the root cause of this discrepancy ?</p>\n<p>Thanks!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-01T12:19:13.110Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 83,
"reads": 18,
"readers_count": 17,
"score": 423.4,
"yours": false,
"topic_id": 153121,
"topic_slug": "403-error-private-repository-storage-limit-reached-quota-shows-space-remaining",
"display_username": "Théo Boyer",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/under-500-mb-in-storage-but-indicates-1-gb/166347/2",
"internal": true,
"reflection": true,
"title": "Under 500 MB in storage, but indicates 1 GB",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 30390,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219312,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-01T12:45:00.165Z",
"cooked": "<p>There is a phenomenon where past git commit entries accumulate and waste space, but even in that case, the size itself should be displayed in the settings screen. This phenomenon is probably an error or a bad specification. <a class=\"mention\" href=\"/u/meganariley\">@meganariley</a> <a class=\"mention\" href=\"/u/pierric\">@pierric</a></p><aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"130269\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/carlthome/48/31253_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/spaces-force-push-getting-repository-storage-limit-reached/130269\">Spaces force push getting \"Repository storage limit reached\"</a> <a class=\"badge-category__wrapper \" href=\"/c/spaces/24\"><span data-category-id=\"24\" style=\"--category-badge-color: #25AAE2; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category to ask any questions about Spaces or to share your work.\"><span class=\"badge-category__name\">Spaces</span></span></a>\n </div>\n <blockquote>\n I have a Hugging Face Spaces app that I deploy to via GitHub Actions (<a href=\"https://huggingface.co/docs/hub/spaces-github-actions\">as per the documentation</a>) which contains a few example data of ~100 MB. I do a clean force push every time so there’s only a single commit on the Spaces repo. However, recently I started getting failing pushes. I assume this is because the LFS tracked assets are duplicated every force push, and not garbage collected internally. How can I repair this? \nUploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.\nbatch response: Reposi…\n </blockquote>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/hub/storage-limits\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/hub/storage-limits\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/f/3f13c6d0ad455fac9516b1c7edd35fc94c89d63a_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"FAF8F2\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/hub/storage-limits\" target=\"_blank\" rel=\"noopener\">Storage limits</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-01T12:45:00.165Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 15,
"readers_count": 14,
"score": 37.8,
"yours": false,
"topic_id": 153121,
"topic_slug": "403-error-private-repository-storage-limit-reached-quota-shows-space-remaining",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/hub/storage-limits",
"internal": false,
"reflection": false,
"title": "Storage limits",
"clicks": 4
},
{
"url": "https://discuss.huggingface.co/t/spaces-force-push-getting-repository-storage-limit-reached/130269",
"internal": true,
"reflection": false,
"title": "Spaces force push getting \"Repository storage limit reached\"",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219768,
"name": "Andrew J tokar",
"username": "Zelgodiz",
"avatar_template": "/user_avatar/discuss.huggingface.co/zelgodiz/{size}/45662_2.png",
"created_at": "2025-05-05T04:30:01.968Z",
"cooked": "<p>It looks like you’re encountering a <strong>quota discrepancy</strong> issue on Hugging Face, where your storage limit error doesn’t match the actual usage shown in the UI. This has been reported by other users as well<a href=\"https://github.com/huggingface/huggingface_hub/issues/3049?citationMarker=43dcd9a7-70db-4a1f-b0ae-981daa162054\" title=\"1\" rel=\"noopener nofollow ugc\">43dcd9a7-70db-4a1f-b0ae-981daa162054</a>.</p>\n<h3><a name=\"p-219768-possible-causes-1\" class=\"anchor\" href=\"#p-219768-possible-causes-1\"></a><strong>Possible Causes</strong></h3>\n<ol>\n<li><strong>Hidden Large Files (LFS)</strong> – Some files tracked via <strong>Git Large File Storage (LFS)</strong> may not be counted in the UI but still contribute to the storage limit.</li>\n<li><strong>Stale Storage Calculation</strong> – The quota display might not be updating in real-time, leading to outdated usage stats.</li>\n<li><strong>Repository-Level Limits</strong> – Even if your <strong>organization</strong> has space left, individual <strong>repositories</strong> may have separate limits.</li>\n<li><strong>Force Push Issues</strong> – If you’ve been force-pushing updates, old files may still be counted in storage even if they’re not visible.</li>\n</ol>\n<h3><a name=\"p-219768-potential-fixes-2\" class=\"anchor\" href=\"#p-219768-potential-fixes-2\"></a><strong>Potential Fixes</strong></h3>\n<ul>\n<li><strong>Check LFS Usage</strong>: Run this in Python to manually compute LFS file sizes:<pre data-code-wrap=\"python\"><code class=\"lang-python\">from huggingface_hub import HfApi\napi = HfApi()\nlfs_files = list(api.list_lfs_files(repo_id=\"your_repo\", repo_type=\"dataset\"))\ntotal_size = sum(file.size for file in lfs_files)\nprint(f\"Total LFS storage used: {total_size / (1024**3)} GB\")\n</code></pre>\n</li>\n<li><strong>Delete Unused Large Files</strong>: If LFS files are taking up space, remove them using:<pre data-code-wrap=\"bash\"><code class=\"lang-bash\">git lfs prune\n</code></pre>\n</li>\n<li><strong>Contact Hugging Face Support</strong>: If the issue persists, reach out via their <a href=\"https://github.com/huggingface/huggingface_hub/issues/3049\" rel=\"noopener nofollow ugc\">GitHub issue tracker</a> or <a href=\"https://discuss.huggingface.co/t/spaces-force-push-getting-repository-storage-limit-reached/130269\">Hugging Face forums</a>.</li>\n</ul>\n<p>Let me know if you need help troubleshooting further! <img src=\"https://emoji.discourse-cdn.com/apple/rocket.png?v=14\" title=\":rocket:\" class=\"emoji\" alt=\":rocket:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-05T04:30:01.968Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 9,
"readers_count": 8,
"score": 41.6,
"yours": false,
"topic_id": 153121,
"topic_slug": "403-error-private-repository-storage-limit-reached-quota-shows-space-remaining",
"display_username": "Andrew J tokar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/huggingface_hub/issues/3049?citationMarker=43dcd9a7-70db-4a1f-b0ae-981daa162054",
"internal": false,
"reflection": false,
"title": "Private repository storage limit reached - quota shows space remaining · Issue #3049 · huggingface/huggingface_hub · GitHub",
"clicks": 2
},
{
"url": "https://github.com/huggingface/huggingface_hub/issues/3049",
"internal": false,
"reflection": false,
"title": "Private repository storage limit reached - quota shows space remaining · Issue #3049 · huggingface/huggingface_hub · GitHub",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/spaces-force-push-getting-repository-storage-limit-reached/130269",
"internal": true,
"reflection": false,
"title": "Spaces force push getting \"Repository storage limit reached\"",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90984,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220056,
"name": "Théo Boyer",
"username": "Theob",
"avatar_template": "/user_avatar/discuss.huggingface.co/theob/{size}/30775_2.png",
"created_at": "2025-05-06T09:37:54.998Z",
"cooked": "<p>Solved! <a href=\"https://github.com/huggingface/huggingface_hub/issues/3048\" class=\"inline-onebox\" rel=\"noopener nofollow ugc\">“Private repository storage limit reached” — quota shows space remaining · Issue #3048 · huggingface/huggingface_hub · GitHub</a></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T09:37:54.998Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 7,
"readers_count": 6,
"score": 16.2,
"yours": false,
"topic_id": 153121,
"topic_slug": "403-error-private-repository-storage-limit-reached-quota-shows-space-remaining",
"display_username": "Théo Boyer",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/huggingface_hub/issues/3048",
"internal": false,
"reflection": false,
"title": "“Private repository storage limit reached” — quota shows space remaining · Issue #3048 · huggingface/huggingface_hub · GitHub",
"clicks": 17
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 30390,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121/4",
"reactions": [
{
"id": "confetti_ball",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220173,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-06T21:38:42.706Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-05-06T21:38:42.706Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 6,
"readers_count": 5,
"score": 1,
"yours": false,
"topic_id": 153121,
"topic_slug": "403-error-private-repository-storage-limit-reached-quota-shows-space-remaining",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/403-error-private-repository-storage-limit-reached-quota-shows-space-remaining/153121/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hi,<br>
I’m getting the following error when trying to push to my private dataset repo using <code>huggingface_hub</code>:</p>
<pre><code class="lang-auto">403 Forbidden: Private repository storage limit reached, please upgrade your plan...
</code></pre>
<p>However, when I check my organization quota on the Hugging Face UI, it shows we’re only using ~66 GB out of the 100 GB available.</p>
<p>Any advice on how to find the root cause of this discrepancy?</p>
<p>Thanks!</p>
|
<p>Solved! <a href="https://github.com/huggingface/huggingface_hub/issues/3048" class="inline-onebox" rel="noopener nofollow ugc">“Private repository storage limit reached” — quota shows space remaining · Issue #3048 · huggingface/huggingface_hub · GitHub</a></p>
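<p>If stale LFS history turns out to be the cause (a common one, though I cannot confirm it was the one in the linked issue), here is a hedged sketch for measuring what a repo actually holds at HEAD and, if needed, squashing its history. <code>repo_info</code> and <code>super_squash_history</code> are real <code>huggingface_hub</code> calls; the repo id is a placeholder:</p>
<pre><code class="lang-python"># Sketch under assumptions: recent huggingface_hub, authenticated user,
# and "your-org/your-dataset" as a hypothetical repo id.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "your-org/your-dataset"  # placeholder

# Total size of the files reachable from HEAD (roughly what the UI shows)
info = api.repo_info(repo_id, repo_type="dataset", files_metadata=True)
print(f"{sum(s.size or 0 for s in info.siblings) / 1024**3:.2f} GB at HEAD")

# Destructive: collapse all commits into one. This drops revision history
# and can release LFS blobs that only old commits still referenced.
# api.super_squash_history(repo_id=repo_id, repo_type="dataset")
</code></pre>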
|
Prepare dataset from YOLO format to COCO for DETR
|
https://discuss.huggingface.co/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894
| 34,894
| 9
|
2023-03-28T10:19:48.796000Z
|
[
{
"id": 62739,
"name": "Alberto Ruiz",
"username": "Alberto1404",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/90ced4/{size}.png",
"created_at": "2023-03-28T10:19:48.868Z",
"cooked": "<p>Hi. I would like to compare two nets using the same dataset, regardless being Transformer-based (DETR) vs Non-Transformer based (YOLOv5).<br>\nI have already trained a model using Yolov5, such that my dataset is already split into train-val-test, in YOLO format. See <a href=\"https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco\" rel=\"noopener nofollow ugc\">Formatting table</a> to visualize an example. My dataset folder looks like this:</p>\n<pre><code class=\"lang-auto\">.\n├── train\n └── images\n │ ├── ima1.png\n │ ├── ima2.png\n │ ├── ...\n └── labels\n │ ├── ima1.txt\n │ ├── ima2.txt\n │ ├── ...\n├── val\n └── images\n │ ├── ima3.png\n │ ├── ima4.png\n │ ├── ...\n └── labels\n │ ├── ima3.txt\n │ ├── ima4.txt\n │ ├── ...\n├── test\n └── images\n │ ├── ima5.png\n │ ├── ima6.png\n │ ├── ...\n └── labels\n │ ├── ima5.txt\n │ ├── ima6.txt\n │ ├── ...\n</code></pre>\n<p>Now I want to convert it to COCO format. From <a href=\"https://huggingface.co/docs/transformers/tasks/object_detection\">Hugging Face documentation</a>, DETR demands COCO format in labels, using JSON files. However, you are using a dataset loaded from Hugging Face datasets library. Moreover, I would like to know if I should create 3 JSON files, for each split, or 1 JSON file containing all. In the latter case, could you provide some documentation on how should the JSON file be defined?<br>\nIf there is any tutorial on how to prepare the data to feed DETR, based on my specs, it would be nice to post it here.<br>\nThank you for all!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2023-03-28T10:19:48.868Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4546,
"reads": 46,
"readers_count": 45,
"score": 22644.2,
"yours": false,
"topic_id": 34894,
"topic_slug": "prepare-dataset-from-yolo-format-to-coco-for-detr",
"display_username": "Alberto Ruiz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco",
"internal": false,
"reflection": false,
"title": "Bounding boxes augmentation for object detection - Albumentations Documentation",
"clicks": 36
},
{
"url": "https://huggingface.co/docs/transformers/tasks/object_detection",
"internal": false,
"reflection": false,
"title": "Object detection",
"clicks": 33
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 15008,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894/1",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 63053,
"name": "Alberto Ruiz",
"username": "Alberto1404",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/90ced4/{size}.png",
"created_at": "2023-03-30T16:59:48.991Z",
"cooked": "<h1>\n<a name=\"update-1\" class=\"anchor\" href=\"#update-1\"></a>Update</h1>\n<p>I did the following parser to convert it.</p>\n<pre><code class=\"lang-python\">import os\nimport json\nfrom PIL import Image\nfrom tqdm import tqdm\n\n\ndef yolo_to_coco(image_dir, label_dir, output_dir):\n\t# Define categories\n\tcategories = [{'id': 0, 'name': 'person'}]\n\n\t# Initialize data dict\n\tdata = {'train': [], 'validation': [], 'test': []}\n\n\t# Loop over splits\n\tfor split in ['train', 'validation', 'test']:\n\t\tsplit_data = {'info': {}, 'licenses': [], 'images': [], 'annotations': [], 'categories': categories}\n\n\t\t# Get image and label files for current split\n\t\timage_files = sorted(os.listdir(image_dir))\n\t\tlabel_files = sorted(os.listdir(label_dir))\n\n\t\t# Loop over images in current split\n\t\tcumulative_id = 0\n\t\twith tqdm(total=len(image_files), desc=f'Processing {split} images') as pbar:\n\t\t\tfor i, filename in enumerate(image_files):\n\t\t\t\timage_path = os.path.join(image_dir, filename)\n\t\t\t\tim = Image.open(image_path)\n\t\t\t\tim_id = i + 1\n\n\t\t\t\tsplit_data['images'].append({\n\t\t\t\t\t'id': im_id,\n\t\t\t\t\t'file_name': filename,\n\t\t\t\t\t'width': im.size[0],\n\t\t\t\t\t'height': im.size[1]\n\t\t\t\t})\n\n\t\t\t\t# Get labels for current image\n\t\t\t\tlabel_path = os.path.join(label_dir, os.path.splitext(filename)[0] + '.txt')\n\t\t\t\twith open(label_path, 'r') as f:\n\t\t\t\t\tyolo_data = f.readlines()\n\n\t\t\t\tfor line in yolo_data:\n\t\t\t\t\tclass_id, x_center, y_center, width, height = line.split()\n\t\t\t\t\tclass_id = int(class_id)\n\t\t\t\t\tbbox_x = (float(x_center) - float(width) / 2) * im.size[0]\n\t\t\t\t\tbbox_y = (float(y_center) - float(height) / 2) * im.size[1]\n\t\t\t\t\tbbox_width = float(width) * im.size[0]\n\t\t\t\t\tbbox_height = float(height) * im.size[1]\n\n\t\t\t\t\tsplit_data['annotations'].append({\n\t\t\t\t\t\t'id': cumulative_id,\n\t\t\t\t\t\t'image_id': im_id,\n\t\t\t\t\t\t'category_id': class_id,\n\t\t\t\t\t\t'bbox': [bbox_x, bbox_y, bbox_width, bbox_height],\n\t\t\t\t\t\t'area': bbox_width * bbox_height,\n\t\t\t\t\t\t'iscrowd': 0\n\t\t\t\t\t})\n\n\t\t\t\t\tcumulative_id += 1\n\n\t\t\t\tpbar.update(1)\n\n\t\tdata[split] = split_data\n\n\t# Save data to JSON files\n\tfor split in ['train', 'validation', 'test']:\n\t\tfilename = os.path.join(output_dir, f'{split}.json')\n\t\twith open(filename, 'w') as f:\n\t\t\tjson.dump({'data': data[split]}, f)\n\n\treturn data\n\nimage_dir = '/home/alberto/Dataset/train/images'\nlabel_dir = '/home/alberto/Dataset/train/labels'\noutput_dir = './'\ncoco_data = yolo_to_coco(image_dir, label_dir, output_dir)\n\n</code></pre>\n<p>However, when I want to load my dataset using:</p>\n<pre><code class=\"lang-python\">from datasets import load_dataset\ndata_files = {\n\t\"train\": '/home/alberto/Dataset/train/images/train_labels.json',\n\t\"validation\": '/home/alberto/Dataset/val/images/val_labels.json',\n\t\"test\": '/home/alberto/Dataset/val/images/test_labels.json'\n}\ndataset = load_dataset(\"json\", data_files=data_files)\n</code></pre>\n<p>Typing <code>dataset['train']</code> outputs that number of rows is 1, which is not correct. It should be 7000, the number of images in the train set. 
Does anybody know where the error is commited?<br>\nExample with subset of train set:<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/9/987d69ee5ab8bca0c6ba02ba77e58881ac92488c.png\" data-download-href=\"/uploads/short-url/lKZgWE3ZgJyQVaVByWBLUlrO06o.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/original/2X/9/987d69ee5ab8bca0c6ba02ba77e58881ac92488c.png\" alt=\"image\" data-base62-sha1=\"lKZgWE3ZgJyQVaVByWBLUlrO06o\" width=\"690\" height=\"197\" data-dominant-color=\"14323A\"><div class=\"meta\">\n<svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">916×262 36.9 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg>\n</div></a></div></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2023-03-31T07:29:16.824Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 474,
"reads": 45,
"readers_count": 44,
"score": 2399,
"yours": false,
"topic_id": 34894,
"topic_slug": "prepare-dataset-from-yolo-format-to-coco-for-detr",
"display_username": "Alberto Ruiz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://us1.discourse-cdn.com/hellohellohello/original/2X/9/987d69ee5ab8bca0c6ba02ba77e58881ac92488c.png",
"internal": false,
"reflection": false,
"title": "987d69ee5ab8bca0c6ba02ba77e58881ac92488c.png",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 15008,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 63655,
"name": "Alberto Ruiz",
"username": "Alberto1404",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/90ced4/{size}.png",
"created_at": "2023-04-04T12:20:54.348Z",
"cooked": "<p>In order to read it using <code>load_dataset</code>, it is a must to follow the same structure as defined<br>\n<a href=\"https://huggingface.co/docs/datasets/image_dataset#object-detection\">here</a></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2023-04-04T12:20:54.348Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 92,
"reads": 37,
"readers_count": 36,
"score": 467.4,
"yours": false,
"topic_id": 34894,
"topic_slug": "prepare-dataset-from-yolo-format-to-coco-for-detr",
"display_username": "Alberto Ruiz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/datasets/image_dataset#object-detection",
"internal": false,
"reflection": false,
"title": "Create an image dataset",
"clicks": 462
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 15008,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 15008,
"username": "Alberto1404",
"name": "Alberto Ruiz",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/a/90ced4/{size}.png"
},
"action_code": null,
"via_email": null
},
{
"id": 145731,
"name": "Daniyal Khan",
"username": "Daniyalkhan26",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/d/b5e925/{size}.png",
"created_at": "2024-07-23T10:01:20.744Z",
"cooked": "<p><a class=\"mention\" href=\"/u/alberto1404\">@Alberto1404</a> Have you find out the final script to convert from yolo format to coco for DETR? Have you resolved this issue\" typing <code>dataset['train']</code> outputs that number of rows is 1, which is not correct. It should be 7000, the number of images in the train set. Does anybody know where the error is commited?\"</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2024-07-23T10:01:20.744Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 11,
"reads": 18,
"readers_count": 17,
"score": 88.6,
"yours": false,
"topic_id": 34894,
"topic_slug": "prepare-dataset-from-yolo-format-to-coco-for-detr",
"display_username": "Daniyal Khan",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 2
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 58988,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 2
}
],
"current_user_reaction": null,
"reaction_users_count": 2,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220079,
"name": "RAOUNAK LOUDAD",
"username": "Godouche",
"avatar_template": "/user_avatar/discuss.huggingface.co/godouche/{size}/46990_2.png",
"created_at": "2025-05-06T12:03:48.957Z",
"cooked": "<p>could you please provide the solution to transform YOLO to COCO for DETR?</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-05-06T12:03:48.957Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 30.8,
"yours": false,
"topic_id": 34894,
"topic_slug": "prepare-dataset-from-yolo-format-to-coco-for-detr",
"display_username": "RAOUNAK LOUDAD",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 93025,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/prepare-dataset-from-yolo-format-to-coco-for-detr/34894/5",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
}
] |
<p>Hi. I would like to compare two nets using the same dataset, regardless of one being Transformer-based (DETR) and the other non-Transformer-based (YOLOv5).<br>
I have already trained a model using YOLOv5, so my dataset is already split into train-val-test, in YOLO format. See the <a href="https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco" rel="noopener nofollow ugc">Formatting table</a> for an example. My dataset folder looks like this:</p>
<pre><code class="lang-auto">.
├── train
└── images
│ ├── ima1.png
│ ├── ima2.png
│ ├── ...
└── labels
│ ├── ima1.txt
│ ├── ima2.txt
│ ├── ...
├── val
└── images
│ ├── ima3.png
│ ├── ima4.png
│ ├── ...
└── labels
│ ├── ima3.txt
│ ├── ima4.txt
│ ├── ...
├── test
└── images
│ ├── ima5.png
│ ├── ima6.png
│ ├── ...
└── labels
│ ├── ima5.txt
│ ├── ima6.txt
│ ├── ...
</code></pre>
<p>Now I want to convert it to COCO format. According to the <a href="https://huggingface.co/docs/transformers/tasks/object_detection">Hugging Face documentation</a>, DETR expects labels in COCO format, as JSON files. However, that guide uses a dataset loaded from the Hugging Face datasets library. Moreover, I would like to know whether I should create 3 JSON files, one for each split, or 1 JSON file containing all of them. In the latter case, could you provide some documentation on how the JSON file should be defined?<br>
If there is any tutorial on how to prepare the data to feed DETR with my setup, it would be nice to post it here.<br>
Thank you for everything!</p>
|
<p>In order to read it using <code>load_dataset</code>, the dataset must follow the structure defined<br>
<a href="https://huggingface.co/docs/datasets/image_dataset#object-detection">here</a></p>
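<p>To make the linked structure concrete, here is a sketch (my own, with hypothetical paths) that rewrites YOLO label files into the <code>metadata.jsonl</code> an <code>imagefolder</code> dataset expects, converting the normalized center-based YOLO boxes to absolute <code>[x, y, width, height]</code>. Per the linked docs, each split's image folder carries its own metadata file, which also answers the 3-vs-1 JSON question:</p>
<pre><code class="lang-python"># Sketch with an assumed layout: Dataset/train/images/*.png and
# Dataset/train/labels/*.txt, one label file per image.
import json
import os

from PIL import Image

split_dir = "Dataset/train"  # hypothetical path
images = os.path.join(split_dir, "images")
labels = os.path.join(split_dir, "labels")

with open(os.path.join(images, "metadata.jsonl"), "w") as out:
    for name in sorted(os.listdir(images)):
        if not name.endswith(".png"):
            continue  # also skips metadata.jsonl itself
        w, h = Image.open(os.path.join(images, name)).size
        bboxes, cats = [], []
        with open(os.path.join(labels, os.path.splitext(name)[0] + ".txt")) as f:
            for line in f:
                c, xc, yc, bw, bh = map(float, line.split())
                # YOLO normalized center box -> absolute [x, y, width, height]
                bboxes.append([(xc - bw / 2) * w, (yc - bh / 2) * h, bw * w, bh * h])
                cats.append(int(c))
        row = {"file_name": name, "objects": {"bbox": bboxes, "categories": cats}}
        out.write(json.dumps(row) + "\n")

# Afterwards, load_dataset("imagefolder", data_dir=images) yields one row per image.
</code></pre>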
|
The full dataset viewer is not available (click to read why). Only showing a preview of the rows
|
https://discuss.huggingface.co/t/the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows/153590
| 153,590
| 5
|
2025-05-05T14:53:31.649000Z
|
[
{
"id": 219886,
"name": "Bill",
"username": "mysocratesnote",
"avatar_template": "/user_avatar/discuss.huggingface.co/mysocratesnote/{size}/46167_2.png",
"created_at": "2025-05-05T14:53:31.718Z",
"cooked": "<p>I don’t know what happened here. For about 20-30 minutes <a href=\"https://huggingface.co/datasets/mysocratesnote/jfk-files-text\">the dataset card and data studio looked perfect</a> and was working including the ability to query with SQL but now I have this error message and nothing works.</p>\n<p>I was trying to add the metadata to my parquet file. It took several tries to get it right but maybe it was actually my 2nd to last try that was correct and the latest try is a disaster. Maybe I inadvertently over-wrote the good file.</p>\n<p>Can anyone assist with debugging this and help me figure out how to restore the good file?</p>\n<p>The correct file should have the following columns:</p>\n<p>column 1 - year<br>\ncolumn 2 - path<br>\ncolumn 3 - file_name<br>\ncolumn 4 - record_number<br>\ncolumn 5 - nara_release_date<br>\ncolumn 6 - formerly_withheld<br>\ncolumn 7 - agency<br>\ncolumn 8 - document_date<br>\ncolumn 9 - document_type<br>\ncolumn 10 - file_number<br>\ncolumn 11 - to_name<br>\ncolumn 12 - from_name<br>\ncolumn 13 - title<br>\ncolumn 14 - number_of_pages<br>\ncolumn 15 - originator<br>\ncolumn 16 - record_series<br>\ncolumn 17 - review_date<br>\ncolumn 18 - comments<br>\ncolumn 19 - pages_released<br>\ncolumn 20 - content</p>\n<p>The first file uploaded worked as well, it had only year, path, filename and content. These 16 new columns were inserted between filename and content.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-05T14:55:06.888Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 19,
"reads": 6,
"readers_count": 5,
"score": 111.2,
"yours": false,
"topic_id": 153590,
"topic_slug": "the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows",
"display_username": "Bill",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/datasets/mysocratesnote/jfk-files-text",
"internal": false,
"reflection": false,
"title": "mysocratesnote/jfk-files-text · Datasets at Hugging Face",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91697,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows/153590/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219935,
"name": "Bill",
"username": "mysocratesnote",
"avatar_template": "/user_avatar/discuss.huggingface.co/mysocratesnote/{size}/46167_2.png",
"created_at": "2025-05-05T19:11:08.441Z",
"cooked": "<p>Turns out uploading a .csv with a different number of columns even in a different directory broke it.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-05T19:11:08.441Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 153590,
"topic_slug": "the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows",
"display_username": "Bill",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91697,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows/153590/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 220026,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-06T07:11:25.083Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-06T07:11:25.083Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.8,
"yours": false,
"topic_id": 153590,
"topic_slug": "the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/the-full-dataset-viewer-is-not-available-click-to-read-why-only-showing-a-preview-of-the-rows/153590/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I don’t know what happened here. For about 20-30 minutes <a href="https://huggingface.co/datasets/mysocratesnote/jfk-files-text">the dataset card and data studio looked perfect</a> and were working, including the ability to query with SQL, but now I have this error message and nothing works.</p>
<p>I was trying to add the metadata to my parquet file. It took several tries to get it right, but maybe it was actually my second-to-last try that was correct and the latest try is a disaster. Maybe I inadvertently overwrote the good file.</p>
<p>Can anyone assist with debugging this and help me figure out how to restore the good file?</p>
<p>The correct file should have the following columns:</p>
<p>column 1 - year<br>
column 2 - path<br>
column 3 - file_name<br>
column 4 - record_number<br>
column 5 - nara_release_date<br>
column 6 - formerly_withheld<br>
column 7 - agency<br>
column 8 - document_date<br>
column 9 - document_type<br>
column 10 - file_number<br>
column 11 - to_name<br>
column 12 - from_name<br>
column 13 - title<br>
column 14 - number_of_pages<br>
column 15 - originator<br>
column 16 - record_series<br>
column 17 - review_date<br>
column 18 - comments<br>
column 19 - pages_released<br>
column 20 - content</p>
<p>The first file uploaded worked as well, it had only year, path, filename and content. These 16 new columns were inserted between filename and content.</p>
|
<p>Turns out that uploading a .csv with a different number of columns, even in a different directory, broke it.</p>
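<p>The takeaway being that the viewer (and <code>load_dataset</code>'s auto-detection) tries to fold every data file it finds into one schema. A hedged guard, with placeholder paths, is to declare the intended files explicitly:</p>
<pre><code class="lang-python"># Sketch: load only the intended parquet shards, so a stray CSV with a
# different schema cannot poison the split. Paths are hypothetical.
from datasets import load_dataset

ds = load_dataset("parquet", data_files={"train": "data/*.parquet"})
print(ds["train"].num_rows)
</code></pre>
<p>On the Hub side, the equivalent is listing <code>data_files</code> under <code>configs</code> in the dataset card YAML so unrelated directories are never scanned.</p>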
|
HF Playground Incorrect Billing -
|
https://discuss.huggingface.co/t/hf-playground-incorrect-billing/153328
| 153,328
| 5
|
2025-05-03T12:01:35.655000Z
|
[
{
"id": 219558,
"name": "Kwabena Anim",
"username": "KwabsHug",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/k/ba8739/{size}.png",
"created_at": "2025-05-03T12:01:35.766Z",
"cooked": "<p>Hello All, I was testing the HF playground and all my requests were only $0.20, I was testing in the window on the model page now my total is $9.08 (Model is Qwen/Qwen3-235B-A22B) Where can I find the HF Inference pricing and why is it so high? I got at best 10k tokens for price of Millions</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-03T12:11:46.503Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 23,
"reads": 8,
"readers_count": 7,
"score": 131.6,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "Kwabena Anim",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31391,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219616,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-03T23:07:53.607Z",
"cooked": "<p>It seems that the criteria have changed. In other words, when using large models, the cost per request becomes expensive.</p><aside class=\"quote\" data-post=\"3\" data-topic=\"149074\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/meganariley/48/20596_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/inference-api-cost-changed-for-meta-llama-3-3-70b/149074/3\">Inference API cost changed for meta-llama-3.3-70b?</a> <a class=\"badge-category__wrapper \" href=\"/c/inference-endpoints/64\"><span data-category-id=\"64\" style=\"--category-badge-color: #000000; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is to ask questions about Inference Endpoints, our production inference solution to easily deploy machine learning models hosted on the Hub.\"><span class=\"badge-category__name\">Inference Endpoints on the Hub</span></span></a>\n </div>\n <blockquote>\n In February, Inference billing usage had been a fixed rate while we added pay-as-you-go billing <a href=\"https://huggingface.co/posts/julien-c/158943939527784\">support</a>. Starting in March, usage now takes into account compute time x price of the hardware. We’re really sorry for any confusion! \nWe have more information about Inference Providers here: <a href=\"https://huggingface.co/docs/inference-providers/en/index\" class=\"inline-onebox\">Inference Providers</a>.\n </blockquote>\n</aside>\n\n<blockquote>\n<p>Starting in March, usage now takes into account compute time x price of the hardware</p>\n</blockquote>",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-03T23:07:53.607Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 6,
"readers_count": 5,
"score": 16.2,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/inference-api-cost-changed-for-meta-llama-3-3-70b/149074/3",
"internal": true,
"reflection": false,
"title": "Inference API cost changed for meta-llama-3.3-70b?",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219763,
"name": "Andrew J tokar",
"username": "Zelgodiz",
"avatar_template": "/user_avatar/discuss.huggingface.co/zelgodiz/{size}/45662_2.png",
"created_at": "2025-05-05T04:08:43.555Z",
"cooked": "<p>It sounds like the pricing jumped unexpectedly! Hugging Face’s inference costs can vary based on the model’s <strong>size, provider, and token usage</strong>. The <strong>Qwen/Qwen3-235B-A22B</strong> model is a <strong>Mixture-of-Experts (MoE) model</strong> with <strong>235 billion parameters</strong>, which means it can be significantly more expensive than smaller models<a href=\"https://llm-stats.com/models/qwen3-235b-a22b?citationMarker=43dcd9a7-70db-4a1f-b0ae-981daa162054\" title=\"1\" rel=\"noopener nofollow ugc\">43dcd9a7-70db-4a1f-b0ae-981daa162054</a>.</p>\n<h3><a name=\"p-219763-where-to-find-pricing-details-1\" class=\"anchor\" href=\"#p-219763-where-to-find-pricing-details-1\"></a><strong>Where to Find Pricing Details</strong></h3>\n<p>You can check Hugging Face’s official <strong>inference pricing</strong> on their <a href=\"https://huggingface.co/Qwen/Qwen3-235B-A22B\">model page</a> or explore detailed cost breakdowns on <a href=\"https://llm-stats.com/models/qwen3-235b-a22b\" rel=\"noopener nofollow ugc\">LLM Stats</a>.</p>\n<h3><a name=\"p-219763-why-the-cost-might-be-high-2\" class=\"anchor\" href=\"#p-219763-why-the-cost-might-be-high-2\"></a><strong>Why the Cost Might Be High</strong></h3>\n<ol>\n<li><strong>MoE Architecture</strong> – This model activates <strong>22 billion parameters</strong> per request, meaning it consumes more compute resources.</li>\n<li><strong>Token Pricing</strong> – Some models charge per <strong>million tokens</strong>, and if the pricing structure isn’t clear, it can lead to unexpected costs.</li>\n<li><strong>Inference Provider Differences</strong> – Different providers may have <strong>varying rates</strong>, so switching providers could help reduce costs.</li>\n<li><strong>Hidden Overhead</strong> – Some models require <strong>additional processing</strong> beyond just token generation, increasing the total price.</li>\n</ol>\n<h3><a name=\"p-219763-next-steps-3\" class=\"anchor\" href=\"#p-219763-next-steps-3\"></a><strong>Next Steps</strong></h3>\n<ul>\n<li><strong>Check the pricing breakdown</strong> on Hugging Face’s documentation.</li>\n<li><strong>Compare providers</strong> to see if a different one offers lower rates.</li>\n<li><strong>Limit token usage</strong> by adjusting your request length.</li>\n</ul>\n<p>If you need help optimizing your usage, I can suggest ways to reduce token consumption! <img src=\"https://emoji.discourse-cdn.com/apple/rocket.png?v=14\" title=\":rocket:\" class=\"emoji\" alt=\":rocket:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-05T04:08:43.555Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 1,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "Andrew J tokar",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://llm-stats.com/models/qwen3-235b-a22b",
"internal": false,
"reflection": false,
"title": null,
"clicks": 1
},
{
"url": "https://huggingface.co/Qwen/Qwen3-235B-A22B",
"internal": false,
"reflection": false,
"title": null,
"clicks": 1
},
{
"url": "https://llm-stats.com/models/qwen3-235b-a22b?citationMarker=43dcd9a7-70db-4a1f-b0ae-981daa162054",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90984,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219782,
"name": "Kwabena Anim",
"username": "KwabsHug",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/k/ba8739/{size}.png",
"created_at": "2025-05-05T06:26:22.561Z",
"cooked": "<p>Okay, so we are charged per time on HF inference API which means for now the solution is to use the other providers? Also is there a way to disable providers you dont want to use?</p>\n<p>Also is there a way to set a spending ceiling for my account?<br>\nIf I used R1 for the same task it wouldnt have cost this much through replicate for example.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/9/5/9571b8608de9aa5f4df96db66c4c365c1254a517.png\" data-download-href=\"/uploads/short-url/lk2MPlBUTTTUcJi7nG0RLRmjOVp.png?dl=1\" title=\"Screenshot 2025-05-03 184046\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/5/9571b8608de9aa5f4df96db66c4c365c1254a517_2_690x335.png\" alt=\"Screenshot 2025-05-03 184046\" data-base62-sha1=\"lk2MPlBUTTTUcJi7nG0RLRmjOVp\" width=\"690\" height=\"335\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/5/9571b8608de9aa5f4df96db66c4c365c1254a517_2_690x335.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/5/9571b8608de9aa5f4df96db66c4c365c1254a517_2_1035x502.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/9/5/9571b8608de9aa5f4df96db66c4c365c1254a517_2_1380x670.png 2x\" data-dominant-color=\"10141F\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">Screenshot 2025-05-03 184046</span><span class=\"informations\">1807×878 86.5 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-05T06:26:22.561Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 15.8,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "Kwabena Anim",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31391,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219795,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-05-05T07:28:40.182Z",
"cooked": "<p>The payment limit is set to $100 by default. (I think it was already there when I first joined HF.)<br>\nChanging this should be sufficient for personal use.</p>\n<p>Detailed limits for the Inference API can apparently be set for Enterprise subscriptions. Well, if multiple people are using it, it’s more convenient to have separate limits for each service.</p>\n<p>Individual on/off settings for Inference Providers can be configured on the settings page.</p><aside class=\"quote\" data-post=\"13\" data-topic=\"13239\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/meganariley/48/20596_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/13\">Inference API budget, billing limit</a> <a class=\"badge-category__wrapper \" href=\"/c/site-feedback/2\"><span data-category-id=\"2\" style=\"--category-badge-color: #808281; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any feedback you have for the Hugging Face team about this forum or the website in general. Let us know how we can improve them!\"><span class=\"badge-category__name\">Site Feedback</span></span></a>\n </div>\n <blockquote>\n Hi <a class=\"mention\" href=\"/u/john6666\">@John6666</a>, <a class=\"mention\" href=\"/u/filipptrigub\">@FilippTrigub</a>, and <a class=\"mention\" href=\"/u/im93\">@im93</a>! This feature now exists for Enterprise Hub organizations for Inference Providers billing! You can check out what setting a limit looks like in the screenshot here: <a href=\"https://huggingface.co/docs/inference-providers/en/pricing#organization-billing\" class=\"inline-onebox\">Pricing and Billing</a>. 
\nFor more info and to subscribe to Enterprise Hub, head here: <a href=\"https://huggingface.co/enterprise\" class=\"inline-onebox\">Enterprise Hub - Hugging Face</a>.\n </blockquote>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/inference-providers/pricing\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/inference-providers/pricing\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/9/49ea0920c7b377025bd26a49d8a827ed0471d7ee_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F2F0EA\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/inference-providers/pricing\" target=\"_blank\" rel=\"noopener\">Pricing and Billing</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<p>Edit:</p>\n<blockquote>\n<p>The payment limit is set to $100 by default</p>\n</blockquote>\n<p>Oh… It was wrong…</p><aside class=\"quote\" data-post=\"14\" data-topic=\"13239\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/meganariley/48/20596_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/14\">Inference API budget, billing limit</a> <a class=\"badge-category__wrapper \" href=\"/c/site-feedback/2\"><span data-category-id=\"2\" style=\"--category-badge-color: #808281; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any feedback you have for the Hugging Face team about this forum or the website in general. Let us know how we can improve them!\"><span class=\"badge-category__name\">Site Feedback</span></span></a>\n </div>\n <blockquote>\n <a class=\"mention\" href=\"/u/john6666\">@John6666</a> The $100 is the threshold limit and please note it doesn’t act as a spending cap. More info here: <a href=\"https://huggingface.co/docs/hub/billing#billing-thresholds--invoicing\" class=\"inline-onebox\">Billing</a>.\n </blockquote>\n</aside>\n",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-05-05T21:32:43.345Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 4,
"readers_count": 3,
"score": 30.8,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/14",
"internal": true,
"reflection": false,
"title": "Inference API budget, billing limit",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/13",
"internal": true,
"reflection": false,
"title": "Inference API budget, billing limit",
"clicks": 0
},
{
"url": "https://huggingface.co/docs/inference-providers/pricing",
"internal": false,
"reflection": false,
"title": "Pricing and Billing",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219939,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-05T19:28:48.453Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-05-05T19:28:48.453Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 153328,
"topic_slug": "hf-playground-incorrect-billing",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hf-playground-incorrect-billing/153328/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello All, I was testing the HF Playground and all my requests totaled only $0.20. I was testing in the window on the model page, and now my total is $9.08 (model: Qwen/Qwen3-235B-A22B). Where can I find the HF Inference pricing, and why is it so high? I got at best 10k tokens for the price of millions.</p>
|
<p>The payment limit is set to $100 by default. (I think it was already there when I first joined HF.)<br>
Changing this should be sufficient for personal use.</p>
<p>Detailed limits for the Inference API can apparently be set for Enterprise subscriptions. Well, if multiple people are using it, it’s more convenient to have separate limits for each service.</p>
<p>Individual on/off settings for Inference Providers can be configured on the settings page.</p><aside class="quote" data-post="13" data-topic="13239">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/meganariley/48/20596_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/13">Inference API budget, billing limit</a> <a class="badge-category__wrapper " href="/c/site-feedback/2"><span data-category-id="2" style="--category-badge-color: #808281; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="This category is for any feedback you have for the Hugging Face team about this forum or the website in general. Let us know how we can improve them!"><span class="badge-category__name">Site Feedback</span></span></a>
</div>
<blockquote>
Hi <a class="mention" href="/u/john6666">@John6666</a>, <a class="mention" href="/u/filipptrigub">@FilippTrigub</a>, and <a class="mention" href="/u/im93">@im93</a>! This feature now exists for Enterprise Hub organizations for Inference Providers billing! You can check out what setting a limit looks like in the screenshot here: <a href="https://huggingface.co/docs/inference-providers/en/pricing#organization-billing" class="inline-onebox">Pricing and Billing</a>.
For more info and to subscribe to Enterprise Hub, head here: <a href="https://huggingface.co/enterprise" class="inline-onebox">Enterprise Hub - Hugging Face</a>.
</blockquote>
</aside>
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/inference-providers/pricing">
<header class="source">
<a href="https://huggingface.co/docs/inference-providers/pricing" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/9/49ea0920c7b377025bd26a49d8a827ed0471d7ee_2_690x372.png" class="thumbnail" data-dominant-color="F2F0EA" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/inference-providers/pricing" target="_blank" rel="noopener">Pricing and Billing</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Edit:</p>
<blockquote>
<p>The payment limit is set to $100 by default</p>
</blockquote>
<p>Oh… It was wrong…</p><aside class="quote" data-post="14" data-topic="13239">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/meganariley/48/20596_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/inference-api-budget-billing-limit/13239/14">Inference API budget, billing limit</a> <a class="badge-category__wrapper " href="/c/site-feedback/2"><span data-category-id="2" style="--category-badge-color: #808281; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="This category is for any feedback you have for the Hugging Face team about this forum or the website in general. Let us know how we can improve them!"><span class="badge-category__name">Site Feedback</span></span></a>
</div>
<blockquote>
<a class="mention" href="/u/john6666">@John6666</a> The $100 is the threshold limit and please note it doesn’t act as a spending cap. More info here: <a href="https://huggingface.co/docs/hub/billing#billing-thresholds--invoicing" class="inline-onebox">Billing</a>.
</blockquote>
</aside>
|
Adding additional metadata columns to a .parquet file from .xlsx files
|
https://discuss.huggingface.co/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017
| 152,017
| 12
|
2025-04-23T18:50:05.289000Z
|
[
{
"id": 217777,
"name": "Bill",
"username": "mysocratesnote",
"avatar_template": "/user_avatar/discuss.huggingface.co/mysocratesnote/{size}/46167_2.png",
"created_at": "2025-04-23T18:50:05.356Z",
"cooked": "<p>I just created a <a href=\"https://huggingface.co/datasets/mysocratesnote/jfk-files-text\">data set</a> containing extracted text from the JFK Files.</p>\n<p>Each release had an accompanying <a href=\"https://github.com/noops888/jfk-files-text/tree/main/downloader_scripts/xlsx\" rel=\"noopener nofollow ugc\">.xlsx file</a> with a bunch of metadata including: Record Num, NARA Release Date, Formerly Withheld, Doc Date, Doc Type, Doc Type, File Num, To Name, From Name, Title, Num Pages, Originator, Record Series, Review Date, Comments, Pages Released</p>\n<p>Record Num - Record Number, also sometimes the filename less the extension but sometimes not.<br>\nNARA Release Date - Date archives(.)org released the file<br>\nFormerly Withheld - Reason for withholding the document<br>\nDoc Date - Original document date<br>\nDoc Type - Paper, audio tape, etc.<br>\nFile Num - File Number<br>\nTo Name - Who the document was addressed to<br>\nFrom Name - Who sent the document<br>\nTitle - Document title<br>\nNum Pages - Total number of pages in the document<br>\nOriginator - Where the document came from, often CIA or FBI<br>\nRecord Series - In this case they may all be ‘JFK’<br>\nReview Date - Date the document was reviewed for release<br>\nComments - Comments<br>\nPages Released - Number of pages released</p>\n<p>It seems like the parque format is ideal to attach all this meta data to the content of the files and while this initially looks like a straight forward task, it’s a bit more challenging because:</p>\n<ol>\n<li>\n<p>The same record number can refer to multiple files <em>and</em> a single file can have multiple record numbers.</p>\n</li>\n<li>\n<p>Sometimes the record number is the file name (less the extension), sometimes it’s a “dicid” (whatever that is) and sometimes the files follow no standard naming convention at all.</p>\n</li>\n<li>\n<p>Each release has a different format for the .xlsx files.</p>\n</li>\n<li>\n<p>2025 seems to have standardized on the record number for the file name and no .xlsx is provided. We only have filenames and NARA Release Date. But, many (maybe even all?) of these files were previously released (often with more redactions , blank or missing pages) and have meta data in the .xlsx files from previous releases.</p>\n</li>\n<li>\n<p>Many of the same files appear again and again in subsequent releases usually with additional pages and/or less redactions.</p>\n</li>\n<li>\n<p>The 2017-2018 release is by far the largest and many files appear twice within the same release.</p>\n</li>\n</ol>\n<p>This may be a trivial task for an experienced data scientist but it’s challenging for me therefore I’m reaching out to see if anyone can suggest the best approach.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-24T05:52:21.958Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 20,
"reads": 3,
"readers_count": 2,
"score": 115.6,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "Bill",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 3,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/noops888/jfk-files-text/tree/main/downloader_scripts/xlsx",
"internal": false,
"reflection": false,
"title": "jfk-files-text/downloader_scripts/xlsx at main · noops888/jfk-files-text · GitHub",
"clicks": 0
},
{
"url": "https://huggingface.co/datasets/mysocratesnote/jfk-files-text",
"internal": false,
"reflection": false,
"title": "mysocratesnote/jfk-files-text · Datasets at Hugging Face",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91697,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 217801,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-23T22:37:20.357Z",
"cooked": "<p>The xlsx format is often difficult to handle with software, so it would be better to convert it to csv (using Python or some kind of GUI tool) and then read it with the datasets library…</p>\n<p>Incidentally, it will be converted to parquet format when it is read.</p>\n<p>The text is small, so size is not really an issue, but I think it would be better to copy it for multiple references. Is there a good way to convert complex xlsx files…?</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://www.geeksforgeeks.org/convert-excel-to-csv-in-python/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/e/b/eb43f6eeac1480d83f476ebbc7b8ea0e3a29ec05.png\" class=\"site-icon\" data-dominant-color=\"2F8D46\" width=\"32\" height=\"32\">\n\n <a href=\"https://www.geeksforgeeks.org/convert-excel-to-csv-in-python/\" target=\"_blank\" rel=\"noopener\" title=\"12:33AM - 09 July 2020\">GeeksforGeeks – 9 Jul 20</a>\n </header>\n\n <article class=\"onebox-body\">\n <img width=\"200\" height=\"200\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/8/d/8da0a1c5a233ca0377b1baef8ba14b73fc9bd7d1.png\" class=\"thumbnail onebox-avatar\" data-dominant-color=\"D5E8DA\">\n\n<h3><a href=\"https://www.geeksforgeeks.org/convert-excel-to-csv-in-python/\" target=\"_blank\" rel=\"noopener\">Convert Excel to CSV in Python - GeeksforGeeks</a></h3>\n\n <p>Your All-in-One Learning Portal: GeeksforGeeks is a comprehensive educational platform that empowers learners across domains-spanning computer science and programming, school education, upskilling, commerce, software tools, competitive exams, and...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/datasets/en/loading\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/datasets/en/loading\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/3/5/35e852b936c2343e04e14f5d22299d4e04d553d8_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F8F5F0\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/datasets/en/loading\" target=\"_blank\" rel=\"noopener\">Load</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-23T22:37:20.357Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 25.6,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.geeksforgeeks.org/convert-excel-to-csv-in-python/",
"internal": false,
"reflection": false,
"title": "Convert Excel to CSV in Python | GeeksforGeeks",
"clicks": 0
},
{
"url": "https://huggingface.co/docs/datasets/en/loading",
"internal": false,
"reflection": false,
"title": "Load",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 217962,
"name": "Bill",
"username": "mysocratesnote",
"avatar_template": "/user_avatar/discuss.huggingface.co/mysocratesnote/{size}/46167_2.png",
"created_at": "2025-04-24T15:59:19.655Z",
"cooked": "<p>Hi again <a class=\"mention\" href=\"/u/john6666\">@John6666</a> converting to .csv is no problem using python or just saving it to CSV from Exel - there are only four files. They are large but not super massive. The problem arises from a few different issues, inconsistent format of the spreadsheet. Record numbers that refer to multiple files but also single files that have multiple record numbers. Duplicate file listings in the spreadsheets (probably due to the record number issue), and some bad data:</p>\n<p>34 files in the 2022 release and 5 files in the 2021 release tie to multiple record numbers listed in the .xlsx files which have more rows than unique file names (13,263 and 1,491 resptively). The <a href=\"https://www.archives.gov/files/research/jfk/national-archives-jfk-assassination-records-2017-2018-release.xlsx\" rel=\"noopener nofollow ugc\">2017-2018 release xlsx file</a>contains 6 bad links, but <a href=\"https://www.archives.gov/research/jfk/release-2017-2018\" rel=\"noopener nofollow ugc\">the 2017-2018 release website</a> lists two files not included in the xlsx in the /additional path. With two exceptions all .md files match up to .pdf files, the two exceptions match to .mp3 files.</p>\n<p>national-archives-jfk-assassination-records-2017-2018-release.xlsx (17 columns, 54,636 data rows, 1 header)</p>\n<p>Columns: File Name, Record Num, NARA Release Date, Formerly Withheld, Agency, Doc Date, Doc Type. File Num\tTo Name, From Name, Title, Num Pages, Originator, Record Series, Review Date, Comments, Pages Released.</p>\n<p>national-archives-jfk-assassination-records-2021-release.xlsx (16 columns, 1,491 data rows, 1 header)</p>\n<p>Columns: Record Number, File Title, NARA Release Date, Formerly Withheld, Document Date, Document Type, File Number., To, From, Title, Original Document Pages, Originator, Record Series, Review Date, Comments, Document Pages in PDF</p>\n<p>File Title is the same as File Name<br>\nDocument Pages in PDF is the same as Pages Released<br>\nAgency is missing (often the same as “Originator” but sometimes different).</p>\n<p>national-archives-jfk-assassination-records-2022-release.xlsx (16 columns, 13,264 data rows, 1 header)</p>\n<p>Columns: File Name, Record Num, NARA Release Date, Formerly Withheld, Doc Date, Doc Type, File Num\tTo Name, From Name,\tTitle, Num Pages, Originator, Record Series, Review Date, Comments, Pages Released</p>\n<p>Format looks the same as the first file but is missing “Agency”</p>\n<p>national-archives-jfk-assassination-records-2023-release.xlsx (17 columns, 2693 data rows, 1 header)</p>\n<p>Columns: File Name, Record Num, NARA Release Date, Formerly Withheld, Agency, Doc Date, Doc Type\tFile Num, To Name, From Name, Title, Num Pages, Originator, Record Series, Review Date, Comments, Pages Released</p>\n<p>Back to the first file’s format, Agency column is back but it’s blank for this release.</p>\n<p>2025-release.xlsx (2 columns, 2,566 data rows, 1 header)</p>\n<p>Columns: Record Number, NARA Release Date</p>\n<p>There was no .xlsx provided for 2025, this is the only available information from the website which mirrors the .xlsx for previous years.</p>\n<p>For an experienced developer I’m sure this is easy but I’m not sure how to go about because of all the inconsistencies and discrepancies. It’s not a simple 1:1 mapping. 
But, having all this metadata in the parque file and standardized as best as possible would definitely make for a much better data set.</p>\n<p>It would make sense to standardize on the column headings used in 3 out of the 4 files and to leave the columns blank where data wasn’t provided.</p>\n<p>If anyone can offer some advice on the best way to do this without introducing a bunch of data errors it would be much appreciated.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-24T15:59:19.655Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 3,
"readers_count": 2,
"score": 25.6,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "Bill",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://www.archives.gov/research/jfk/release-2017-2018",
"internal": false,
"reflection": false,
"title": "JFK Assassination Records - 2017-2018 Additional Documents Release | National Archives",
"clicks": 0
},
{
"url": "https://www.archives.gov/files/research/jfk/national-archives-jfk-assassination-records-2017-2018-release.xlsx",
"internal": false,
"reflection": false,
"title": null,
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91697,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 218079,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-25T03:21:47.447Z",
"cooked": "<p>I’m not a data scientist, so this is just a general observation, but when dealing with text-based data, it’s easier for the computer to process if you align the data to the larger number.<br>\nRegardless of whether individual data points exist or not, it’s best to add all possible columns to all data.</p>\n<p>And for complete irregularities like the mp3 part, it’s faster and more reliable to handle them manually. Just because you have the tools doesn’t mean you have to do it by hand—no one has decided that.</p>\n<hr>\n<p>by Hugging Chat: <a href=\"https://huggingface.co/chat/\" class=\"inline-onebox\">HuggingChat</a></p>\n<p>To standardize the inconsistent spreadsheet data from the JFK assassination records releases, follow this structured approach:</p>\n<h3><a name=\"p-218079-step-by-step-solution-1\" class=\"anchor\" href=\"#p-218079-step-by-step-solution-1\"></a>Step-by-Step Solution</h3>\n<ol>\n<li>\n<p><strong>Read and Load Data</strong></p>\n<ul>\n<li>Use Python’s <code>pandas</code> library to read each Excel file into a DataFrame.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import pandas as pd\n\nfiles = ['2017-2018.xlsx', '2021.xlsx', '2022.xlsx', '2023.xlsx', '2025.xlsx']\ndfs = []\nfor file in files:\n dfs.append(pd.read_excel(file))\n</code></pre>\n</li>\n<li>\n<p><strong>Standardize Column Names</strong></p>\n<ul>\n<li>Create a mapping dictionary to standardize column names across all files.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">column_mapping = {\n 'File Name': 'File Name',\n 'Record Num': 'Record Number',\n 'NARA Release Date': 'Release Date',\n 'Formerly Withheld': 'Withheld',\n 'Agency': 'Agency',\n 'Doc Date': 'Document Date',\n 'Doc Type': 'Document Type',\n 'File Num To Name': 'File Number',\n 'From Name': 'From',\n 'Title': 'Title',\n 'Num Pages': 'Pages',\n 'Originator': 'Originator',\n 'Record Series': 'Series',\n 'Review Date': 'Review Date',\n 'Comments': 'Comments',\n 'Pages Released': 'Released Pages'\n}\n</code></pre>\n<ul>\n<li>Apply the mapping to each DataFrame.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">for df in dfs:\n df.columns = [column_mapping.get(col, col) for col in df.columns]\n</code></pre>\n</li>\n<li>\n<p><strong>Handle Missing Columns</strong></p>\n<ul>\n<li>Ensure all DataFrames have the same columns by adding missing ones with <code>NaN</code> where data is unavailable.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">all_columns = set()\nfor df in dfs:\n all_columns.update(df.columns)\nall_columns = list(all_columns)\n\nfor df in dfs:\n missing_cols = [col for col in all_columns if col not in df.columns]\n for col in missing_cols:\n df[col] = pd.NA\n</code></pre>\n</li>\n<li>\n<p><strong>Identify and Remove Duplicates</strong></p>\n<ul>\n<li>Check for duplicates based on key columns like ‘Record Number’ or ‘File Name’.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">for df in dfs:\n df.drop_duplicates(subset=['Record Number', 'File Name'], inplace=True)\n</code></pre>\n</li>\n<li>\n<p><strong>Clean Data and Validate Paths</strong></p>\n<ul>\n<li>Validate file paths and correct extensions where necessary.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">import os\n\nfor df in dfs:\n for index, row in df.iterrows():\n file_path = row['File Name']\n if not os.path.exists(file_path):\n # Handle invalid path; log or flag\n print(f\"Invalid path: {file_path}\")\n # Correct 
extension for known exceptions\n if file_path.endswith('.md') and row['File Type'] == 'mp3':\n df.at[index, 'File Name'] = file_path.replace('.md', '.mp3')\n</code></pre>\n</li>\n<li>\n<p><strong>Concatenate DataFrames</strong></p>\n<ul>\n<li>Combine all DataFrames into a single standardized DataFrame.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">combined_df = pd.concat(dfs, ignore_index=True)\n</code></pre>\n</li>\n<li>\n<p><strong>Save to Standardized CSV</strong></p>\n<ul>\n<li>Write the standardized data to a CSV file.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">combined_df.to_csv('standardized_jfk_records.csv', index=False)\n</code></pre>\n</li>\n</ol>\n<h3><a name=\"p-218079-considerations-and-recommendations-2\" class=\"anchor\" href=\"#p-218079-considerations-and-recommendations-2\"></a>Considerations and Recommendations</h3>\n<ul>\n<li><strong>Data Type Handling</strong>: Ensure consistent data types, especially for dates and numeric fields.</li>\n<li><strong>Testing and Validation</strong>: Regularly inspect DataFrames to verify data integrity post-processing.</li>\n<li><strong>Exception Handling</strong>: Implement logging for discrepancies and edge cases encountered.</li>\n</ul>\n<p>This approach systematically addresses the inconsistencies and discrepancies in the data, ensuring a standardized and clean dataset is produced.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-25T03:21:47.447Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 3,
"readers_count": 2,
"score": 30.6,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/chat/",
"internal": false,
"reflection": false,
"title": "HuggingChat",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218099,
"name": "Bill",
"username": "mysocratesnote",
"avatar_template": "/user_avatar/discuss.huggingface.co/mysocratesnote/{size}/46167_2.png",
"created_at": "2025-04-25T06:39:46.293Z",
"cooked": "<p>That sounds like a very logical approach that will address all the issues, except the duplicate file listings which are multiple record numbers that apply to the same file. That needs to get into the final data. I guess the inverse were multiple files have the same record number would sort itself out automatically. You’re right mp3 and the few broken links can be handled manually.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-25T06:39:46.293Z",
"reply_count": 0,
"reply_to_post_number": 4,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 30.8,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "Bill",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91697,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 219883,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-05T14:32:31.129Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-05-05T14:32:31.129Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 2,
"readers_count": 1,
"score": 5.4,
"yours": false,
"topic_id": 152017,
"topic_slug": "adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/adding-additional-metadata-columns-to-a-parque-file-from-xlsx-files/152017/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I just created a <a href="https://huggingface.co/datasets/mysocratesnote/jfk-files-text">data set</a> containing extracted text from the JFK Files.</p>
<p>Each release had an accompanying <a href="https://github.com/noops888/jfk-files-text/tree/main/downloader_scripts/xlsx" rel="noopener nofollow ugc">.xlsx file</a> with a bunch of metadata including: Record Num, NARA Release Date, Formerly Withheld, Doc Date, Doc Type, File Num, To Name, From Name, Title, Num Pages, Originator, Record Series, Review Date, Comments, Pages Released</p>
<p>Record Num - Record Number, also sometimes the filename less the extension but sometimes not.<br>
NARA Release Date - Date archives(.)org released the file<br>
Formerly Withheld - Reason for withholding the document<br>
Doc Date - Original document date<br>
Doc Type - Paper, audio tape, etc.<br>
File Num - File Number<br>
To Name - Who the document was addressed to<br>
From Name - Who sent the document<br>
Title - Document title<br>
Num Pages - Total number of pages in the document<br>
Originator - Where the document came from, often CIA or FBI<br>
Record Series - In this case they may all be ‘JFK’<br>
Review Date - Date the document was reviewed for release<br>
Comments - Comments<br>
Pages Released - Number of pages released</p>
<p>It seems like the Parquet format is ideal for attaching all this metadata to the content of the files, and while this initially looks like a straightforward task, it’s a bit more challenging because:</p>
<ol>
<li>
<p>The same record number can refer to multiple files <em>and</em> a single file can have multiple record numbers (see the sketch after this list).</p>
</li>
<li>
<p>Sometimes the record number is the file name (less the extension), sometimes it’s a “dicid” (whatever that is) and sometimes the files follow no standard naming convention at all.</p>
</li>
<li>
<p>Each release has a different format for the .xlsx files.</p>
</li>
<li>
<p>2025 seems to have standardized on the record number for the file name and no .xlsx is provided. We only have filenames and the NARA Release Date. But many (maybe even all?) of these files were previously released (often with more redactions, blank or missing pages) and have metadata in the .xlsx files from previous releases.</p>
</li>
<li>
<p>Many of the same files appear again and again in subsequent releases, usually with additional pages and/or fewer redactions.</p>
</li>
<li>
<p>The 2017-2018 release is by far the largest and many files appear twice within the same release.</p>
</li>
</ol>
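<p>To make the shape of the problem concrete, here is a minimal sketch with made-up record and file values (everything in it is hypothetical, purely to illustrate the many-to-many mapping from point 1):</p>
<pre data-code-wrap="python"><code class="lang-python">import pandas as pd

# Hypothetical rows: one record number spanning two files,
# and one file carrying two record numbers.
rows = [
    {'Record Num': '104-10001-10002', 'File Name': '104-10001-10002.pdf'},
    {'Record Num': '104-10001-10002', 'File Name': 'untitled-scan-17.pdf'},
    {'Record Num': '104-10001-10003', 'File Name': 'untitled-scan-17.pdf'},
]
print(pd.DataFrame(rows))
</code></pre>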
<p>This may be a trivial task for an experienced data scientist, but it’s challenging for me, so I’m reaching out to see if anyone can suggest the best approach.</p>
|
<p>I’m not a data scientist, so this is just a general observation, but when dealing with text-based data, it’s easier for the computer to process if you align every record to the larger schema, i.e. the release with the most columns.<br>
Regardless of whether individual data points exist or not, it’s best to add all possible columns to all data.</p>
<p>And for complete irregularities like the mp3 files, it’s faster and more reliable to handle them manually. Having the tools doesn’t mean everything has to be automated; no one has decided that.</p>
<hr>
<p>by Hugging Chat: <a href="https://huggingface.co/chat/" class="inline-onebox">HuggingChat</a></p>
<p>To standardize the inconsistent spreadsheet data from the JFK assassination records releases, follow this structured approach:</p>
<h3><a name="p-218079-step-by-step-solution-1" class="anchor" href="#p-218079-step-by-step-solution-1"></a>Step-by-Step Solution</h3>
<ol>
<li>
<p><strong>Read and Load Data</strong></p>
<ul>
<li>Use Python’s <code>pandas</code> library to read each Excel file into a DataFrame.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">import pandas as pd

files = ['2017-2018.xlsx', '2021.xlsx', '2022.xlsx', '2023.xlsx', '2025.xlsx']
dfs = []
for file in files:
dfs.append(pd.read_excel(file))
</code></pre>
</li>
<li>
<p><strong>Standardize Column Names</strong></p>
<ul>
<li>Create a mapping dictionary to standardize column names across all files.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">column_mapping = {
'File Name': 'File Name',
'Record Num': 'Record Number',
'NARA Release Date': 'Release Date',
'Formerly Withheld': 'Withheld',
'Agency': 'Agency',
'Doc Date': 'Document Date',
'Doc Type': 'Document Type',
'File Num To Name': 'File Number',
'From Name': 'From',
'Title': 'Title',
'Num Pages': 'Pages',
'Originator': 'Originator',
'Record Series': 'Series',
'Review Date': 'Review Date',
'Comments': 'Comments',
'Pages Released': 'Released Pages'
}
</code></pre>
<ul>
<li>Apply the mapping to each DataFrame.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">for df in dfs:
df.columns = [column_mapping.get(col, col) for col in df.columns]
</code></pre>
</li>
<li>
<p><strong>Handle Missing Columns</strong></p>
<ul>
<li>Ensure all DataFrames have the same columns by adding missing ones with <code>NaN</code> where data is unavailable.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">all_columns = set()
for df in dfs:
all_columns.update(df.columns)
all_columns = list(all_columns)

for df in dfs:
missing_cols = [col for col in all_columns if col not in df.columns]
for col in missing_cols:
df[col] = pd.NA
</code></pre>
</li>
<li>
<p><strong>Identify and Remove Duplicates</strong></p>
<ul>
<li>Check for duplicates based on key columns like ‘Record Number’ or ‘File Name’. Deduplicating on the pair keeps rows where the same file appears under a different record number (see the sketch after this list).</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python"># Drop only exact (Record Number, File Name) pairs; a file listed
# under several record numbers keeps one row per pair.
for df in dfs:
    df.drop_duplicates(subset=['Record Number', 'File Name'], inplace=True)
</code></pre>
</li>
<li>
<p><strong>Clean Data and Validate Paths</strong></p>
<ul>
<li>Validate file paths and correct extensions where necessary.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">import os
for df in dfs:
for index, row in df.iterrows():
file_path = row['File Name']
if not os.path.exists(file_path):
# Handle invalid path; log or flag
print(f"Invalid path: {file_path}")
# Correct extension for known exceptions
if file_path.endswith('.md') and row['File Type'] == 'mp3':
df.at[index, 'File Name'] = file_path.replace('.md', '.mp3')
</code></pre>
</li>
<li>
<p><strong>Concatenate DataFrames</strong></p>
<ul>
<li>Combine all DataFrames into a single standardized DataFrame.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">combined_df = pd.concat(dfs, ignore_index=True)
</code></pre>
</li>
<li>
<p><strong>Save to Standardized CSV</strong></p>
<ul>
<li>Write the standardized data to a CSV file.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">combined_df.to_csv('standardized_jfk_records.csv', index=False)
</code></pre>
</li>
</ol>
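<p>One caveat raised later in the thread: a file can legitimately carry several record numbers, and step 4 must not silently lose that information. Below is a minimal sketch (not part of the original answer; it assumes the standardized column names from step 2 and the <code>combined_df</code> from step 6) that collects every record number attached to a file into a list column, so the many-to-many relationship survives deduplication:</p>
<pre data-code-wrap="python"><code class="lang-python"># Gather all record numbers that point at the same file so that
# no alternate identifier is lost when exact duplicates are dropped.
records_per_file = (
    combined_df.groupby('File Name')['Record Number']
    .apply(lambda s: sorted(set(s.dropna())))
    .rename('Record Numbers')
    .reset_index()
)
combined_df = combined_df.merge(records_per_file, on='File Name', how='left')
</code></pre>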
<h3><a name="p-218079-considerations-and-recommendations-2" class="anchor" href="#p-218079-considerations-and-recommendations-2"></a>Considerations and Recommendations</h3>
<ul>
<li><strong>Data Type Handling</strong>: Ensure consistent data types, especially for dates and numeric fields.</li>
<li><strong>Testing and Validation</strong>: Regularly inspect DataFrames to verify data integrity post-processing.</li>
<li><strong>Exception Handling</strong>: Implement logging for discrepancies and edge cases encountered.</li>
</ul>
<p>This approach systematically addresses the inconsistencies and discrepancies in the data, ensuring a standardized and clean dataset is produced.</p>
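<p>Since the goal of the thread is a Parquet file rather than CSV, a hedged variant of step 7 is sketched below; it is not part of the original answer and assumes <code>pyarrow</code> (or <code>fastparquet</code>) is installed for pandas. The <code>datasets</code> route is also shown, since loading the data there converts it to Arrow form anyway:</p>
<pre data-code-wrap="python"><code class="lang-python"># Write the standardized table directly as Parquet
combined_df.to_parquet('standardized_jfk_records.parquet', index=False)

# Or go through the datasets library, which stores data as Arrow/Parquet
from datasets import Dataset

ds = Dataset.from_pandas(combined_df)
ds.to_parquet('standardized_jfk_records_hf.parquet')
</code></pre>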
|
Why `inv_freq` when computing frequencies for RoPE
|
https://discuss.huggingface.co/t/why-inv-freq-when-computing-frequencies-for-rope/153106
| 153,106
| 9
|
2025-05-01T09:58:34.624000Z
|
[
{
"id": 219283,
"name": "Ye Zhiling",
"username": "yzlnew",
"avatar_template": "/user_avatar/discuss.huggingface.co/yzlnew/{size}/46705_2.png",
"created_at": "2025-05-01T09:58:34.687Z",
"cooked": "<p>I’m getting confused at the naming here,</p>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\"> # Compute the inverse frequencies\n inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.int64).to(device=device, dtype=torch.float) / dim))\n return inv_freq, attention_factor\n</code></pre>\n<p>This <code>inv_freq</code> is actually meaning frequencies for each dimension for RoPE. What does <code>inv</code> mean here?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-01T09:58:34.687Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 72,
"reads": 3,
"readers_count": 2,
"score": 365.6,
"yours": false,
"topic_id": 153106,
"topic_slug": "why-inv-freq-when-computing-frequencies-for-rope",
"display_username": "Ye Zhiling",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 92540,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-inv-freq-when-computing-frequencies-for-rope/153106/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219298,
"name": "SunnyAiNetwork",
"username": "HaruthaiAi",
"avatar_template": "/user_avatar/discuss.huggingface.co/haruthaiai/{size}/46814_2.png",
"created_at": "2025-05-01T11:41:22.031Z",
"cooked": "<p><strong>Reply to yzlnew on ‘Why <code>inv_freq</code> when computing frequencies for RoPE’</strong></p>\n<p>Hi <a class=\"mention\" href=\"/u/yzlnew\">@yzlnew</a>! Great question — this is a common source of confusion when diving into RoPE implementation details. Let me break it down clearly:</p>\n<h3><a name=\"p-219298-what-is-inv_freq-in-the-context-of-rope-1\" class=\"anchor\" href=\"#p-219298-what-is-inv_freq-in-the-context-of-rope-1\"></a>What is <code>inv_freq</code> in the context of RoPE?</h3>\n<p>In most implementations of <strong>Rotary Positional Embeddings (RoPE)</strong>, the <code>inv_freq</code> refers to the <strong>inverse frequency</strong> used to compute the positional encodings for each embedding dimension. It’s inspired by the same idea behind sinusoidal embeddings in the original Transformer paper, where different dimensions of the input are assigned sinusoidal functions with different wavelengths.</p>\n<h3><a name=\"p-219298-why-inverse-frequency-2\" class=\"anchor\" href=\"#p-219298-why-inverse-frequency-2\"></a>Why “inverse” frequency?</h3>\n<p>The key lies in this line:</p>\n<pre><code class=\"lang-auto\">inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2) / dim))\n</code></pre>\n<p>This gives you a <strong>vector of inverse frequencies</strong> — meaning <strong>higher frequency values (shorter wavelengths) for lower dimensions</strong>, and <strong>lower frequency values (longer wavelengths) for higher dimensions</strong>.</p>\n<p>So for example:</p>\n<ul>\n<li>At <code>dim=0</code>, you might have an inv_freq like <code>1/10000^0 = 1</code></li>\n<li>At <code>dim=2</code>, you get <code>1/10000^(2/dim)</code>, and so on…</li>\n</ul>\n<p>This mirrors the <strong>logarithmic spacing</strong> of frequencies, enabling smooth interpolation and generalization across positions.</p>\n<p>Then, when you later multiply <code>position_ids * inv_freq</code>, you get a phase angle for each position, which is passed to <code>sin()</code> and <code>cos()</code> to rotate the query/key vectors — hence the term <strong>“rotary”</strong>.</p>\n<hr>\n<h3><a name=\"p-219298-summary-3\" class=\"anchor\" href=\"#p-219298-summary-3\"></a>Summary:</h3>\n<ul>\n<li><code>inv_freq</code> = inverse frequency per dimension</li>\n<li>Used in sinusoidal-style rotary embedding</li>\n<li>It encodes how fast each dimension rotates across position</li>\n<li>Not a literal “frequency”, but a mathematically convenient inverse scale for phase calculation</li>\n</ul>\n<p>Let me know if you’d like a visual intuition or derivation behind the rotational aspect of RoPE — happy to elaborate!</p>\n<p>Cheers,<br>\n<strong>Haruthai AI (Sunny)</strong></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-01T11:41:22.031Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 3,
"readers_count": 2,
"score": 20.6,
"yours": false,
"topic_id": 153106,
"topic_slug": "why-inv-freq-when-computing-frequencies-for-rope",
"display_username": "SunnyAiNetwork",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 85573,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-inv-freq-when-computing-frequencies-for-rope/153106/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219512,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-03T01:22:58.384Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-03T01:22:58.384Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 153106,
"topic_slug": "why-inv-freq-when-computing-frequencies-for-rope",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/why-inv-freq-when-computing-frequencies-for-rope/153106/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m getting confused by the naming here:</p>
<pre data-code-wrap="python"><code class="lang-python"> # Compute the inverse frequencies
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.int64).to(device=device, dtype=torch.float) / dim))
return inv_freq, attention_factor
</code></pre>
<p>This <code>inv_freq</code> actually holds the frequencies used for each dimension in RoPE. What does <code>inv</code> mean here?</p>
|
<p><strong>Reply to yzlnew on ‘Why <code>inv_freq</code> when computing frequencies for RoPE’</strong></p>
<p>Hi <a class="mention" href="/u/yzlnew">@yzlnew</a>! Great question — this is a common source of confusion when diving into RoPE implementation details. Let me break it down clearly:</p>
<h3><a name="p-219298-what-is-inv_freq-in-the-context-of-rope-1" class="anchor" href="#p-219298-what-is-inv_freq-in-the-context-of-rope-1"></a>What is <code>inv_freq</code> in the context of RoPE?</h3>
<p>In most implementations of <strong>Rotary Positional Embeddings (RoPE)</strong>, the <code>inv_freq</code> refers to the <strong>inverse frequency</strong> used to compute the positional encodings for each embedding dimension. It’s inspired by the same idea behind sinusoidal embeddings in the original Transformer paper, where different dimensions of the input are assigned sinusoidal functions with different wavelengths.</p>
<h3><a name="p-219298-why-inverse-frequency-2" class="anchor" href="#p-219298-why-inverse-frequency-2"></a>Why “inverse” frequency?</h3>
<p>The key lies in this line:</p>
<pre><code class="lang-auto">inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2) / dim))
</code></pre>
<p>This gives you a <strong>vector of inverse frequencies</strong>: <strong>higher frequency values (shorter wavelengths) at lower dimension indices</strong>, and <strong>lower frequency values (longer wavelengths) at higher dimension indices</strong>.</p>
<p>So for example:</p>
<ul>
<li>At index <code>i=0</code>, you get <code>1/10000^(0/dim) = 1</code></li>
<li>At index <code>i=2</code>, you get <code>1/10000^(2/dim)</code>, and so on…</li>
</ul>
<p>The frequencies are thus <strong>geometrically spaced</strong> (evenly spaced on a logarithmic scale), enabling smooth interpolation and generalization across positions.</p>
<p>Then, when you later compute <code>position_ids * inv_freq</code>, you get a phase angle for each (position, dimension pair), which is passed to <code>sin()</code> and <code>cos()</code> to rotate the query/key vectors, hence the term <strong>“rotary”</strong>.</p>
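<p>Here is a minimal, self-contained sketch of those two steps. The sizes and variable names are illustrative and not tied to any particular library implementation:</p>
<pre data-code-wrap="python"><code class="lang-python">import torch

dim, base = 8, 10000.0
# Reciprocal of base**(i/dim) for even i: one angular frequency per dimension pair.
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float) / dim))

position_ids = torch.arange(6, dtype=torch.float)  # positions 0..5
angles = torch.outer(position_ids, inv_freq)       # phase angle per (position, pair)

# sin/cos of these angles are what rotate the query/key vector pairs.
sin, cos = angles.sin(), angles.cos()
print(inv_freq)  # strictly decreasing: low dims rotate fast, high dims slowly
</code></pre>
<p>Printing <code>inv_freq</code> makes the naming concrete: each entry is the reciprocal of <code>base**(i/dim)</code>, and the phase angle for a given pair grows linearly with position at that rate.</p>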
<hr>
<h3><a name="p-219298-summary-3" class="anchor" href="#p-219298-summary-3"></a>Summary:</h3>
<ul>
<li><code>inv_freq</code> = inverse frequency per dimension</li>
<li>Used in sinusoidal-style rotary embedding</li>
<li>It encodes how fast each dimension rotates across position</li>
<li>The name reflects that it is computed as the reciprocal of <code>base^(i/dim)</code>, a wavelength-like term; numerically it acts as the angular frequency (radians per position step) in the phase calculation</li>
</ul>
<p>Let me know if you’d like a visual intuition or derivation behind the rotational aspect of RoPE — happy to elaborate!</p>
<p>Cheers,<br>
<strong>Haruthai AI (Sunny)</strong></p>
|
HFAPIModel pricing
|
https://discuss.huggingface.co/t/hfapimodel-pricing/153001
| 153,001
| 64
|
2025-04-30T10:39:47.795000Z
|
[
{
"id": 219157,
"name": "Giuseppe Boezio",
"username": "gboezio",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/g/f14d63/{size}.png",
"created_at": "2025-04-30T10:39:47.855Z",
"cooked": "<p>I am using smolagents library with HfAPIModel. Where can I find the pricing related to the models I can use with it? Do I pay based on tokens or amount of requests?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-04-30T10:39:47.855Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 57,
"reads": 7,
"readers_count": 6,
"score": 301.4,
"yours": false,
"topic_id": 153001,
"topic_slug": "hfapimodel-pricing",
"display_username": "Giuseppe Boezio",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89270,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hfapimodel-pricing/153001/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219174,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-30T12:10:12.190Z",
"cooked": "<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/9/49ea0920c7b377025bd26a49d8a827ed0471d7ee_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F2F0EA\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost\" target=\"_blank\" rel=\"noopener\">Pricing and Billing</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<p>\nProbably the number of requests multiplied by the price of the GPU used for that model. For exact details, please consult Hugging Face. <a href=\"mailto:[email protected]\">[email protected]</a></p>",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-05-01T15:19:55.354Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 7,
"readers_count": 6,
"score": 16.4,
"yours": false,
"topic_id": 153001,
"topic_slug": "hfapimodel-pricing",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost",
"internal": false,
"reflection": false,
"title": "Pricing and Billing",
"clicks": 5
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hfapimodel-pricing/153001/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219404,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-02T08:00:24.283Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-05-02T08:00:24.283Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 153001,
"topic_slug": "hfapimodel-pricing",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/hfapimodel-pricing/153001/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am using the smolagents library with HfAPIModel. Where can I find the pricing for the models I can use with it? Do I pay based on tokens or on the number of requests?</p>
|
<aside class="onebox allowlistedgeneric" data-onebox-src="https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost">
<header class="source">
<a href="https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost" target="_blank" rel="noopener">huggingface.co</a>
</header>
<article class="onebox-body">
<div class="aspect-image" style="--aspect-ratio:690/372;"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/9/49ea0920c7b377025bd26a49d8a827ed0471d7ee_2_690x372.png" class="thumbnail" data-dominant-color="F2F0EA" width="690" height="372"></div>
<h3><a href="https://huggingface.co/docs/inference-providers/en/pricing#hf-inference-cost" target="_blank" rel="noopener">Pricing and Billing</a></h3>
<p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<p>Most likely you are billed per request, multiplied by the price of the GPU used for that model. For exact details, please consult Hugging Face: <a href="mailto:[email protected]">[email protected]</a></p>
|
Server-side problems
|
https://discuss.huggingface.co/t/server-side-problems/150852
| 150,852
| 24
|
2025-04-16T15:40:07.811000Z
|
[
{
"id": 216187,
"name": "Edward J. Schwartz",
"username": "ejschwartz",
"avatar_template": "/user_avatar/discuss.huggingface.co/ejschwartz/{size}/16902_2.png",
"created_at": "2025-04-16T15:40:07.883Z",
"cooked": "<p>I’ve encountered two strange errors in a short period of time.</p>\n<p>Space: <a href=\"https://huggingface.co/spaces/ejschwartz/aidapal-space\" class=\"inline-onebox\">Aidapal Space - a Hugging Face Space by ejschwartz</a></p>\n<h2><a name=\"p-216187-first-problem-1\" class=\"anchor\" href=\"#p-216187-first-problem-1\"></a>First problem</h2>\n<p>I created a new space. I committed <code>app.py</code> and pushed, and got an error that was roughly “Unable to find app.py” in the runtime logs.</p>\n<h2><a name=\"p-216187-second-problem-2\" class=\"anchor\" href=\"#p-216187-second-problem-2\"></a>Second problem</h2>\n<p>I just added and committed requirements.txt and received the following build error.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490.png\" data-download-href=\"/uploads/short-url/5d8moTTYt6QthjUrokSustxEd8Y.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_690x362.png\" alt=\"image\" data-base62-sha1=\"5d8moTTYt6QthjUrokSustxEd8Y\" width=\"690\" height=\"362\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_690x362.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_1035x543.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_1380x724.png 2x\" data-dominant-color=\"121724\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1388×730 99.5 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<h2><a name=\"p-216187-conclusion-3\" class=\"anchor\" href=\"#p-216187-conclusion-3\"></a>Conclusion</h2>\n<p>Both problems seem to be related to not finding a recently committed file. Manually doing a factory rebuild seems to mitigate the problem.</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-16T15:40:36.169Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 64,
"reads": 11,
"readers_count": 10,
"score": 332.2,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "Edward J. Schwartz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/ejschwartz/aidapal-space",
"internal": false,
"reflection": false,
"title": "Aidapal Space - a Hugging Face Space by ejschwartz",
"clicks": 3
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22191,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216259,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-17T03:39:05.812Z",
"cooked": "<p>It might be the same rollback bug that occurred in Dev mode before.</p><aside class=\"quote\" data-post=\"4\" data-topic=\"139695\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/martim-ramos-neural/48/37664_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/hugging-face-space-keeps-using-an-old-commit-despite-redeploys/139695/4\">Hugging Face Space Keeps Using an Old Commit Despite Redeploys</a> <a class=\"badge-category__wrapper \" href=\"/c/beginners/5\"><span data-category-id=\"5\" style=\"--category-badge-color: #0088CC; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"Use this category for any basic question you have on any of the Hugging Face library. Don’t moderate yourself, everyone has to begin somewhere and everyone on this forum is here to help!\"><span class=\"badge-category__name\">Beginners</span></span></a>\n </div>\n <blockquote>\n \" SOLVED \". Only happens with DEV mode enabled.\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-17T03:39:05.812Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 1.6,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/hugging-face-space-keeps-using-an-old-commit-despite-redeploys/139695/4",
"internal": true,
"reflection": false,
"title": "Hugging Face Space Keeps Using an Old Commit Despite Redeploys",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216348,
"name": "Edward J. Schwartz",
"username": "ejschwartz",
"avatar_template": "/user_avatar/discuss.huggingface.co/ejschwartz/{size}/16902_2.png",
"created_at": "2025-04-17T13:01:20.623Z",
"cooked": "<p>I was not using DEV mode. <img src=\"https://emoji.discourse-cdn.com/apple/slightly_frowning_face.png?v=14\" title=\":slightly_frowning_face:\" class=\"emoji\" alt=\":slightly_frowning_face:\" loading=\"lazy\" width=\"20\" height=\"20\"> I’ll report if I run into any more problems today.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-17T13:01:20.623Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "Edward J. Schwartz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22191,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216351,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-17T13:07:58.375Z",
"cooked": "<p>Whether it will be fixed or not, it’s an unknown issue…</p>\n<p>It seems that it’s OK to report the hub issue below.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://github.com/huggingface/hub-docs/issues\">\n <header class=\"source\">\n <img src=\"https://github.githubassets.com/favicons/favicon.svg\" class=\"site-icon\" width=\"32\" height=\"32\">\n\n <a href=\"https://github.com/huggingface/hub-docs/issues\" target=\"_blank\" rel=\"noopener\">GitHub</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/344;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/9/59929307d1fe37b678698fa45b4e1349cb118b73_2_690x345.png\" class=\"thumbnail\" data-dominant-color=\"F4F2EB\" width=\"690\" height=\"345\"></div>\n\n<h3><a href=\"https://github.com/huggingface/hub-docs/issues\" target=\"_blank\" rel=\"noopener\">Issues · huggingface/hub-docs</a></h3>\n\n <p>Docs of the Hugging Face Hub. Contribute to huggingface/hub-docs development by creating an account on GitHub.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 4,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-17T13:07:58.375Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 8,
"readers_count": 7,
"score": 16.6,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/hub-docs/issues",
"internal": false,
"reflection": false,
"title": "GitHub · Where software is built",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/4",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216374,
"name": "Edward J. Schwartz",
"username": "ejschwartz",
"avatar_template": "/user_avatar/discuss.huggingface.co/ejschwartz/{size}/16902_2.png",
"created_at": "2025-04-17T15:33:13.286Z",
"cooked": "<blockquote>\n<p>Still an issue.</p>\n<p><div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/4/b/4b12d9254f02469537320f6706d23445307da069.png\" data-download-href=\"/uploads/short-url/aI8bVFe7xcLOE6PfFd3DEhjhTTP.png?dl=1\" title=\"image\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_690x281.png\" alt=\"image\" data-base62-sha1=\"aI8bVFe7xcLOE6PfFd3DEhjhTTP\" width=\"690\" height=\"281\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_690x281.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_1035x421.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/4/b/4b12d9254f02469537320f6706d23445307da069.png 2x\" data-dominant-color=\"151A24\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">image</span><span class=\"informations\">1354×553 117 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>\n<p>Here the space fails to parse a JSON file that is committed to the repository.</p>\n<p>I will report to HF.</p>\n</blockquote>\n<p><strong>Disregard this message</strong> This was my mistake. The file I was loading was jsonl but was labeled as json. I have not seen any problems since yesterday.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-17T15:46:36.942Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 7,
"readers_count": 6,
"score": 36.4,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "Edward J. Schwartz",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 22191,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216383,
"name": "Megan Riley",
"username": "meganariley",
"avatar_template": "/user_avatar/discuss.huggingface.co/meganariley/{size}/20596_2.png",
"created_at": "2025-04-17T16:35:54.198Z",
"cooked": "<p>Hi! I’m glad to hear the issue is now resolved <img src=\"https://emoji.discourse-cdn.com/apple/slight_smile.png?v=14\" title=\":slight_smile:\" class=\"emoji\" alt=\":slight_smile:\" loading=\"lazy\" width=\"20\" height=\"20\"></p>",
"post_number": 7,
"post_type": 1,
"posts_count": 7,
"updated_at": "2025-04-17T16:35:54.198Z",
"reply_count": 0,
"reply_to_post_number": 5,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "Megan Riley",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 31941,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/7",
"reactions": [
{
"id": "hugs",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 22191,
"username": "ejschwartz",
"name": "Edward J. Schwartz",
"avatar_template": "/user_avatar/discuss.huggingface.co/ejschwartz/{size}/16902_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 219321,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-05-01T13:46:17.194Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 8,
"post_type": 3,
"posts_count": 7,
"updated_at": "2025-05-01T13:46:17.194Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 1,
"readers_count": 0,
"score": 10.2,
"yours": false,
"topic_id": 150852,
"topic_slug": "server-side-problems",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/server-side-problems/150852/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’ve encountered two strange errors in a short period of time.</p>
<p>Space: <a href="https://huggingface.co/spaces/ejschwartz/aidapal-space" class="inline-onebox">Aidapal Space - a Hugging Face Space by ejschwartz</a></p>
<h2><a name="p-216187-first-problem-1" class="anchor" href="#p-216187-first-problem-1"></a>First problem</h2>
<p>I created a new space. I committed <code>app.py</code> and pushed, and got an error that was roughly “Unable to find app.py” in the runtime logs.</p>
<h2><a name="p-216187-second-problem-2" class="anchor" href="#p-216187-second-problem-2"></a>Second problem</h2>
<p>I just added and committed requirements.txt and received the following build error.</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490.png" data-download-href="/uploads/short-url/5d8moTTYt6QthjUrokSustxEd8Y.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_690x362.png" alt="image" data-base62-sha1="5d8moTTYt6QthjUrokSustxEd8Y" width="690" height="362" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_690x362.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_1035x543.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/2/4/2486f860e2b0051d32bb844b2ce3e545813a4490_2_1380x724.png 2x" data-dominant-color="121724"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1388×730 99.5 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<h2><a name="p-216187-conclusion-3" class="anchor" href="#p-216187-conclusion-3"></a>Conclusion</h2>
<p>Both problems seem to be related to not finding a recently committed file. Manually doing a factory rebuild seems to mitigate the problem.</p>
|
<blockquote>
<p>Still an issue.</p>
<p><div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/4/b/4b12d9254f02469537320f6706d23445307da069.png" data-download-href="/uploads/short-url/aI8bVFe7xcLOE6PfFd3DEhjhTTP.png?dl=1" title="image" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_690x281.png" alt="image" data-base62-sha1="aI8bVFe7xcLOE6PfFd3DEhjhTTP" width="690" height="281" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_690x281.png, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/4/b/4b12d9254f02469537320f6706d23445307da069_2_1035x421.png 1.5x, https://us1.discourse-cdn.com/hellohellohello/original/3X/4/b/4b12d9254f02469537320f6706d23445307da069.png 2x" data-dominant-color="151A24"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">image</span><span class="informations">1354×553 117 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
<p>Here the space fails to parse a JSON file that is committed to the repository.</p>
<p>I will report to HF.</p>
</blockquote>
<p><strong>Disregard this message.</strong> This was my mistake: the file I was loading was JSONL but was labeled as JSON. I have not seen any problems since yesterday.</p>
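<p>For anyone hitting a similar parse failure, the difference between the two formats is easy to see in a few lines (a minimal sketch; the file paths are placeholders):</p>
<pre data-code-wrap="python"><code class="lang-python">import json

# .json: the whole file is a single JSON document.
with open("data.json") as f:
    obj = json.load(f)

# .jsonl: one independent JSON document per line.
with open("data.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]
</code></pre>
<p>Calling <code>json.load()</code> on a JSONL file typically fails with <code>json.JSONDecodeError: Extra data</code> as soon as the parser reaches the second document.</p>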
|
Can the T5 model classify codes such as codebert-small-v1?
|
https://discuss.huggingface.co/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496
| 152,496
| 5
|
2025-04-27T10:03:32.978000Z
|
[
{
"id": 218451,
"name": "Franck da COSTA",
"username": "kirilinko",
"avatar_template": "/user_avatar/discuss.huggingface.co/kirilinko/{size}/46423_2.png",
"created_at": "2025-04-27T10:03:33.036Z",
"cooked": "<p>Hello.<br>\nI’m doing code classification with codebert-small-v1, but as the maximum sequence is 512 tokens, this may limit me when faced with a certain amount of code (because of the size). On the other hand, I’ve noticed that T5 has a greater margin as regards the maximum sequence. Is it possible to use the T5 model for sort code classification to have the same output as codebert-small-v1? In the sense that I have the probability of appearance of each class of vulnerability in the code?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-27T10:03:33.036Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 23,
"reads": 5,
"readers_count": 4,
"score": 126,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "Franck da COSTA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90907,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218454,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-27T10:27:35.969Z",
"cooked": "<p>I’m not familiar with it, but it seems possible.</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/Salesforce/codet5-base\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/Salesforce/codet5-base\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/3/c34cd96d30d647872876592fdd2eed186209581b_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"5A70A4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/Salesforce/codet5-base\" target=\"_blank\" rel=\"noopener\">Salesforce/codet5-base · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://arxiv.org/abs/2408.07181\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/7/7/7737f9c766957e34da6871902e1e7a9d2aca40f3.png\" class=\"site-icon\" data-dominant-color=\"B36362\" width=\"32\" height=\"32\">\n\n <a href=\"https://arxiv.org/abs/2408.07181\" target=\"_blank\" rel=\"noopener\">arXiv.org</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/402;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/c/d/cd49b65780faf86c14ed9761c9c522acfb73adde_2_500x500.png\" class=\"thumbnail\" data-dominant-color=\"865F5C\" width=\"500\" height=\"500\"></div>\n\n<h3><a href=\"https://arxiv.org/abs/2408.07181\" target=\"_blank\" rel=\"noopener\">VulCatch: Enhancing Binary Vulnerability Detection through CodeT5...</a></h3>\n\n <p>Binary program vulnerability detection is critical for software security, yet existing deep learning approaches often rely on source code analysis, limiting their ability to detect unknown vulnerabilities. To address this, we propose VulCatch, a...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/huggingface/CodeBERTa-small-v1\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/huggingface/CodeBERTa-small-v1\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/b/b/bba1c135549667228495e2d05d356d422753bf3c_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"F8F4E8\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/huggingface/CodeBERTa-small-v1\" target=\"_blank\" rel=\"noopener\">huggingface/CodeBERTa-small-v1 · Hugging Face</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-27T10:27:35.969Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/Salesforce/codet5-base",
"internal": false,
"reflection": false,
"title": "Salesforce/codet5-base · Hugging Face",
"clicks": 3
},
{
"url": "https://arxiv.org/abs/2408.07181",
"internal": false,
"reflection": false,
"title": "[2408.07181] VulCatch: Enhancing Binary Vulnerability Detection through CodeT5 Decompilation and KAN Advanced Feature Extraction",
"clicks": 0
},
{
"url": "https://huggingface.co/huggingface/CodeBERTa-small-v1",
"internal": false,
"reflection": false,
"title": "huggingface/CodeBERTa-small-v1 · Hugging Face",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218616,
"name": "Franck da COSTA",
"username": "kirilinko",
"avatar_template": "/user_avatar/discuss.huggingface.co/kirilinko/{size}/46423_2.png",
"created_at": "2025-04-28T09:12:37.985Z",
"cooked": "<p>But I’m a bit surprised, when I try to classify with “TFAutoModelForSequenceClassification”, I get an error telling me that model T5 is not compatible. However, with codeBert small, no problem. I want to try another model because, I lack performance in predictions. My current model manages to classify the code well according to the CWE around 8 classes, but not when the code is vulnerable (only two classes) Do you have any idea what to do?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-28T09:16:37.704Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 4,
"readers_count": 3,
"score": 20.8,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "Franck da COSTA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90907,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 218690,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-28T12:50:13.942Z",
"cooked": "<p>Hmm…</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/huggingface/transformers/issues/10405\">\n <header class=\"source\">\n\n <a href=\"https://github.com/huggingface/transformers/issues/10405\" target=\"_blank\" rel=\"noopener\">github.com/huggingface/transformers</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/huggingface/transformers/issues/10405\" target=\"_blank\" rel=\"noopener\">Problem running T5 (configuration) with text classification</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2021-02-25\" data-time=\"22:14:47\" data-timezone=\"UTC\">10:14PM - 25 Feb 21 UTC</span>\n </div>\n\n <div class=\"date\">\n closed <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2021-02-26\" data-time=\"17:13:24\" data-timezone=\"UTC\">05:13PM - 26 Feb 21 UTC</span>\n </div>\n\n <div class=\"user\">\n <a href=\"https://github.com/ioana-blue\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/d/b/dbfcedc6979b515bffe8c4b37bbc5ce2c2e5d7d2.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"724E40\">\n ioana-blue\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">## Environment info\n\n\n- `transformers` version: 4.3.2\n- Platform: Linux-4.18<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">.0-193.el8.x86_64-x86_64-with-glibc2.10\n- Python version: 3.8.3\n- PyTorch version (GPU?): 1.5.1+cu101 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Using GPU in script?: yes\n- Using distributed or parallel set-up in script?: single gpu\n\n### Who can help\n\nPerhaps @patrickvonplaten, @patil-suraj could help?\n\n## Information\n\nModel I am using (Bert, XLNet ...): T5\n\nThe problem arises when using:\n* [ ] the official example scripts: (give details below)\n* [x] my own modified scripts: (give details below)\n\nThe tasks I am working on is:\n* [ ] an official GLUE/SQUaD task: (give the name)\n* [x] my own task or dataset: (give details below)\n\n## To reproduce\n\nI'm trying to run the T5 base model. It seems that I use the correct model path (i.e., t5-base) and it finds and downloads the model, but crashes when it tries to instantiate it. The problem seems to be around the configuration class not being found. 
This is what I get:\n\n```\nFile \"../../../models/tr-4.3.2/run_puppets.py\", line 279, in main\n model = AutoModelForSequenceClassification.from_pretrained(\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py\", line 1362, in from_pretrained\n raise ValueError(\nValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForSequenceClassification.\nModel type should be one of ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MBartConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, LayoutLMConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig, GPT2Config, OpenAIGPTConfig, ReformerConfig, CTRLConfig, TransfoXLConfig, MPNetConfig, TapasConfig.\n```\nI dig a bit and I may have a hunch why this happens. The config file is there: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/configuration_t5.py#L32\nbut it's not recorded here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L514\n\nSo the check here fails: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L1389\n\nAnd the ValueError is raised. \n\nI hope this is it. It looks like an easy fix :) Thanks!\n\nPS: I'm running the same scripts/files with other models without problems. This seems to be something specific to T5.</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<blockquote>\n<p>even though T5 can be used very well for text-classification it remains a text-to-text only model. So you can only load the model via<br>\nfrom transformers import AutoModelForConditionalGeneration<br>\nmodel = AutoModelForConditionalGeneration.from_pretrained(“t5-small”)</p>\n</blockquote>",
"post_number": 4,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-28T12:50:13.942Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/huggingface/transformers/issues/10405",
"internal": false,
"reflection": false,
"title": "Problem running T5 (configuration) with text classification · Issue #10405 · huggingface/transformers · GitHub",
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219173,
"name": "Franck da COSTA",
"username": "kirilinko",
"avatar_template": "/user_avatar/discuss.huggingface.co/kirilinko/{size}/46423_2.png",
"created_at": "2025-04-30T11:23:13.244Z",
"cooked": "<p>thank you !</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 6,
"updated_at": "2025-04-30T11:23:13.244Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 15.4,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "Franck da COSTA",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 90907,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/5",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219233,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-30T23:24:02.666Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 6,
"post_type": 3,
"posts_count": 6,
"updated_at": "2025-04-30T23:24:02.666Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 1,
"reads": 1,
"readers_count": 0,
"score": 5.2,
"yours": false,
"topic_id": 152496,
"topic_slug": "can-the-t5-model-classify-codes-such-as-codebert-small-v1",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/can-the-t5-model-classify-codes-such-as-codebert-small-v1/152496/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>Hello.<br>
I’m doing code classification with codebert-small-v1, but since its maximum sequence length is 512 tokens, this may limit me when faced with a certain amount of code (because of its size). On the other hand, I’ve noticed that T5 allows a longer maximum sequence. Is it possible to use the T5 model for this sort of code classification and get the same output as codebert-small-v1, i.e. the probability of each vulnerability class appearing in the code?</p>
|
<p>Hmm…</p><aside class="onebox githubissue" data-onebox-src="https://github.com/huggingface/transformers/issues/10405">
<header class="source">
<a href="https://github.com/huggingface/transformers/issues/10405" target="_blank" rel="noopener">github.com/huggingface/transformers</a>
</header>
<article class="onebox-body">
<div class="github-row">
<div class="github-icon-container" title="Issue" data-github-private-repo="false">
<svg width="60" height="60" class="github-icon" viewBox="0 0 14 16" aria-hidden="true"><path fill-rule="evenodd" d="M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z"></path></svg>
</div>
<div class="github-info-container">
<h4>
<a href="https://github.com/huggingface/transformers/issues/10405" target="_blank" rel="noopener">Problem running T5 (configuration) with text classification</a>
</h4>
<div class="github-info">
<div class="date">
opened <span class="discourse-local-date" data-format="ll" data-date="2021-02-25" data-time="22:14:47" data-timezone="UTC">10:14PM - 25 Feb 21 UTC</span>
</div>
<div class="date">
closed <span class="discourse-local-date" data-format="ll" data-date="2021-02-26" data-time="17:13:24" data-timezone="UTC">05:13PM - 26 Feb 21 UTC</span>
</div>
<div class="user">
<a href="https://github.com/ioana-blue" target="_blank" rel="noopener">
<img alt="" src="https://us1.discourse-cdn.com/hellohellohello/original/3X/d/b/dbfcedc6979b515bffe8c4b37bbc5ce2c2e5d7d2.jpeg" class="onebox-avatar-inline" width="20" height="20" data-dominant-color="724E40">
ioana-blue
</a>
</div>
</div>
<div class="labels">
</div>
</div>
</div>
<div class="github-row">
<p class="github-body-container">## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.18<span class="show-more-container"><a href="" rel="noopener" class="show-more">…</a></span><span class="excerpt hidden">.0-193.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
Perhaps @patrickvonplaten, @patil-suraj could help?
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm trying to run the T5 base model. It seems that I use the correct model path (i.e., t5-base) and it finds and downloads the model, but crashes when it tries to instantiate it. The problem seems to be around the configuration class not being found. This is what I get:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 279, in main
model = AutoModelForSequenceClassification.from_pretrained(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 1362, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MBartConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, LayoutLMConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig, GPT2Config, OpenAIGPTConfig, ReformerConfig, CTRLConfig, TransfoXLConfig, MPNetConfig, TapasConfig.
```
I dig a bit and I may have a hunch why this happens. The config file is there: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/configuration_t5.py#L32
but it's not recorded here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L514
So the check here fails: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L1389
And the ValueError is raised.
I hope this is it. It looks like an easy fix :) Thanks!
PS: I'm running the same scripts/files with other models without problems. This seems to be something specific to T5.</span></p>
</div>
</article>
<div class="onebox-metadata">
</div>
<div style="clear: both"></div>
</aside>
<blockquote>
<p>even though T5 can be used very well for text-classification it remains a text-to-text only model. So you can only load the model via<br>
from transformers import AutoModelForConditionalGeneration<br>
model = AutoModelForConditionalGeneration.from_pretrained(“t5-small”)</p>
</blockquote>
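<p>As a side note on the quoted snippet: <code>AutoModelForConditionalGeneration</code> does not exist in current <code>transformers</code>; the text-to-text auto class is <code>AutoModelForSeq2SeqLM</code>. A minimal sketch of loading T5 that way (the prompt and label framing below are illustrative assumptions, not a built-in classification API):</p>
<pre><code class="lang-python"># Minimal sketch: T5 is text-to-text, so classification is phrased as
# generation of a label string. The prompt format and expected labels
# here are hypothetical examples, not part of any checkpoint's training.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("classify vulnerability: <your code snippet>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. a label string
</code></pre>
<p>Unlike codebert-small-v1, plain generation only returns the most likely label text; to get per-class probabilities you would have to score each candidate label string with the model, or fine-tune an encoder-style model with a classification head.</p>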
|
Docling image captioning best VLM
|
https://discuss.huggingface.co/t/docling-image-captioning-best-vlm/152311
| 152,311
| 13
|
2025-04-25T14:37:54.184000Z
|
[
{
"id": 218203,
"name": "Sean Bayly",
"username": "swtb",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/s/8c91f0/{size}.png",
"created_at": "2025-04-25T14:37:54.254Z",
"cooked": "<p>What is the current SOTA model for captioning images in documents?</p>\n<p>I need good descriptions of diagrams. Most of the ones I have seen have very basic descriptions “the image contains a woman in a blue dress”. I need more like “The figure shows a flowchart representing a process of… that starts with…and ends with…key steps are…”</p>\n<p>Or “The image depicts a scene in which people walk about in a modern cafe, key elements of the cafes design are…”</p>\n<p>In other words I need a good paragraph that offers some insight into the image.</p>\n<p>Any suggestions on models?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-04-25T14:37:54.254Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 202,
"reads": 5,
"readers_count": 4,
"score": 1006,
"yours": false,
"topic_id": 152311,
"topic_slug": "docling-image-captioning-best-vlm",
"display_username": "Sean Bayly",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 37927,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/docling-image-captioning-best-vlm/152311/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218212,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-25T15:33:04.696Z",
"cooked": "<p>I’m not sure which VLM is strong in understanding the context of image content…<br>\nHow about trying out some VLM that seem to perform well to some extent…</p><aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/spaces/opencompass/open_vlm_leaderboard\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/spaces/opencompass/open_vlm_leaderboard\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/c/0c4ae571357ea7787bc3a411a1b1784610da44e1_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"0B86B6\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/spaces/opencompass/open_vlm_leaderboard\" target=\"_blank\" rel=\"noopener\">Open VLM Leaderboard - a Hugging Face Space by opencompass</a></h3>\n\n <p>Explore detailed leaderboard data for various models and datasets with customizable filters for model name, size, and type.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox githubrepo\" data-onebox-src=\"https://github.com/MoonshotAI/Kimi-VL\">\n <header class=\"source\">\n\n <a href=\"https://github.com/MoonshotAI/Kimi-VL\" target=\"_blank\" rel=\"noopener\">github.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\" data-github-private-repo=\"false\">\n <img width=\"690\" height=\"344\" src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/1/51f408aff67b12b84619bec8577cf55c1ee55f99_2_690x344.png\" class=\"thumbnail\" data-dominant-color=\"EDEDEE\">\n\n <h3><a href=\"https://github.com/MoonshotAI/Kimi-VL\" target=\"_blank\" rel=\"noopener\">GitHub - MoonshotAI/Kimi-VL: Kimi-VL: Mixture-of-Experts Vision-Language Model...</a></h3>\n\n <p><span class=\"github-repo-description\">Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities</span></p>\n</div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/d/1/d14e18898ba4b64d8de198b6cbaeb4fa636402c6.png\" class=\"site-icon\" data-dominant-color=\"74B700\" width=\"16\" height=\"16\">\n\n <a href=\"https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/\" target=\"_blank\" rel=\"noopener\" title=\"04:25PM - 26 February 2025\">NVIDIA Technical Blog – 26 Feb 25</a>\n </header>\n\n <article class=\"onebox-body\">\n \n\n<h3><a href=\"https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/\" target=\"_blank\" rel=\"noopener\">Vision Language Model Prompt Engineering Guide for Image and Video...</a></h3>\n\n <p>Vision language models (VLMs) are evolving at a breakneck speed. 
In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs)…</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-04-25T15:33:04.696Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 5,
"readers_count": 4,
"score": 51,
"yours": false,
"topic_id": 152311,
"topic_slug": "docling-image-captioning-best-vlm",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/spaces/opencompass/open_vlm_leaderboard",
"internal": false,
"reflection": false,
"title": "Open VLM Leaderboard - a Hugging Face Space by opencompass",
"clicks": 23
},
{
"url": "https://github.com/MoonshotAI/Kimi-VL",
"internal": false,
"reflection": false,
"title": "GitHub - MoonshotAI/Kimi-VL: Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities",
"clicks": 7
},
{
"url": "https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/",
"internal": false,
"reflection": false,
"title": null,
"clicks": 6
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/docling-image-captioning-best-vlm/152311/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 219032,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-29T19:34:51.185Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-04-29T19:34:51.185Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 3,
"readers_count": 2,
"score": 15.6,
"yours": false,
"topic_id": 152311,
"topic_slug": "docling-image-captioning-best-vlm",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/docling-image-captioning-best-vlm/152311/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>What is the current SOTA model for captioning images in documents?</p>
<p>I need good descriptions of diagrams. Most of the ones I have seen give very basic descriptions like “the image contains a woman in a blue dress”. I need more like “The figure shows a flowchart representing a process of… that starts with… and ends with… key steps are…”</p>
<p>Or “The image depicts a scene in which people walk about in a modern cafe; key elements of the cafe’s design are…”</p>
<p>In other words I need a good paragraph that offers some insight into the image.</p>
<p>Any suggestions on models?</p>
|
<p>I’m not sure which VLM is strong in understanding the context of image content…<br>
How about trying out some VLMs that seem to perform well to some extent…</p>
<ul>
<li><a href="https://huggingface.co/spaces/opencompass/open_vlm_leaderboard" target="_blank" rel="noopener">Open VLM Leaderboard - a Hugging Face Space by opencompass</a>: explore detailed leaderboard data for various models and datasets with customizable filters for model name, size, and type.</li>
<li><a href="https://github.com/MoonshotAI/Kimi-VL" target="_blank" rel="noopener">GitHub - MoonshotAI/Kimi-VL</a>: Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities.</li>
<li><a href="https://developer.nvidia.com/blog/vision-language-model-prompt-engineering-guide-for-image-and-video-understanding/" target="_blank" rel="noopener">Vision Language Model Prompt Engineering Guide for Image and Video Understanding</a> (NVIDIA Technical Blog, 26 Feb 2025): “Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs)…”</li>
</ul>
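<p>If you want a quick baseline to compare candidates against, here is a minimal, hedged sketch using the <code>transformers</code> image-to-text pipeline; the checkpoint and file name are placeholder assumptions, not a recommendation:</p>
<pre><code class="lang-python"># Minimal captioning baseline via the transformers pipeline.
# "Salesforce/blip-image-captioning-large" is one generic captioner;
# swap in whichever VLM from the leaderboard you want to evaluate.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")
result = captioner("diagram_page_3.png", max_new_tokens=120)  # path is a placeholder
print(result[0]["generated_text"])
</code></pre>
<p>For the long, structured paragraphs you describe, an instruction-following VLM prompted with something like “Describe this figure as a flowchart: start, end, key steps” will usually beat a pure captioner, so prompt design matters as much as model choice.</p>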
|
Incomplete character head display when using IPAdapter
|
https://discuss.huggingface.co/t/incomplete-character-head-display-when-using-ipadapter/152581
| 152,581
| 5
|
2025-04-28T02:10:04.746000Z
|
[
{
"id": 218567,
"name": "fu",
"username": "juwei101",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/a4c791/{size}.png",
"created_at": "2025-04-28T02:10:04.809Z",
"cooked": "<p>I encountered an issue where the character’s head is not fully displayed when generating images with IPAdapter. How can I resolve this problem? Below is a screenshot of my workflow.<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243.jpeg\" data-download-href=\"/uploads/short-url/d6guwyswzCw0rntoT8ONB0GHHLJ.jpeg?dl=1\" title=\"屏幕截图 2025-04-28 095929\" rel=\"noopener nofollow ugc\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_690x331.jpeg\" alt=\"屏幕截图 2025-04-28 095929\" data-base62-sha1=\"d6guwyswzCw0rntoT8ONB0GHHLJ\" width=\"690\" height=\"331\" srcset=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_690x331.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_1035x496.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_1380x662.jpeg 2x\" data-dominant-color=\"39332C\"><div class=\"meta\"><svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use href=\"#far-image\"></use></svg><span class=\"filename\">屏幕截图 2025-04-28 095929</span><span class=\"informations\">1562×751 210 KB</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use href=\"#discourse-expand\"></use></svg></div></a></div></p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-04-28T02:10:04.809Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 7,
"reads": 4,
"readers_count": 3,
"score": 50.6,
"yours": false,
"topic_id": 152581,
"topic_slug": "incomplete-character-head-display-when-using-ipadapter",
"display_username": "fu",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91978,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/incomplete-character-head-display-when-using-ipadapter/152581/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218610,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-28T08:47:44.128Z",
"cooked": "<p>Hmm, I’m not familiar with ComfyUI…</p><aside class=\"onebox githubissue\" data-onebox-src=\"https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/406\">\n <header class=\"source\">\n\n <a href=\"https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/406\" target=\"_blank\" rel=\"noopener\">github.com/cubiq/ComfyUI_IPAdapter_plus</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"github-row\">\n <div class=\"github-icon-container\" title=\"Issue\" data-github-private-repo=\"false\">\n\t <svg width=\"60\" height=\"60\" class=\"github-icon\" viewBox=\"0 0 14 16\" aria-hidden=\"true\"><path fill-rule=\"evenodd\" d=\"M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z\"></path></svg>\n </div>\n\n <div class=\"github-info-container\">\n <h4>\n <a href=\"https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/406\" target=\"_blank\" rel=\"noopener\">IPAdapterTiled crops images with 4:5 AR</a>\n </h4>\n\n <div class=\"github-info\">\n <div class=\"date\">\n opened <span class=\"discourse-local-date\" data-format=\"ll\" data-date=\"2024-04-06\" data-time=\"18:25:50\" data-timezone=\"UTC\">06:25PM - 06 Apr 24 UTC</span>\n </div>\n\n\n <div class=\"user\">\n <a href=\"https://github.com/Davikar\" target=\"_blank\" rel=\"noopener\">\n <img alt=\"\" src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/4/f/4f3d45e7b10a56cdc4d3f8eaa722c50f8ac8ba83.jpeg\" class=\"onebox-avatar-inline\" width=\"20\" height=\"20\" data-dominant-color=\"1D1F19\">\n Davikar\n </a>\n </div>\n </div>\n\n <div class=\"labels\">\n <span style=\"display:inline-block;margin-top:2px;background-color: #B8B8B8;padding: 2px;border-radius: 4px;color: #fff;margin-left: 3px;\">\n investigate\n </span>\n </div>\n </div>\n</div>\n\n <div class=\"github-row\">\n <p class=\"github-body-container\">IPAdapterTiled seems to crop images that have a slightly wider portrait aspect r<span class=\"show-more-container\"><a href=\"\" rel=\"noopener\" class=\"show-more\">…</a></span><span class=\"excerpt hidden\">atio, like 4:5 and split it into 4 tiles rather than 2.\n\nHere's a couple of examples:\n<img width=\"637\" alt=\"image\" src=\"https://github.com/cubiq/ComfyUI_IPAdapter_plus/assets/8229634/6f59747f-e05a-4b43-bb89-0e96669592ce\">\n\n<img width=\"1219\" alt=\"image\" src=\"https://github.com/cubiq/ComfyUI_IPAdapter_plus/assets/8229634/ef3fd3cb-6096-4d33-a949-03f35b0d3410\">\n\nIt's fairly easy to replicate, make an image that is 608x768 or any 4:5 aspect ratio and send it to the tiled ip adapter.</span></p>\n </div>\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-04-28T08:47:44.128Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 4,
"readers_count": 3,
"score": 0.6,
"yours": false,
"topic_id": 152581,
"topic_slug": "incomplete-character-head-display-when-using-ipadapter",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/406",
"internal": false,
"reflection": false,
"title": "IPAdapterTiled crops images with 4:5 AR · Issue #406 · cubiq/ComfyUI_IPAdapter_plus · GitHub",
"clicks": 2
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/incomplete-character-head-display-when-using-ipadapter/152581/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218744,
"name": "retrooisa",
"username": "jamoce",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/96bed5/{size}.png",
"created_at": "2025-04-28T17:31:21.857Z",
"cooked": "<p>You’re definitely not alone – I’ve run into the same issue when using IPAdapter. It’s usually something to do with the scaling settings or the way the input image is being processed. Bit of tweaking usually sorts it! By the way, if you’re after solid help with this sort of thing, having real expertise in modern tech makes a huge difference. The Frontend Company, for example, specialises in cutting-edge frameworks like React, Angular, and Vue.js. You might find their hire frontend developer guide quite useful too.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-05-01T15:20:25.350Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 4,
"readers_count": 3,
"score": 30.6,
"yours": false,
"topic_id": 152581,
"topic_slug": "incomplete-character-head-display-when-using-ipadapter",
"display_username": "retrooisa",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 2,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 92232,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/incomplete-character-head-display-when-using-ipadapter/152581/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
},
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218856,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-29T05:32:14.562Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-04-29T05:32:14.562Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.2,
"yours": false,
"topic_id": 152581,
"topic_slug": "incomplete-character-head-display-when-using-ipadapter",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/incomplete-character-head-display-when-using-ipadapter/152581/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I encountered an issue where the character’s head is not fully displayed when generating images with IPAdapter. How can I resolve this problem? Below is a screenshot of my workflow.<br>
<div class="lightbox-wrapper"><a class="lightbox" href="https://us1.discourse-cdn.com/hellohellohello/original/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243.jpeg" data-download-href="/uploads/short-url/d6guwyswzCw0rntoT8ONB0GHHLJ.jpeg?dl=1" title="屏幕截图 2025-04-28 095929" rel="noopener nofollow ugc"><img src="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_690x331.jpeg" alt="屏幕截图 2025-04-28 095929" data-base62-sha1="d6guwyswzCw0rntoT8ONB0GHHLJ" width="690" height="331" srcset="https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_690x331.jpeg, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_1035x496.jpeg 1.5x, https://us1.discourse-cdn.com/hellohellohello/optimized/3X/5/b/5bd196cdd65b6b26c8aad27ae3fa9cddb77b0243_2_1380x662.jpeg 2x" data-dominant-color="39332C"><div class="meta"><svg class="fa d-icon d-icon-far-image svg-icon" aria-hidden="true"><use href="#far-image"></use></svg><span class="filename">屏幕截图 2025-04-28 095929</span><span class="informations">1562×751 210 KB</span><svg class="fa d-icon d-icon-discourse-expand svg-icon" aria-hidden="true"><use href="#discourse-expand"></use></svg></div></a></div></p>
|
<p>Hmm, I’m not familiar with ComfyUI…</p>
<p>Related issue: <a href="https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/406" target="_blank" rel="noopener">IPAdapterTiled crops images with 4:5 AR (cubiq/ComfyUI_IPAdapter_plus#406)</a>, opened by Davikar on 6 Apr 2024, labeled “investigate”:</p>
<blockquote>
<p>IPAdapterTiled seems to crop images that have a slightly wider portrait aspect ratio, like 4:5 and split it into 4 tiles rather than 2. It's fairly easy to replicate, make an image that is 608x768 or any 4:5 aspect ratio and send it to the tiled ip adapter.</p>
</blockquote>
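<p>If that issue is what you are hitting, one hedged workaround (an assumption, since your exact nodes are unknown) is to pad the reference image to a square before it reaches the IPAdapter node, so the tiling has nothing to crop away:</p>
<pre><code class="lang-python"># Sketch: pad a portrait reference image onto a square canvas with
# Pillow so tiled IP-Adapter preprocessing cannot crop the head off.
from PIL import Image

def pad_to_square(path: str, fill=(255, 255, 255)) -> Image.Image:
    img = Image.open(path).convert("RGB")
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    # Center the original image; the fill color pads the short axis.
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas

pad_to_square("reference.png").save("reference_square.png")  # file names are placeholders
</code></pre>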
|
Colab cannot find HuggingFace dataset
|
https://discuss.huggingface.co/t/colab-cannot-find-huggingface-dataset/63448
| 63,448
| 10
|
2023-11-24T21:18:42.821000Z
|
[
{
"id": 100772,
"name": "Seyyed Mohammad Moosavi",
"username": "lnxdx",
"avatar_template": "/user_avatar/discuss.huggingface.co/lnxdx/{size}/20601_2.png",
"created_at": "2023-11-24T21:18:42.886Z",
"cooked": "<p>When I try to run the following code to load a dataset from Hugging Face hub to google Colab, I get an error!</p>\n<pre><code class=\"lang-auto\">! pip install transformers datasets\nfrom datasets import load_dataset\ncv_13 = load_dataset(\"mozilla-foundation/common_voice_13_0\", \"en\", split=\"train\")\n</code></pre>\n<pre><code class=\"lang-auto\"><ipython-input-9-4d772f75be89> in <cell line: 3>()\n 1 from datasets import load_dataset\n 2 \n----> 3 cv_13 = load_dataset(\"mozilla-foundation/common_voice_13_0\", \"en\", split=\"train\")\n\n2 frames\n/usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\n 1505 raise e1 from None\n 1506 if isinstance(e1, FileNotFoundError):\n-> 1507 raise FileNotFoundError(\n 1508 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\n 1509 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\n\nFileNotFoundError: Couldn't find a dataset script at /content/mozilla-foundation/common_voice_13_0/common_voice_13_0.py or any data file in the same directory. Couldn't find 'mozilla-foundation/common_voice_13_0' on the Hugging Face Hub either: FileNotFoundError: Dataset 'mozilla-foundation/common_voice_13_0' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\n</code></pre>\n<p>The dataset exists in Huggingface hub and loads successfully in my local Jupiter Lab. What should I do?</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 8,
"updated_at": "2023-11-24T21:18:42.886Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4822,
"reads": 145,
"readers_count": 144,
"score": 24003.8,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "Seyyed Mohammad Moosavi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/error-in-downloading-private-dataset/125836/4",
"internal": true,
"reflection": true,
"title": "Error in downloading private dataset",
"clicks": 1
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31952,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/1",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 101062,
"name": "Julien Chaumond",
"username": "julien-c",
"avatar_template": "/user_avatar/discuss.huggingface.co/julien-c/{size}/41937_2.png",
"created_at": "2023-11-27T09:11:00.608Z",
"cooked": "<p>Which version of datasets are you using?</p>\n<p>cc <a class=\"mention\" href=\"/u/lhoestq\">@lhoestq</a> just in case</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 8,
"updated_at": "2023-11-27T09:11:00.608Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 60,
"reads": 113,
"readers_count": 112,
"score": 342.4,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "Julien Chaumond",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": true,
"staff": true,
"user_id": 4,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 101084,
"name": "Quentin Lhoest",
"username": "lhoestq",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png",
"created_at": "2023-11-27T10:00:37.033Z",
"cooked": "<p>The Common Voice dataset is a gated dataset, so you need to log in to access it.</p>\n<p>Can you try to log in using <code>huggingface-cli login</code> or pass<br>\na <a href=\"https://huggingface.co/settings/tokens\">HF token</a> <code>load_dataset(..., token=...)</code> ?</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 8,
"updated_at": "2023-11-27T10:00:37.033Z",
"reply_count": 3,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 49,
"reads": 106,
"readers_count": 105,
"score": 296,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "Quentin Lhoest",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/settings/tokens",
"internal": false,
"reflection": false,
"title": "Hugging Face – The AI community building the future.",
"clicks": 128
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": true,
"admin": false,
"staff": true,
"user_id": 76,
"hidden": false,
"trust_level": 2,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/3",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 101097,
"name": "Seyyed Mohammad Moosavi",
"username": "lnxdx",
"avatar_template": "/user_avatar/discuss.huggingface.co/lnxdx/{size}/20601_2.png",
"created_at": "2023-11-27T10:43:06.799Z",
"cooked": "<p>I logged in using <code>huggingface-cli login</code> and the dataset is currently being downloaded.<br>\ndatasets version is <code>datasets-2.15.0-py3-none-any.whl</code>.</p>",
"post_number": 5,
"post_type": 1,
"posts_count": 8,
"updated_at": "2023-11-27T10:43:06.799Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 102,
"readers_count": 101,
"score": 50.2,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "Seyyed Mohammad Moosavi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31952,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 4,
"username": "julien-c",
"name": "Julien Chaumond",
"avatar_template": "/user_avatar/discuss.huggingface.co/julien-c/{size}/41937_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 101098,
"name": "Seyyed Mohammad Moosavi",
"username": "lnxdx",
"avatar_template": "/user_avatar/discuss.huggingface.co/lnxdx/{size}/20601_2.png",
"created_at": "2023-11-27T10:44:07.463Z",
"cooked": "<p>I logged in using huggingface-cli login and the dataset is currently being downloaded. Thank you!</p>",
"post_number": 6,
"post_type": 1,
"posts_count": 8,
"updated_at": "2023-11-27T10:44:07.463Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 12,
"reads": 96,
"readers_count": 95,
"score": 79,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "Seyyed Mohammad Moosavi",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 31952,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/6",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 135815,
"name": "wangguan",
"username": "wangguan1995",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/4bbf92/{size}.png",
"created_at": "2024-06-06T06:55:27.624Z",
"cooked": "<p><span class=\"hashtag-raw\">#Dataset</span> xxx doesn’t exist on the Hub or cannot be accessed<br>\nMeet similar problem can load public dataset, not for private dataset</p>",
"post_number": 7,
"post_type": 1,
"posts_count": 8,
"updated_at": "2024-06-06T06:55:27.624Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 62,
"readers_count": 61,
"score": 27.2,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "wangguan",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52954,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/7",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 135817,
"name": "wangguan",
"username": "wangguan1995",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/w/4bbf92/{size}.png",
"created_at": "2024-06-06T06:57:47.172Z",
"cooked": "<p>I tried the same things. It does not work. Mine is a private dataset.</p>",
"post_number": 8,
"post_type": 1,
"posts_count": 8,
"updated_at": "2024-06-06T06:57:47.172Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 50,
"readers_count": 49,
"score": 30,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "wangguan",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52954,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/8",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 218634,
"name": "yoldas",
"username": "elifyoldas",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/e/bbce88/{size}.png",
"created_at": "2025-04-28T10:36:14.918Z",
"cooked": "<p>it works, thank you</p>",
"post_number": 9,
"post_type": 1,
"posts_count": 8,
"updated_at": "2025-04-28T10:36:14.918Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 11,
"readers_count": 10,
"score": 27.2,
"yours": false,
"topic_id": 63448,
"topic_slug": "colab-cannot-find-huggingface-dataset",
"display_username": "yoldas",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 92190,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/colab-cannot-find-huggingface-dataset/63448/9",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 76,
"username": "lhoestq",
"name": "Quentin Lhoest",
"avatar_template": "/user_avatar/discuss.huggingface.co/lhoestq/{size}/52888_2.png"
},
"action_code": null,
"via_email": null
}
] |
<p>When I try to run the following code to load a dataset from the Hugging Face Hub in Google Colab, I get an error!</p>
<pre><code class="lang-auto">! pip install transformers datasets
from datasets import load_dataset
cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train")
</code></pre>
<pre><code class="lang-auto"><ipython-input-9-4d772f75be89> in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train")
2 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1505 raise e1 from None
1506 if isinstance(e1, FileNotFoundError):
-> 1507 raise FileNotFoundError(
1508 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1509 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/mozilla-foundation/common_voice_13_0/common_voice_13_0.py or any data file in the same directory. Couldn't find 'mozilla-foundation/common_voice_13_0' on the Hugging Face Hub either: FileNotFoundError: Dataset 'mozilla-foundation/common_voice_13_0' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
</code></pre>
<p>The dataset exists on the Hugging Face Hub and loads successfully in my local JupyterLab. What should I do?</p>
|
<p>The Common Voice dataset is a gated dataset, so you need to log in to access it.</p>
<p>Can you try to log in using <code>huggingface-cli login</code> or pass<br>
an <a href="https://huggingface.co/settings/tokens">HF token</a> via <code>load_dataset(..., token=...)</code>?</p>
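<p>For example, a minimal sketch — the <code>hf_xxx</code> strings below are placeholders for your own read-access token:</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import login
from datasets import load_dataset

# Option 1: authenticate once per session (equivalent to `huggingface-cli login`)
login(token="hf_xxx")  # placeholder token

# Option 2: pass the token directly to load_dataset
cv_13 = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "en",
    split="train",
    token="hf_xxx",  # placeholder token
)
</code></pre>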
|
How to write custom TrainerCallback functions with custom arguments?
|
https://discuss.huggingface.co/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063
| 151,063
| 5
|
2025-04-18T03:09:20.628000Z
|
[
{
"id": 216453,
"name": "TTTTTC",
"username": "TTTTTC",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/t/5fc32e/{size}.png",
"created_at": "2025-04-18T03:09:20.685Z",
"cooked": "<p>I have a question about how to specify arguments of custom TrainerCallback function. I read from some examples (e.g., <a href=\"https://huggingface.co/docs/setfit/main/how_to/callbacks\">doc</a>) that users can specify custom arguments like <code>model</code> in the <code>EmbeddingPlotCallback.on_evaluate(...) </code> function. Here, <code>model</code> is not a predefined argument of the super class function <code>TrainerCallback.on_evaluate(...)</code> (<a href=\"https://huggingface.co/docs/transformers/main_classes/callback#transformers.TrainerCallback.on_evaluate\">doc</a>).</p>\n<p>I am wondering how the model is passed to this <code>on_evaluate(...)</code>. Should I modify the Trainer class to make it call <code>on_evaluate(...)</code> with additional inputs? Or does the Trainer class handle additional arguments automatically? I have not yet found any examples about these. Any advice or points to relevant code sections/examples will be very helpful.</p>\n<p>To supplement this inquiry with my motivation, I am experimenting with DPOTrainer while enabling synchronization of reference model, and I would like to log info about both the policy model and reference model. So, probably the inputs to the logging function would require two custom inputs for those two models. I think I can define two more arguments to my custom logging function, but I am not sure how I could pass the two models to my function.</p>\n<p>Any comments will be greatly appreciated!</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-18T03:09:20.685Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 108,
"reads": 9,
"readers_count": 8,
"score": 536.8,
"yours": false,
"topic_id": 151063,
"topic_slug": "how-to-write-custom-trainercallback-functions-with-custom-arguments",
"display_username": "TTTTTC",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/setfit/main/how_to/callbacks",
"internal": false,
"reflection": false,
"title": "Callbacks",
"clicks": 1
},
{
"url": "https://huggingface.co/docs/transformers/main_classes/callback#transformers.TrainerCallback.on_evaluate",
"internal": false,
"reflection": false,
"title": "Callbacks",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91116,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 216473,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-18T05:24:40.346Z",
"cooked": "<p>It seems to be automatic, or rather semi-manual. It seems that the trainer does not need to be modified.</p>\n<hr>\n<p>by Hugging Chat: <a href=\"https://huggingface.co/chat/\" class=\"inline-onebox\">HuggingChat</a></p>\n<p>To specify additional arguments for a custom <code>TrainerCallback</code> function in the Hugging Face Transformers library, you need to design your callback class to accept and store these arguments during initialization. These arguments can then be accessed within the callback’s methods (e.g., <code>on_evaluate</code>). Below is a detailed explanation of how to achieve this:</p>\n<hr>\n<h3><a name=\"p-216473-step-by-step-solution-1\" class=\"anchor\" href=\"#p-216473-step-by-step-solution-1\"></a><strong>Step-by-Step Solution</strong></h3>\n<h4><a name=\"p-216473-h-1-define-your-custom-callback-class-2\" class=\"anchor\" href=\"#p-216473-h-1-define-your-custom-callback-class-2\"></a>1. <strong>Define Your Custom Callback Class</strong></h4>\n<ul>\n<li>Create a custom callback class by subclassing <code>TrainerCallback</code>.</li>\n<li>In the <code>__init__</code> method, accept any additional arguments you need (e.g., <code>policy_model</code> and <code>ref_model</code>).</li>\n<li>Store these arguments as instance variables.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from transformers import TrainerCallback\n\nclass MyCustomCallback(TrainerCallback):\n def __init__(self, policy_model, ref_model):\n super().__init__()\n self.policy_model = policy_model\n self.ref_model = ref_model\n\n def on_evaluate(self, args, state, control, **kwargs):\n # Access your additional arguments here\n print(f\"Evaluating with Policy Model: {self.policy_model}\")\n print(f\"Evaluating with Reference Model: {self.ref_model}\")\n return super().on_evaluate(args, state, control, **kwargs)\n</code></pre>\n<h4><a name=\"p-216473-h-2-initialize-and-add-the-callback-to-the-trainer-3\" class=\"anchor\" href=\"#p-216473-h-2-initialize-and-add-the-callback-to-the-trainer-3\"></a>2. <strong>Initialize and Add the Callback to the Trainer</strong></h4>\n<ul>\n<li>Create an instance of your custom callback, passing the required arguments.</li>\n<li>Add the callback to the <code>Trainer</code> instance using <code>add_callback()</code>.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">from transformers import Trainer\n\n# Initialize your models and trainer\ntrainer = Trainer(...) # Your existing trainer setup\n\n# Create an instance of your custom callback\ncallback = MyCustomCallback(policy_model=policy_model, ref_model=ref_model)\n\n# Add the callback to the trainer\ntrainer.add_callback(callback)\n\n# Start training\ntrainer.train()\n</code></pre>\n<h4><a name=\"p-216473-h-3-access-additional-arguments-in-callback-methods-4\" class=\"anchor\" href=\"#p-216473-h-3-access-additional-arguments-in-callback-methods-4\"></a>3. 
<strong>Access Additional Arguments in Callback Methods</strong></h4>\n<ul>\n<li>The additional arguments are now accessible as instance variables in the callback’s methods.</li>\n</ul>\n<pre data-code-wrap=\"python\"><code class=\"lang-python\">def on_evaluate(self, args, state, control, **kwargs):\n # Use self.policy_model and self.ref_model here\n print(f\"Policy Model: {self.policy_model}\")\n print(f\"Reference Model: {self.ref_model}\")\n return super().on_evaluate(args, state, control, **kwargs)\n</code></pre>\n<hr>\n<h3><a name=\"p-216473-explanation-5\" class=\"anchor\" href=\"#p-216473-explanation-5\"></a><strong>Explanation</strong></h3>\n<ul>\n<li>The <code>Trainer</code> class in Hugging Face Transformers does not automatically pass additional arguments to callbacks. Instead, you must design your callback to accept and store these arguments during initialization.</li>\n<li>By defining the <code>__init__</code> method in your custom callback, you can pass any arguments (e.g., <code>policy_model</code> and <code>ref_model</code>) and store them for use within the callback’s methods.</li>\n<li>The <code>Trainer</code> class will call the callback methods (e.g., <code>on_evaluate</code>) during training, and your custom callback can access the stored arguments to perform the desired functionality.</li>\n</ul>\n<hr>\n<h3><a name=\"p-216473-why-this-works-6\" class=\"anchor\" href=\"#p-216473-why-this-works-6\"></a><strong>Why This Works</strong></h3>\n<ul>\n<li>The <code>Trainer</code> class supports custom callbacks by allowing users to add them via <code>add_callback()</code>. The <code>Trainer</code> does not restrict the number or type of arguments passed to the callback during initialization.</li>\n<li>By storing the additional arguments as instance variables, you ensure they are available throughout the callback’s lifecycle and can be accessed in any method (e.g., <code>on_evaluate</code>, <code>on_train_begin</code>, etc.).</li>\n</ul>\n<hr>\n<h3><a name=\"p-216473-faq-7\" class=\"anchor\" href=\"#p-216473-faq-7\"></a><strong>FAQ</strong></h3>\n<ol>\n<li>\n<p><strong>Do I need to modify the <code>Trainer</code> class?</strong></p>\n<ul>\n<li>No. The <code>Trainer</code> class already supports custom callbacks through the <code>add_callback()</code> method. You do not need to modify it to pass additional arguments to your callback.</li>\n</ul>\n</li>\n<li>\n<p><strong>Can I pass any type of argument to the callback?</strong></p>\n<ul>\n<li>Yes. You can pass any arguments (e.g., model instances, configuration parameters, or data structures) that your custom callback requires.</li>\n</ul>\n</li>\n<li>\n<p><strong>Where should I store the additional arguments?</strong></p>\n<ul>\n<li>Store them as instance variables in the <code>__init__</code> method of your custom callback. This ensures they are accessible throughout the callback’s lifecycle.</li>\n</ul>\n</li>\n</ol>\n<hr>\n<h3><a name=\"p-216473-relevant-examples-8\" class=\"anchor\" href=\"#p-216473-relevant-examples-8\"></a><strong>Relevant Examples</strong></h3>\n<ul>\n<li>In the <code>WandbCallback</code> example provided in the documentation [3], the callback accepts the <code>tokenizer</code> and <code>val_dataset</code> as arguments and stores them. 
Your custom callback can follow the same pattern.</li>\n<li>For your specific use case, storing <code>policy_model</code> and <code>ref_model</code> in the callback’s <code>__init__</code> method ensures they are available during evaluation.</li>\n</ul>\n<hr>\n<h3><a name=\"p-216473-conclusion-9\" class=\"anchor\" href=\"#p-216473-conclusion-9\"></a><strong>Conclusion</strong></h3>\n<p>To pass additional arguments to a custom <code>TrainerCallback</code> function, you do not need to modify the <code>Trainer</code> class. Instead, design your custom callback to accept and store these arguments during initialization. The <code>Trainer</code> will call the callback methods during training, and your custom callback can access the stored arguments as needed.</p>\n<p>For more details, you can refer to the <a href=\"https://huggingface.co/docs/transformers/main/en/main_classes/callback#transformers.TrainerCallback\">Hugging Face documentation on callbacks</a> [1][2][3].</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-18T05:24:40.346Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 5,
"reads": 8,
"readers_count": 7,
"score": 46.6,
"yours": false,
"topic_id": 151063,
"topic_slug": "how-to-write-custom-trainercallback-functions-with-custom-arguments",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/chat/",
"internal": false,
"reflection": false,
"title": "HuggingChat",
"clicks": 1
},
{
"url": "https://huggingface.co/docs/transformers/main/en/main_classes/callback#transformers.TrainerCallback",
"internal": false,
"reflection": false,
"title": "Callbacks",
"clicks": 1
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218483,
"name": "TTTTTC",
"username": "TTTTTC",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/t/5fc32e/{size}.png",
"created_at": "2025-04-27T13:25:38.936Z",
"cooked": "<p>Thanks so much for your reply. The approach you described works in my case. As a reference, let me describe more about my use case and add my current code below.</p>\n<p>I am using a DPOTrainer with sync_ref_model enabled, so there is a policy model and a reference model. Meanwhile, I also add qlora adapters to the models and only optimize the adapaters. Here, I want to log the weights of the adapters during training. The weights of the base models are excluded because they should not be changed during the process.</p>\n<p>Below is my custom TensorBoardCallback class for this purpose:</p>\n<pre><code class=\"lang-auto\">from transformers.integrations import TensorBoardCallback\n\nclass PolicyRefModelLoggingCallback(TensorBoardCallback):\n def __init__(self, model, policy_adapter_name=None, ref_adapter_name=None, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.model = model\n self.policy_adapter_name = policy_adapter_name\n self.ref_adapter_name = ref_adapter_name\n\n def on_log(self, args, state, control, logs=None, **kwargs):\n if not state.is_world_process_zero:\n return\n\n if self.tb_writer is None:\n self._init_summary_writer(args)\n\n if self.tb_writer is not None:\n # logs = rewrite_logs(logs)\n\n if self.policy_adapter_name is not None:\n logs = get_trainable_model_weights(\n self.model, \n self.policy_adapter_name,\n key_prefix=f\"{self.policy_adapter_name}/\",\n )\n for k, v in logs.items():\n self.tb_writer.add_histogram(k, v, state.global_step)\n if self.ref_adapter_name is not None:\n logs = get_trainable_model_weights(\n self.model, \n self.ref_adapter_name,\n key_prefix=f\"{self.ref_adapter_name}/\",\n )\n for k, v in logs.items():\n self.tb_writer.add_histogram(k, v, state.global_step)\n\n self.tb_writer.flush()\n\ndef get_trainable_model_weights(model, adapter_name, key_prefix=\"\"):\n logs = {}\n for name, param in model.state_dict().items() :\n if (adapter_name in name) and (\"lora_A\" in name or \"lora_B\" in name):\n logs[key_prefix+name] = param.data.detach().cpu()\n\n return logs\n\n</code></pre>\n<p>I get the layers of a specific adapter based on its name, which can be defined by, for example, <code>PeftModel.from_pretrained(..., adatper_name=\"...\")</code> as suggested in the DPOTrainer doc <a href=\"https://huggingface.co/docs/trl/v0.8.1/en/dpo_trainer#using-option-3---load-the-adapter-twice\">section</a>.</p>\n<p>This is my first time writing my TensorBoardCallback, so it may not be well structured or optimized. Any comment about how to improve it is very welcomed.</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-27T13:25:38.936Z",
"reply_count": 0,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 2,
"reads": 4,
"readers_count": 3,
"score": 25.8,
"yours": false,
"topic_id": 151063,
"topic_slug": "how-to-write-custom-trainercallback-functions-with-custom-arguments",
"display_username": "TTTTTC",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/docs/trl/v0.8.1/en/dpo_trainer#using-option-3---load-the-adapter-twice",
"internal": false,
"reflection": false,
"title": "DPO Trainer",
"clicks": 0
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91116,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 218487,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-27T13:58:57.506Z",
"cooked": "<p>Great!<br>\nAs far as I can tell from reading the code, there don’t seem to be any particular problems, but there is one thing. If <code>get_trainable_model_weights</code> is called multiple times, there may be some overhead. The rest should be within the margin of error.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-27T13:58:57.506Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 3,
"readers_count": 2,
"score": 0.6,
"yours": false,
"topic_id": 151063,
"topic_slug": "how-to-write-custom-trainercallback-functions-with-custom-arguments",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218564,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-28T01:59:26.127Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-04-28T01:59:26.127Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 151063,
"topic_slug": "how-to-write-custom-trainercallback-functions-with-custom-arguments",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/how-to-write-custom-trainercallback-functions-with-custom-arguments/151063/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I have a question about how to specify arguments of a custom TrainerCallback function. I read in some examples (e.g., <a href="https://huggingface.co/docs/setfit/main/how_to/callbacks">doc</a>) that users can specify custom arguments like <code>model</code> in the <code>EmbeddingPlotCallback.on_evaluate(...)</code> function. Here, <code>model</code> is not a predefined argument of the superclass function <code>TrainerCallback.on_evaluate(...)</code> (<a href="https://huggingface.co/docs/transformers/main_classes/callback#transformers.TrainerCallback.on_evaluate">doc</a>).</p>
<p>I am wondering how the model is passed to this <code>on_evaluate(...)</code>. Should I modify the Trainer class to make it call <code>on_evaluate(...)</code> with additional inputs? Or does the Trainer class handle additional arguments automatically? I have not yet found any examples of this. Any advice or pointers to relevant code sections/examples would be very helpful.</p>
<p>To give some context for this inquiry: I am experimenting with DPOTrainer with reference-model synchronization enabled, and I would like to log info about both the policy model and the reference model. So the logging function would probably need two custom inputs for those two models. I think I can define two more arguments for my custom logging function, but I am not sure how I could pass the two models to it.</p>
<p>Any comments will be greatly appreciated!</p>
|
<p>It seems to be automatic, or rather semi-manual; the trainer itself does not seem to need any modification.</p>
<hr>
<p>by Hugging Chat: <a href="https://huggingface.co/chat/" class="inline-onebox">HuggingChat</a></p>
<p>To specify additional arguments for a custom <code>TrainerCallback</code> function in the Hugging Face Transformers library, you need to design your callback class to accept and store these arguments during initialization. These arguments can then be accessed within the callback’s methods (e.g., <code>on_evaluate</code>). Below is a detailed explanation of how to achieve this:</p>
<hr>
<h3><a name="p-216473-step-by-step-solution-1" class="anchor" href="#p-216473-step-by-step-solution-1"></a><strong>Step-by-Step Solution</strong></h3>
<h4><a name="p-216473-h-1-define-your-custom-callback-class-2" class="anchor" href="#p-216473-h-1-define-your-custom-callback-class-2"></a>1. <strong>Define Your Custom Callback Class</strong></h4>
<ul>
<li>Create a custom callback class by subclassing <code>TrainerCallback</code>.</li>
<li>In the <code>__init__</code> method, accept any additional arguments you need (e.g., <code>policy_model</code> and <code>ref_model</code>).</li>
<li>Store these arguments as instance variables.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">from transformers import TrainerCallback
class MyCustomCallback(TrainerCallback):
def __init__(self, policy_model, ref_model):
super().__init__()
self.policy_model = policy_model
self.ref_model = ref_model
def on_evaluate(self, args, state, control, **kwargs):
# Access your additional arguments here
print(f"Evaluating with Policy Model: {self.policy_model}")
print(f"Evaluating with Reference Model: {self.ref_model}")
return super().on_evaluate(args, state, control, **kwargs)
</code></pre>
<h4><a name="p-216473-h-2-initialize-and-add-the-callback-to-the-trainer-3" class="anchor" href="#p-216473-h-2-initialize-and-add-the-callback-to-the-trainer-3"></a>2. <strong>Initialize and Add the Callback to the Trainer</strong></h4>
<ul>
<li>Create an instance of your custom callback, passing the required arguments.</li>
<li>Add the callback to the <code>Trainer</code> instance using <code>add_callback()</code>.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">from transformers import Trainer
# Initialize your models and trainer
trainer = Trainer(...) # Your existing trainer setup
# Create an instance of your custom callback
callback = MyCustomCallback(policy_model=policy_model, ref_model=ref_model)
# Add the callback to the trainer
trainer.add_callback(callback)
# Start training
trainer.train()
</code></pre>
<h4><a name="p-216473-h-3-access-additional-arguments-in-callback-methods-4" class="anchor" href="#p-216473-h-3-access-additional-arguments-in-callback-methods-4"></a>3. <strong>Access Additional Arguments in Callback Methods</strong></h4>
<ul>
<li>The additional arguments are now accessible as instance variables in the callback’s methods.</li>
</ul>
<pre data-code-wrap="python"><code class="lang-python">def on_evaluate(self, args, state, control, **kwargs):
# Use self.policy_model and self.ref_model here
print(f"Policy Model: {self.policy_model}")
print(f"Reference Model: {self.ref_model}")
return super().on_evaluate(args, state, control, **kwargs)
</code></pre>
<hr>
<h3><a name="p-216473-explanation-5" class="anchor" href="#p-216473-explanation-5"></a><strong>Explanation</strong></h3>
<ul>
<li>The <code>Trainer</code> class in Hugging Face Transformers does not automatically pass additional arguments to callbacks. Instead, you must design your callback to accept and store these arguments during initialization.</li>
<li>By defining the <code>__init__</code> method in your custom callback, you can pass any arguments (e.g., <code>policy_model</code> and <code>ref_model</code>) and store them for use within the callback’s methods.</li>
<li>The <code>Trainer</code> class will call the callback methods (e.g., <code>on_evaluate</code>) during training, and your custom callback can access the stored arguments to perform the desired functionality.</li>
</ul>
<hr>
<h3><a name="p-216473-why-this-works-6" class="anchor" href="#p-216473-why-this-works-6"></a><strong>Why This Works</strong></h3>
<ul>
<li>The <code>Trainer</code> class supports custom callbacks by allowing users to add them via <code>add_callback()</code>. The <code>Trainer</code> does not restrict the number or type of arguments passed to the callback during initialization.</li>
<li>By storing the additional arguments as instance variables, you ensure they are available throughout the callback’s lifecycle and can be accessed in any method (e.g., <code>on_evaluate</code>, <code>on_train_begin</code>, etc.).</li>
</ul>
<hr>
<h3><a name="p-216473-faq-7" class="anchor" href="#p-216473-faq-7"></a><strong>FAQ</strong></h3>
<ol>
<li>
<p><strong>Do I need to modify the <code>Trainer</code> class?</strong></p>
<ul>
<li>No. The <code>Trainer</code> class already supports custom callbacks through the <code>add_callback()</code> method. You do not need to modify it to pass additional arguments to your callback.</li>
</ul>
</li>
<li>
<p><strong>Can I pass any type of argument to the callback?</strong></p>
<ul>
<li>Yes. You can pass any arguments (e.g., model instances, configuration parameters, or data structures) that your custom callback requires.</li>
</ul>
</li>
<li>
<p><strong>Where should I store the additional arguments?</strong></p>
<ul>
<li>Store them as instance variables in the <code>__init__</code> method of your custom callback. This ensures they are accessible throughout the callback’s lifecycle.</li>
</ul>
</li>
</ol>
<hr>
<h3><a name="p-216473-relevant-examples-8" class="anchor" href="#p-216473-relevant-examples-8"></a><strong>Relevant Examples</strong></h3>
<ul>
<li>In the <code>WandbCallback</code> example provided in the documentation [3], the callback accepts the <code>tokenizer</code> and <code>val_dataset</code> as arguments and stores them. Your custom callback can follow the same pattern.</li>
<li>For your specific use case, storing <code>policy_model</code> and <code>ref_model</code> in the callback’s <code>__init__</code> method ensures they are available during evaluation.</li>
</ul>
<hr>
<h3><a name="p-216473-conclusion-9" class="anchor" href="#p-216473-conclusion-9"></a><strong>Conclusion</strong></h3>
<p>To pass additional arguments to a custom <code>TrainerCallback</code> function, you do not need to modify the <code>Trainer</code> class. Instead, design your custom callback to accept and store these arguments during initialization. The <code>Trainer</code> will call the callback methods during training, and your custom callback can access the stored arguments as needed.</p>
<p>For more details, you can refer to the <a href="https://huggingface.co/docs/transformers/main/en/main_classes/callback#transformers.TrainerCallback">Hugging Face documentation on callbacks</a> [1][2][3].</p>
|
“Expected all tensors to be on the same device” with SFTTrainer
|
https://discuss.huggingface.co/t/expected-all-tensors-to-be-on-the-same-device-with-sfttrainer/152402
| 152,402
| 5
|
2025-04-26T12:29:02.987000Z
|
[
{
"id": 218336,
"name": "Timofey",
"username": "SoberSinceToday",
"avatar_template": "/user_avatar/discuss.huggingface.co/sobersincetoday/{size}/46374_2.png",
"created_at": "2025-04-26T12:29:03.063Z",
"cooked": "<p>I’m trying to fine-tune LLM model using Kaggle’s 2xT4 configuration</p>\n<p>Here’s my full code:</p>\n<pre><code class=\"lang-auto\">!pip install trl transformers datasets peft bitsandbytes\nfrom datasets import load_dataset, DatasetDict\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig\nfrom trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM\nfrom accelerate import Accelerator, PartialState\nfrom accelerate.utils import write_basic_config\nfrom peft import LoraConfig\nfrom torch import nn\nimport os, torch\n\nos.environ['WANDB_DISABLED']=\"true\"\n\ndata_path =\"/kaggle/input/misis-final-dataset\"\nmodel_name = \"yandex/YandexGPT-5-Lite-8B-pretrain\"\noutput_directory = \"/kaggle/working/\"\n\ndef formatting_prompts_func(data, last_mes_amount=10):\n ...\n return {'text' : f\"### PROMPT: {prompt}### OUTPUT: {data['output']}\"}\ndata = load_dataset(data_path, split=\"train\").map(formatting_prompts_func)\n\nbnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=torch.float16\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n torch_dtype=torch.float16,\n device_map='auto',\n quantization_config=bnb_config,\n use_cache=False\n)\n\ntokenizer = AutoTokenizer.from_pretrained(model_name,trust_remote_code=True,\n padding_side=\"left\", # Обрезаем начало, чтобы сохранять в контексте диалога последние сообщения\n add_eos_token=True,add_bos_token=True,\n use_fast=True)\ntokenizer.pad_token = tokenizer.eos_token\n\ninstruction_template = \"### PROMPT:\"\nresponse_template = \"### OUTPUT:\"\ncollator = DataCollatorForCompletionOnlyLM(instruction_template=instruction_template, response_template=response_template, \n tokenizer=tokenizer, mlm=False)\n\n\npeft_config = LoraConfig(\n r=8, \n lora_alpha=16, \n target_modules=[\"q_proj\", \"k_proj\", \"v_proj\"], \n lora_dropout=0.01, \n bias=\"all\",\n task_type=\"CAUSAL_LM\"\n )\n\ntraining_args=SFTConfig(\n label_names=[\"labels\"],\n output_dir=output_directory,\n \n per_device_train_batch_size=4,\n per_device_eval_batch_size=4, \n gradient_checkpointing = False,\n gradient_checkpointing_kwargs = {\"use_reentrant\": False}, \n\n gradient_accumulation_steps=1, \n num_train_epochs=3.0, \n learning_rate=2e-5, \n max_grad_norm=1.0, \n\n logging_strategy=\"steps\", \n logging_steps=5, \n save_strategy=\"steps\", \n save_steps=500, \n save_total_limit=3, \n save_safetensors=True, \n\n fp16=True, \n bf16=False, \n\n seed=42,\n\n remove_unused_columns=True, \n report_to=None, \n push_to_hub=False, \n\n\n ddp_find_unused_parameters=False,\n dataloader_pin_memory=False, \n skip_memory_metrics=True, \n disable_tqdm=False\n)\n\ntrainer = SFTTrainer(model=model,\n peft_config=peft_config,\n train_dataset=data,\n data_collator=collator,\n args=training_args,\n)\n\ntrainer.train()\n</code></pre>\n<p>Before i use trainer.train() The model is distributed across devices like:</p>\n<pre><code class=\"lang-auto\">{'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 1, 'model.layers.9': 1, 'model.layers.10': 1, 'model.layers.11': 1, 'model.layers.12': 1, 'model.layers.13': 1, 'model.layers.14': 1, 'model.layers.15': 1, 'model.layers.16': 1, 'model.layers.17': 1, 'model.layers.18': 1, 'model.layers.19': 1, 'model.layers.20': 1, 'model.layers.21': 1, 'model.layers.22': 1, 'model.layers.23': 
1, 'model.layers.24': 1, 'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 1, 'model.layers.28': 1, 'model.layers.29': 1, 'model.layers.30': 1, 'model.layers.31': 1, 'model.norm': 1, 'model.rotary_emb': 1, 'lm_head': 1}\n</code></pre>\n<p>I’ve tried to use only one GPU but got MemoryLimit, anyway I want to train it using 2 GPUs</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-04-26T12:30:12.778Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 61,
"reads": 7,
"readers_count": 6,
"score": 316.4,
"yours": false,
"topic_id": 152402,
"topic_slug": "expected-all-tensors-to-be-on-the-same-device-with-sfttrainer",
"display_username": "Timofey",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 92019,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/expected-all-tensors-to-be-on-the-same-device-with-sfttrainer/152402/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218344,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-26T13:10:33.834Z",
"cooked": "<p>It seems that this error may occur depending on the version of Transoformers. Of course, there are other possibilities…</p><aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"147337\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/rohitdiwane/48/44042_2.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337\">RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!</a> <a class=\"badge-category__wrapper \" href=\"/c/transformers/9\"><span data-category-id=\"9\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Transformers library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Transformers</span></span></a>\n </div>\n <blockquote>\n RuntimeError Traceback (most recent call last) \nCell In[29], line 2 \n1 # Train model \n----> 2 trainer.train() \n4 # # Start training from the last checkpoint \n5 # trainer.train(resume_from_checkpoint=checkpoint) \nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/transformers/trainer.py:2245, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) \n2243 hf_hub_utils.enable_progress_bars() \n2244 else: \n → 2245 return i…\n </blockquote>\n</aside>\n<aside class=\"quote quote-modified\" data-post=\"1\" data-topic=\"150275\">\n <div class=\"title\">\n <div class=\"quote-controls\"></div>\n <img alt=\"\" width=\"24\" height=\"24\" src=\"https://avatars.discourse-cdn.com/v4/letter/t/3da27b/48.png\" class=\"avatar\">\n <a href=\"https://discuss.huggingface.co/t/bitsandbytes-conflict-with-accelerate/150275\">BitsandBytes conflict with Accelerate</a> <a class=\"badge-category__wrapper \" href=\"/c/accelerate/18\"><span data-category-id=\"18\" style=\"--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;\" data-drop-close=\"true\" class=\"badge-category \" title=\"This category is for any question related to the Accelerate library. You can also file an issue.\"><span class=\"badge-category__name\">🤗Accelerate</span></span></a>\n </div>\n <blockquote>\n I’m running inference on a <a href=\"https://huggingface.co/openvla/openvla-7b\">custom VLM derived model</a>. Inference works fine when using the weights in their bfloat16 precision. However, when I try defining a BitsandBytes config, I receive errors that I suspect is due to conflicts between BitsandBytes and Accelerate, where Accelerate and BitsandBytes are both trying to set the compute device and hence generating the following stack trace. \nTraceback (most recent call last):\n File \"/home/tyr/RobotAI/openvla/scripts/extern/verify_prismatic.py\", l…\n </blockquote>\n</aside>\n",
"post_number": 2,
"post_type": 1,
"posts_count": 3,
"updated_at": "2025-04-26T13:10:33.834Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 24,
"reads": 7,
"readers_count": 6,
"score": 136.4,
"yours": false,
"topic_id": 152402,
"topic_slug": "expected-all-tensors-to-be-on-the-same-device-with-sfttrainer",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337",
"internal": true,
"reflection": false,
"title": "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!",
"clicks": 0
},
{
"url": "https://discuss.huggingface.co/t/bitsandbytes-conflict-with-accelerate/150275",
"internal": true,
"reflection": false,
"title": "BitsandBytes conflict with Accelerate",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/expected-all-tensors-to-be-on-the-same-device-with-sfttrainer/152402/2",
"reactions": [
{
"id": "heart",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218405,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-27T01:11:22.498Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 3,
"post_type": 3,
"posts_count": 3,
"updated_at": "2025-04-27T01:11:22.498Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 5,
"readers_count": 4,
"score": 16,
"yours": false,
"topic_id": 152402,
"topic_slug": "expected-all-tensors-to-be-on-the-same-device-with-sfttrainer",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/expected-all-tensors-to-be-on-the-same-device-with-sfttrainer/152402/3",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I’m trying to fine-tune an LLM using Kaggle’s 2xT4 configuration.</p>
<p>Here’s my full code:</p>
<pre><code class="lang-auto">!pip install trl transformers datasets peft bitsandbytes
from datasets import load_dataset, DatasetDict
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM
from accelerate import Accelerator, PartialState
from accelerate.utils import write_basic_config
from peft import LoraConfig
from torch import nn
import os, torch
os.environ['WANDB_DISABLED']="true"
data_path ="/kaggle/input/misis-final-dataset"
model_name = "yandex/YandexGPT-5-Lite-8B-pretrain"
output_directory = "/kaggle/working/"
def formatting_prompts_func(data, last_mes_amount=10):
...
return {'text' : f"### PROMPT: {prompt}### OUTPUT: {data['output']}"}
data = load_dataset(data_path, split="train").map(formatting_prompts_func)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map='auto',
quantization_config=bnb_config,
use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(model_name,trust_remote_code=True,
padding_side="left", # Обрезаем начало, чтобы сохранять в контексте диалога последние сообщения
add_eos_token=True,add_bos_token=True,
use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
instruction_template = "### PROMPT:"
response_template = "### OUTPUT:"
collator = DataCollatorForCompletionOnlyLM(instruction_template=instruction_template, response_template=response_template,
tokenizer=tokenizer, mlm=False)
peft_config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj", "k_proj", "v_proj"],
lora_dropout=0.01,
bias="all",
task_type="CAUSAL_LM"
)
training_args=SFTConfig(
label_names=["labels"],
output_dir=output_directory,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_checkpointing = False,
gradient_checkpointing_kwargs = {"use_reentrant": False},
gradient_accumulation_steps=1,
num_train_epochs=3.0,
learning_rate=2e-5,
max_grad_norm=1.0,
logging_strategy="steps",
logging_steps=5,
save_strategy="steps",
save_steps=500,
save_total_limit=3,
save_safetensors=True,
fp16=True,
bf16=False,
seed=42,
remove_unused_columns=True,
report_to=None,
push_to_hub=False,
ddp_find_unused_parameters=False,
dataloader_pin_memory=False,
skip_memory_metrics=True,
disable_tqdm=False
)
trainer = SFTTrainer(model=model,
peft_config=peft_config,
train_dataset=data,
data_collator=collator,
args=training_args,
)
trainer.train()
</code></pre>
<p>Before I call trainer.train(), the model is distributed across the devices like this:</p>
<pre><code class="lang-auto">{'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 1, 'model.layers.9': 1, 'model.layers.10': 1, 'model.layers.11': 1, 'model.layers.12': 1, 'model.layers.13': 1, 'model.layers.14': 1, 'model.layers.15': 1, 'model.layers.16': 1, 'model.layers.17': 1, 'model.layers.18': 1, 'model.layers.19': 1, 'model.layers.20': 1, 'model.layers.21': 1, 'model.layers.22': 1, 'model.layers.23': 1, 'model.layers.24': 1, 'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 1, 'model.layers.28': 1, 'model.layers.29': 1, 'model.layers.30': 1, 'model.layers.31': 1, 'model.norm': 1, 'model.rotary_emb': 1, 'lm_head': 1}
</code></pre>
<p>I’ve tried using only one GPU but hit the memory limit; in any case, I want to train using 2 GPUs.</p>
|
<p>It seems that this error may occur depending on the version of Transformers. Of course, there are other possibilities…</p><aside class="quote quote-modified" data-post="1" data-topic="147337">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://sea2.discourse-cdn.com/hellohellohello/user_avatar/discuss.huggingface.co/rohitdiwane/48/44042_2.png" class="avatar">
<a href="https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cuda-7-and-cuda-0/147337">RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0!</a> <a class="badge-category__wrapper " href="/c/transformers/9"><span data-category-id="9" style="--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="This category is for any question related to the Transformers library. You can also file an issue."><span class="badge-category__name">🤗Transformers</span></span></a>
</div>
<blockquote>
RuntimeError Traceback (most recent call last)
Cell In[29], line 2
1 # Train model
----> 2 trainer.train()
4 # # Start training from the last checkpoint
5 # trainer.train(resume_from_checkpoint=checkpoint)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/transformers/trainer.py:2245, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2243 hf_hub_utils.enable_progress_bars()
2244 else:
→ 2245 return i…
</blockquote>
</aside>
<aside class="quote quote-modified" data-post="1" data-topic="150275">
<div class="title">
<div class="quote-controls"></div>
<img alt="" width="24" height="24" src="https://avatars.discourse-cdn.com/v4/letter/t/3da27b/48.png" class="avatar">
<a href="https://discuss.huggingface.co/t/bitsandbytes-conflict-with-accelerate/150275">BitsandBytes conflict with Accelerate</a> <a class="badge-category__wrapper " href="/c/accelerate/18"><span data-category-id="18" style="--category-badge-color: #F7941D; --category-badge-text-color: #FFFFFF;" data-drop-close="true" class="badge-category " title="This category is for any question related to the Accelerate library. You can also file an issue."><span class="badge-category__name">🤗Accelerate</span></span></a>
</div>
<blockquote>
I’m running inference on a <a href="https://huggingface.co/openvla/openvla-7b">custom VLM derived model</a>. Inference works fine when using the weights in their bfloat16 precision. However, when I try defining a BitsandBytes config, I receive errors that I suspect is due to conflicts between BitsandBytes and Accelerate, where Accelerate and BitsandBytes are both trying to set the compute device and hence generating the following stack trace.
Traceback (most recent call last):
File "/home/tyr/RobotAI/openvla/scripts/extern/verify_prismatic.py", l…
</blockquote>
</aside>
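<p>If version pinning does not help, a commonly suggested workaround for multi-GPU QLoRA training is to give each process its own full copy of the model instead of sharding the layers across GPUs. A minimal sketch, reusing <code>model_name</code> and <code>bnb_config</code> from the question above (whether this applies depends on your transformers/accelerate versions and launch setup):</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
from accelerate import PartialState
from transformers import AutoModelForCausalLM

# Place the entire model on this process's GPU (one replica per GPU, DDP-style)
# instead of splitting layers across GPUs with device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map={"": PartialState().process_index},
    quantization_config=bnb_config,
    use_cache=False,
)
</code></pre>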
|
Not able to access meta-llama/Llama-3.2-3B-Instruct
|
https://discuss.huggingface.co/t/not-able-to-access-meta-llama-llama-3-2-3b-instruct/152277
| 152,277
| 5
|
2025-04-25T08:54:57.311000Z
|
[
{
"id": 218146,
"name": "Gaurav Sehgal",
"username": "gsehgal",
"avatar_template": "/user_avatar/discuss.huggingface.co/gsehgal/{size}/46306_2.png",
"created_at": "2025-04-25T08:54:57.374Z",
"cooked": "<p>I am taking the Agent course in hugging face and keep getting the following error:</p>\n<p>HfHubHTTPError: 503 Server Error: Service Temporarily Unavailable for url: <a href=\"https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct\">https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct</a></p>\n<p>When I execute the following cell:</p>\n<p>client = InferenceClient(“meta-llama/Llama-3.2-3B-Instruct”)<br>\noutput = client.text_generation(<br>\n“The capital of france is”,<br>\nmax_new_tokens=100,<br>\n)</p>\n<p>print(output)</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-04-25T08:54:57.374Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 417,
"reads": 20,
"readers_count": 19,
"score": 2094,
"yours": false,
"topic_id": 152277,
"topic_slug": "not-able-to-access-meta-llama-llama-3-2-3b-instruct",
"display_username": "Gaurav Sehgal",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct",
"internal": false,
"reflection": false,
"title": null,
"clicks": 7
}
],
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91919,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-able-to-access-meta-llama-llama-3-2-3b-instruct/152277/1",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218150,
"name": "Gaurav Sehgal",
"username": "gsehgal",
"avatar_template": "/user_avatar/discuss.huggingface.co/gsehgal/{size}/46306_2.png",
"created_at": "2025-04-25T09:01:19.873Z",
"cooked": "<p>is there any other model I can use for the course, I am new to huggingface, so not sure what to do. any help will be appreciated.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-04-25T09:01:19.873Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 8,
"reads": 19,
"readers_count": 18,
"score": 58.8,
"yours": false,
"topic_id": 152277,
"topic_slug": "not-able-to-access-meta-llama-llama-3-2-3b-instruct",
"display_username": "Gaurav Sehgal",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 91919,
"hidden": false,
"trust_level": 0,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-able-to-access-meta-llama-llama-3-2-3b-instruct/152277/2",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218157,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-04-25T10:45:59.379Z",
"cooked": "<p>Same here… <a class=\"mention\" href=\"/u/michellehbn\">@michellehbn</a></p>\n<pre data-code-wrap=\"py\"><code class=\"lang-py\">from huggingface_hub import InferenceClient\n\n#model_id = \"facebook/opt-1.3b\" # No response for a long time...\n#model_id = \"HuggingFaceTB/SmolLM2-135M-Instruct\" # 503 => working\n#model_id = \"Qwen/Qwen2.5-3B-Instruct\" # 503 => no response for a long time...\n#model_id = \"meta-llama/Llama-3.2-3B-Instruct\" # 503\nmodel_id = \"Qwen/QwQ-32B\" # Paris. The Eiffel Tower is a famous landmark there. If I want to visit the Louvre Museum, which city should I go to? You should go to Paris, France, to visit the Louvre Museum. The Louvre is one of the world's largest and most famous museums, housing thousands of art pieces, including the Mona Lisa. It's located in the heart of Paris, near the Seine River. Enjoy your trip! 🗼✨ Wait, I thought the\n\nHF_TOKEN = \"hf_my_pro_read_token\"\n\n# Initialize Hugging Face InferenceClient\nclient = InferenceClient(\n model=model_id,\n token=HF_TOKEN,\n provider=\"hf-inference\",\n timeout=600,\n)\n\nresult = client.text_generation(\n prompt=\"The capital of france is\",\n max_new_tokens=100,\n)\n\nprint(result)\n</code></pre>",
"post_number": 3,
"post_type": 1,
"posts_count": 4,
"updated_at": "2025-04-25T10:45:59.379Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 6,
"reads": 17,
"readers_count": 16,
"score": 48.4,
"yours": false,
"topic_id": 152277,
"topic_slug": "not-able-to-access-meta-llama-llama-3-2-3b-instruct",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://discuss.huggingface.co/t/problem-in-agents-course/150210/7",
"internal": true,
"reflection": true,
"title": "Problem in Agents Course",
"clicks": 3
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-able-to-access-meta-llama-llama-3-2-3b-instruct/152277/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 218270,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-25T22:46:05.497Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 4,
"post_type": 3,
"posts_count": 4,
"updated_at": "2025-04-25T22:46:05.497Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 12,
"readers_count": 11,
"score": 2.4,
"yours": false,
"topic_id": 152277,
"topic_slug": "not-able-to-access-meta-llama-llama-3-2-3b-instruct",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/not-able-to-access-meta-llama-llama-3-2-3b-instruct/152277/4",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I am taking the Agents course on Hugging Face and keep getting the following error:</p>
<p>HfHubHTTPError: 503 Server Error: Service Temporarily Unavailable for url: <a href="https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct">https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.2-3B-Instruct</a></p>
<p>When I execute the following cell:</p>
<pre data-code-wrap="py"><code class="lang-py">client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")
output = client.text_generation(
    "The capital of france is",
    max_new_tokens=100,
)

print(output)
</code></pre>
|
<p>Same here… <a class="mention" href="/u/michellehbn">@michellehbn</a></p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import InferenceClient
#model_id = "facebook/opt-1.3b" # No response for a long time...
#model_id = "HuggingFaceTB/SmolLM2-135M-Instruct" # 503 => working
#model_id = "Qwen/Qwen2.5-3B-Instruct" # 503 => no response for a long time...
#model_id = "meta-llama/Llama-3.2-3B-Instruct" # 503
model_id = "Qwen/QwQ-32B" # Paris. The Eiffel Tower is a famous landmark there. If I want to visit the Louvre Museum, which city should I go to? You should go to Paris, France, to visit the Louvre Museum. The Louvre is one of the world's largest and most famous museums, housing thousands of art pieces, including the Mona Lisa. It's located in the heart of Paris, near the Seine River. Enjoy your trip! 🗼✨ Wait, I thought the
HF_TOKEN = "hf_my_pro_read_token"
# Initialize Hugging Face InferenceClient
client = InferenceClient(
model=model_id,
token=HF_TOKEN,
provider="hf-inference",
timeout=600,
)
result = client.text_generation(
prompt="The capital of france is",
max_new_tokens=100,
)
print(result)
</code></pre>
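<p>The commented-out lines above amount to trying model IDs one at a time by hand. Below is a minimal sketch of automating that fallback; the candidate list, prompt, and retry behaviour are illustrative assumptions, not something from the original posts.</p>
<pre data-code-wrap="py"><code class="lang-py">from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

# Candidate models, tried in order (IDs taken from the comments above).
CANDIDATES = [
    "meta-llama/Llama-3.2-3B-Instruct",
    "Qwen/Qwen2.5-3B-Instruct",
    "Qwen/QwQ-32B",
]

def generate_with_fallback(prompt, token):
    last_error = None
    for model_id in CANDIDATES:
        client = InferenceClient(model=model_id, token=token,
                                 provider="hf-inference", timeout=600)
        try:
            return client.text_generation(prompt, max_new_tokens=100)
        except HfHubHTTPError as e:  # e.g. 503 Service Temporarily Unavailable
            last_error = e           # fall through to the next candidate
    raise last_error

print(generate_with_fallback("The capital of france is", "hf_my_pro_read_token"))
</code></pre>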
|
What is the most efficient way to dynamically change context mid-generation?
|
https://discuss.huggingface.co/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892
| 147,892
| 9
|
2025-03-28T20:47:30.328000Z
|
[
{
"id": 212100,
"name": "Blazgo",
"username": "Blazgo",
"avatar_template": "/user_avatar/discuss.huggingface.co/blazgo/{size}/44330_2.png",
"created_at": "2025-03-28T20:47:30.392Z",
"cooked": "<p>I learnt a little about LLMs and know that they just loop through the conversation many times and generate a token each time. Is it somehow possible to detect a sequence in the generation and dynamically append context?</p>\n<blockquote>\n<p><strong>Some background information</strong><br>\nI want to build agentic chatbots, cheaply. Here’s the problem:<br>\nLet’s say that input is $3/Mtok and we have 10K tokens. The input cost is 3 cents<br>\nI want to have the chatbot retrieve the necessary information, and perform actions, but it is not very efficient. 5 or 10 tool calls may be ok but over time 100s will cost lots, not counting reasoning tokens and output. So since I know that LLMs just loop while generating content, I want to try to use opensource models to do the job, and when tool calls are detected, just append to the beginning of the message.</p>\n</blockquote>\n<p>I know I can stop the generation and restart it with context but is there a more efficient way. Maybe this is related to why LLMs have a longer time to first token than token per second (as restarting generation would be like again pausing for the time to first token)</p>",
"post_number": 1,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-03-28T20:47:30.392Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 96,
"reads": 7,
"readers_count": 6,
"score": 451.4,
"yours": false,
"topic_id": 147892,
"topic_slug": "what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation",
"display_username": "Blazgo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 88817,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892/1",
"reactions": [
{
"id": "eyes",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": false,
"title_is_group": null,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 212150,
"name": "John Smith",
"username": "John6666",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png",
"created_at": "2025-03-29T07:19:26.302Z",
"cooked": "<p>For example, how about RAG approach?</p>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://python.langchain.com/docs/tutorials/rag/\">\n <header class=\"source\">\n <img src=\"https://us1.discourse-cdn.com/hellohellohello/original/3X/a/6/a6c11d41373802deca73cc066c22326bc9e2a618.png\" class=\"site-icon\" data-dominant-color=\"5D7376\" width=\"32\" height=\"32\">\n\n <a href=\"https://python.langchain.com/docs/tutorials/rag/\" target=\"_blank\" rel=\"noopener\">python.langchain.com</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/360;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/0/d/0d1a958541ff86ef0ce789e860655617cfab3eca_2_690x360.png\" class=\"thumbnail\" data-dominant-color=\"2F494A\" width=\"690\" height=\"360\"></div>\n\n<h3><a href=\"https://python.langchain.com/docs/tutorials/rag/\" target=\"_blank\" rel=\"noopener\">Build a Retrieval Augmented Generation (RAG) App: Part 1 | 🦜️🔗 LangChain</a></h3>\n\n <p>One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval...</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n<aside class=\"onebox allowlistedgeneric\" data-onebox-src=\"https://huggingface.co/learn/agents-course/unit2/smolagents/retrieval_agents\">\n <header class=\"source\">\n\n <a href=\"https://huggingface.co/learn/agents-course/unit2/smolagents/retrieval_agents\" target=\"_blank\" rel=\"noopener\">huggingface.co</a>\n </header>\n\n <article class=\"onebox-body\">\n <div class=\"aspect-image\" style=\"--aspect-ratio:690/372;\"><img src=\"https://us1.discourse-cdn.com/hellohellohello/optimized/3X/d/8/d8c4ffb86585c4f4591be71d9c6e11b57353c350_2_690x372.png\" class=\"thumbnail\" data-dominant-color=\"EEEBE4\" width=\"690\" height=\"372\"></div>\n\n<h3><a href=\"https://huggingface.co/learn/agents-course/unit2/smolagents/retrieval_agents\" target=\"_blank\" rel=\"noopener\">Building Agentic RAG Systems - Hugging Face Agents Course</a></h3>\n\n <p>We’re on a journey to advance and democratize artificial intelligence through open source and open science.</p>\n\n\n </article>\n\n <div class=\"onebox-metadata\">\n \n \n </div>\n\n <div style=\"clear: both\"></div>\n</aside>\n\n<hr>\n<p>To build an efficient and cost-effective agentic chatbot with dynamic context modification during generation, consider the following approach, drawing insights from the provided sources:</p>\n<ol>\n<li>\n<p><strong>Dynamic Context Augmentation with RAG</strong>: Integrate Retrieval-Augmented Generation (RAG) to dynamically retrieve and append relevant information to the context when needed. This avoids frequent expensive tool calls by augmenting the model’s knowledge in real-time [1].</p>\n</li>\n<li>\n<p><strong>Efficient Context Pruning with LazyLLM</strong>: Implement LazyLLM to dynamically prune unnecessary tokens during prefilling and decoding. This keeps the context focused on generating the next token, optimizing resource usage and reducing the overall context length [3].</p>\n</li>\n<li>\n<p><strong>Resource Decoupling with Infinite-LLM</strong>: Utilize the approach from Infinite-LLM to decouple attention layers from the rest of the model, enabling flexible and efficient resource scheduling. 
This allows dynamic context modifications without restarting the generation process, saving time and resources [2].</p>\n</li>\n<li>\n<p><strong>Tool Call Detection and Context Update</strong>: Monitor the generation process for triggers indicating a need for tool calls. When detected, append the necessary information to the beginning of the message and update the KVCache, allowing the model to continue generation smoothly without interruption [2][3].</p>\n</li>\n</ol>\n<p>By combining these techniques, you can create a chatbot that efficiently modifies its context dynamically during generation, reducing costs and improving performance. The strategy focuses on minimizing tool calls, optimizing context length, and enhancing resource management, all of which contribute to a more efficient and scalable solution.</p>\n<p>This approach aligns with current advancements in dynamic context handling, leveraging both pruning and resource decoupling to maintain efficiency while ensuring that the chatbot remains cost-effective and responsive.</p>",
"post_number": 2,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-03-29T07:19:26.302Z",
"reply_count": 1,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 4,
"reads": 6,
"readers_count": 5,
"score": 21.2,
"yours": false,
"topic_id": 147892,
"topic_slug": "what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation",
"display_username": "John Smith",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": [
{
"url": "https://huggingface.co/learn/agents-course/unit2/smolagents/retrieval_agents",
"internal": false,
"reflection": false,
"title": "Building Agentic RAG Systems - Hugging Face Agents Course",
"clicks": 3
},
{
"url": "https://python.langchain.com/docs/tutorials/rag/",
"internal": false,
"reflection": false,
"title": "Build a Retrieval Augmented Generation (RAG) App: Part 1 | 🦜️🔗 LangChain",
"clicks": 0
}
],
"read": true,
"user_title": "Regular",
"bookmarked": false,
"actions_summary": [],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 52272,
"hidden": false,
"trust_level": 3,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892/2",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": false,
"reply_to_user": null,
"action_code": null,
"via_email": null
},
{
"id": 213086,
"name": "Blazgo",
"username": "Blazgo",
"avatar_template": "/user_avatar/discuss.huggingface.co/blazgo/{size}/44330_2.png",
"created_at": "2025-04-02T23:37:17.882Z",
"cooked": "<p>I already know about RAG. I’m talking more about efficiency<br>\nFor RAG I’d have to do 2 requests, but I want to do it with one call, effectively using less requests</p>",
"post_number": 3,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-02T23:37:17.882Z",
"reply_count": 1,
"reply_to_post_number": 2,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 5,
"readers_count": 4,
"score": 21,
"yours": false,
"topic_id": 147892,
"topic_slug": "what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation",
"display_username": "Blazgo",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 88817,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892/3",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 52272,
"username": "John6666",
"name": "John Smith",
"avatar_template": "/user_avatar/discuss.huggingface.co/john6666/{size}/27664_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 213088,
"name": "Joshua Getner",
"username": "jgetner",
"avatar_template": "https://avatars.discourse-cdn.com/v4/letter/j/5e9695/{size}.png",
"created_at": "2025-04-02T23:52:39.990Z",
"cooked": "<p>I do not think what you want to achieve is possible without the model being able to explicitly do routing or gating based on the input. If you can modify the model structure you could achieve this with a gating mechanism. This would be the contextual change you are seeking based on 1 input that could be split into many different inputs internally. You would need some sort of marker to inform the gate on when 1 input ends and another starts but that can easily be achieved with a marker or tag. You also could do this with strait python by preprocessing the inputs before passing them into the model. But this would all need to be built in.</p>",
"post_number": 4,
"post_type": 1,
"posts_count": 5,
"updated_at": "2025-04-02T23:52:39.990Z",
"reply_count": 0,
"reply_to_post_number": 3,
"quote_count": 0,
"incoming_link_count": 3,
"reads": 5,
"readers_count": 4,
"score": 31,
"yours": false,
"topic_id": 147892,
"topic_slug": "what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation",
"display_username": "Joshua Getner",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [
{
"id": 2,
"count": 1
}
],
"moderator": false,
"admin": false,
"staff": false,
"user_id": 89186,
"hidden": false,
"trust_level": 1,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892/4",
"reactions": [
{
"id": "+1",
"type": "emoji",
"count": 1
}
],
"current_user_reaction": null,
"reaction_users_count": 1,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": true,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": {
"id": 88817,
"username": "Blazgo",
"name": "Blazgo",
"avatar_template": "/user_avatar/discuss.huggingface.co/blazgo/{size}/44330_2.png"
},
"action_code": null,
"via_email": null
},
{
"id": 217798,
"name": "system",
"username": "system",
"avatar_template": "https://us1.discourse-cdn.com/hellohellohello/original/2X/d/de4155eb4aa4108ecb32a1389d7cc37ae69f88b7.png",
"created_at": "2025-04-23T22:24:28.076Z",
"cooked": "<p>This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.</p>",
"post_number": 5,
"post_type": 3,
"posts_count": 5,
"updated_at": "2025-04-23T22:24:28.076Z",
"reply_count": 0,
"reply_to_post_number": null,
"quote_count": 0,
"incoming_link_count": 0,
"reads": 2,
"readers_count": 1,
"score": 0.4,
"yours": false,
"topic_id": 147892,
"topic_slug": "what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation",
"display_username": "system",
"primary_group_name": null,
"flair_name": null,
"flair_url": null,
"flair_bg_color": null,
"flair_color": null,
"flair_group_id": null,
"badges_granted": [],
"version": 1,
"can_edit": false,
"can_delete": false,
"can_recover": false,
"can_see_hidden_post": false,
"can_wiki": false,
"link_counts": null,
"read": true,
"user_title": null,
"bookmarked": false,
"actions_summary": [],
"moderator": true,
"admin": true,
"staff": true,
"user_id": -1,
"hidden": false,
"trust_level": 4,
"deleted_at": null,
"user_deleted": false,
"edit_reason": null,
"can_view_edit_history": true,
"wiki": false,
"post_url": "/t/what-is-the-most-efficient-way-to-dynamically-change-context-mid-generation/147892/5",
"reactions": [],
"current_user_reaction": null,
"reaction_users_count": 0,
"current_user_used_main_reaction": false,
"can_accept_answer": false,
"can_unaccept_answer": false,
"accepted_answer": false,
"topic_accepted_answer": true,
"can_vote": null,
"title_is_group": null,
"reply_to_user": null,
"action_code": "autoclosed.enabled",
"via_email": null
}
] |
<p>I learnt a little about LLMs and know that they just loop through the conversation many times and generate a token each time. Is it somehow possible to detect a sequence in the generation and dynamically append context?</p>
<blockquote>
<p><strong>Some background information</strong><br>
I want to build agentic chatbots, cheaply. Here’s the problem:<br>
Let’s say that input is $3/Mtok and we have 10K tokens. The input cost is 3 cents.<br>
I want to have the chatbot retrieve the necessary information, and perform actions, but it is not very efficient. 5 or 10 tool calls may be OK, but over time hundreds will cost a lot, not counting reasoning tokens and output. So since I know that LLMs just loop while generating content, I want to try to use open-source models to do the job, and when tool calls are detected, just append to the beginning of the message.</p>
</blockquote>
<p>I know I can stop the generation and restart it with context, but is there a more efficient way? Maybe this is related to why LLMs have a longer time to first token than time per subsequent token (as restarting generation would mean paying the time-to-first-token cost again).</p>
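<p>As a rough illustration of the stop-and-resume idea, here is a minimal sketch using transformers: a plain greedy decoding loop that keeps the KV cache, watches the decoded text for a marker, and appends extra context at the end of the sequence so the cache stays valid and only the appended tokens pay a prefill cost. The model ID, marker string, tool result, and the simplified marker detection are all illustrative assumptions, not anything from the original thread.</p>
<pre data-code-wrap="py"><code class="lang-py">import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"  # illustrative choice
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def generate(prompt, max_new_tokens=100, stop_marker="[TOOL]"):
    # Greedy decoding sketch: no sampling, no EOS handling.
    ids = tok(prompt, return_tensors="pt").input_ids
    past, text, n = None, "", ids.shape[1]
    for _ in range(max_new_tokens):
        out = model(input_ids=ids,
                    attention_mask=torch.ones(1, n, dtype=torch.long),
                    past_key_values=past, use_cache=True)
        past = out.past_key_values                 # keep the cache across steps
        ids = out.logits[:, -1:].argmax(-1)        # greedy next token, shape (1, 1)
        n += 1
        text += tok.decode(ids[0])
        if text.endswith(stop_marker):             # simplified marker detection
            result = " [RESULT] Paris "            # illustrative tool output
            ids = tok(result, return_tensors="pt",
                      add_special_tokens=False).input_ids
            n += ids.shape[1]
            text += result                         # appended at the END, so the
                                                   # existing KV cache stays valid
    return text
</code></pre>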
|
<p>I do not think what you want to achieve is possible without the model being able to explicitly do routing or gating based on the input. If you can modify the model structure you could achieve this with a gating mechanism. This would be the contextual change you are seeking based on 1 input that could be split into many different inputs internally. You would need some sort of marker to inform the gate on when 1 input ends and another starts, but that can easily be achieved with a marker or tag. You could also do this with straight Python by preprocessing the inputs before passing them into the model. But this would all need to be built in.</p>
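<p>For the plain-Python preprocessing idea mentioned above, a minimal sketch might look like the following; the marker syntax and the routing table are illustrative assumptions, not from the original reply.</p>
<pre data-code-wrap="py"><code class="lang-py">import re

# Route names map to segment handlers; a real system might call RAG here.
ROUTES = {
    "retrieve": lambda seg: "[context] " + seg,  # e.g. swap in retrieved docs
    "chat":     lambda seg: seg,                 # pass through untouched
}

def preprocess(raw):
    # Inputs are assumed to look like "[route:retrieve] ... [route:chat] ...".
    parts = re.split(r"\[route:(\w+)\]", raw)
    # re.split with a capture group yields [before, name1, seg1, name2, seg2, ...]
    segments = [ROUTES.get(name, ROUTES["chat"])(seg.strip())
                for name, seg in zip(parts[1::2], parts[2::2])]
    return "\n".join(segments)

print(preprocess("[route:retrieve] Louvre opening hours [route:chat] plan my day"))
# prints "[context] Louvre opening hours" then "plan my day"
</code></pre>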
|