# ViViT_wlasl_100_200ep_coR_
This model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.6046
- Accuracy: 0.6716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
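No preprocessing details are given in this card, but the base checkpoint (google/vivit-b-16x2-kinetics400) expects fixed-length clips of 32 frames, so videos typically need a temporal-sampling step before inference. A minimal sketch of uniform frame sampling, assuming a simple index-based pipeline (the function name `sample_frame_indices` is illustrative, not from this repository):

```python
def sample_frame_indices(clip_len, total_frames):
    """Pick `clip_len` frame indices spread uniformly across a video of `total_frames` frames."""
    if total_frames < clip_len:
        # Short videos: pad by repeating the last frame index up to clip_len.
        return list(range(total_frames)) + [total_frames - 1] * (clip_len - total_frames)
    step = total_frames / clip_len
    # Round each uniformly spaced position down to a valid frame index.
    return [min(int(i * step), total_frames - 1) for i in range(clip_len)]


# Example: sample a 32-frame clip from a 100-frame video.
indices = sample_frame_indices(32, 100)
```

The sampled indices would then be used to gather frames before passing them to the ViViT image processor.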
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 36000
- mixed_precision_training: Native AMP
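The schedule above (linear with a 0.1 warmup ratio over 36000 steps) warms the learning rate up from 0 to 5e-05 over the first 3600 steps and then decays it linearly back to 0. A minimal sketch of that schedule as a pure function (illustrative, not the training code itself):

```python
def lr_at_step(step, base_lr=5e-5, total_steps=36000, warmup_ratio=0.1):
    """Linear warmup to base_lr over the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3600 with these settings
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: fall linearly from base_lr at warmup end to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)


# Peak learning rate is reached at the end of warmup.
peak = lr_at_step(3600)
```

Note also that with train_batch_size 2 and gradient_accumulation_steps 4, the effective (total) train batch size is 2 × 4 = 8, matching the value listed above.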
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 18.8652 | 0.005 | 180 | 4.6810 | 0.0148 |
| 18.549 | 1.0050 | 360 | 4.6061 | 0.0325 |
| 18.0062 | 2.0050 | 540 | 4.5292 | 0.0414 |
| 17.0211 | 3.0050 | 721 | 4.3745 | 0.0710 |
| 15.802 | 4.005 | 901 | 4.1370 | 0.1036 |
| 14.0884 | 5.0050 | 1081 | 3.8357 | 0.1746 |
| 12.1547 | 6.0050 | 1261 | 3.5013 | 0.2633 |
| 10.1205 | 7.0050 | 1442 | 3.1863 | 0.3284 |
| 8.1175 | 8.005 | 1622 | 2.8482 | 0.3757 |
| 6.3088 | 9.0050 | 1802 | 2.5647 | 0.4704 |
| 4.6768 | 10.0050 | 1982 | 2.2896 | 0.5030 |
| 3.2458 | 11.0050 | 2163 | 2.0975 | 0.5444 |
| 2.2535 | 12.005 | 2343 | 1.9396 | 0.5799 |
| 1.428 | 13.0050 | 2523 | 1.7295 | 0.6006 |
| 0.8599 | 14.0050 | 2703 | 1.6543 | 0.6183 |
| 0.5308 | 15.0050 | 2884 | 1.5458 | 0.6124 |
| 0.3372 | 16.005 | 3064 | 1.5154 | 0.6095 |
| 0.1854 | 17.0050 | 3244 | 1.5216 | 0.6302 |
| 0.1656 | 18.0050 | 3424 | 1.4448 | 0.6361 |
| 0.0996 | 19.0050 | 3605 | 1.4351 | 0.6538 |
| 0.1046 | 20.005 | 3785 | 1.4932 | 0.6479 |
| 0.1091 | 21.0050 | 3965 | 1.2451 | 0.6893 |
| 0.0605 | 22.0050 | 4145 | 1.3669 | 0.6716 |
| 0.1048 | 23.0050 | 4326 | 1.3276 | 0.6982 |
| 0.0915 | 24.005 | 4506 | 1.3500 | 0.6746 |
| 0.0642 | 25.0050 | 4686 | 1.4862 | 0.6065 |
| 0.1054 | 26.0050 | 4866 | 1.6206 | 0.6154 |
| 0.1265 | 27.0050 | 5047 | 1.4605 | 0.6420 |
| 0.0877 | 28.005 | 5227 | 1.5949 | 0.6331 |
| 0.1579 | 29.0050 | 5407 | 1.5345 | 0.6450 |
| 0.1928 | 30.0050 | 5587 | 1.6247 | 0.6450 |
| 0.11 | 31.0050 | 5768 | 1.6054 | 0.6450 |
| 0.0869 | 32.005 | 5948 | 1.5318 | 0.6538 |
| 0.1257 | 33.0050 | 6128 | 1.7027 | 0.6420 |
| 0.1298 | 34.0050 | 6308 | 1.5279 | 0.6479 |
| 0.1271 | 35.0050 | 6489 | 1.5453 | 0.6420 |
| 0.1827 | 36.005 | 6669 | 1.7248 | 0.6391 |
| 0.0853 | 37.0050 | 6849 | 1.5689 | 0.6746 |
| 0.1227 | 38.0050 | 7029 | 1.8474 | 0.6065 |
| 0.1244 | 39.0050 | 7210 | 1.7365 | 0.6538 |
| 0.1471 | 40.005 | 7390 | 1.6086 | 0.6538 |
| 0.1105 | 41.0050 | 7570 | 1.7311 | 0.6509 |
| 0.1412 | 42.0050 | 7750 | 1.6021 | 0.6686 |
| 0.0758 | 43.0050 | 7931 | 1.6046 | 0.6716 |
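Validation accuracy peaks well before the final step: the best logged value is 0.6982 around epoch 23, versus 0.6716 at the end of training. A quick check over the higher-accuracy rows of the table above (epochs rounded down):

```python
# (epoch, validation accuracy) pairs taken from the table above.
logged = [
    (21, 0.6893),
    (22, 0.6716),
    (23, 0.6982),
    (24, 0.6746),
    (37, 0.6746),
    (42, 0.6686),
    (43, 0.6716),
]

# Find the checkpoint with the highest validation accuracy.
best_epoch, best_acc = max(logged, key=lambda pair: pair[1])
```

If the per-epoch checkpoints were saved, the epoch-23 checkpoint may be preferable to the final one.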
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1