Summary
Distilled with the Distily library using the teacher model gpt2 on the wikimedia/wikipedia dataset.
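A minimal usage sketch, assuming the checkpoint is published under the repository id this card appears under (distily/distily_validate_extra_grad_stats4) and is loadable with the standard transformers API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this card; adjust if the checkpoint lives elsewhere.
repo_id = "distily/distily_validate_extra_grad_stats4"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Distillation compresses a language model by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```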
Model Architecture:
- Architecture: GPT2LMHeadModel
- Total Parameters: 81,912,576
- Data Type (dtype): torch.bfloat16
- Model Size: 0.16 GB
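The figures above can be checked directly from a loaded model; a minimal sketch (the 0.16 GB size follows from roughly 82M parameters at 2 bytes each in bfloat16):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "distily/distily_validate_extra_grad_stats4", torch_dtype=torch.bfloat16
)

# Tied embeddings/lm_head are counted once by model.parameters().
total_params = sum(p.numel() for p in model.parameters())
size_gb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e9

print(f"Total parameters: {total_params:,}")  # expected: 81,912,576
print(f"Approx. size: {size_gb:.2f} GB")      # expected: ~0.16 GB
```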
Benchmark Metrics Comparison
| Metric |
| :--- |
Resource Usage Comparison
- VRAM Use: 7.4254 GB
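A sketch of how a peak-VRAM figure like the one above could be measured with PyTorch's CUDA memory statistics; the exact point at which Distily records this number is not shown in this card:

```python
import torch

torch.cuda.reset_peak_memory_stats()

# ... run a training or evaluation step here ...

peak_gb = torch.cuda.max_memory_allocated() / 1e9
print(f"Peak VRAM use: {peak_gb:.4f} GB")
```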
Distillation (Teacher -> Student) Architecture Difference:
- Architecture: GPT2LMHeadModel -> GPT2LMHeadModel
- Total Parameters: 124,439,808 -> 81,912,576
- Data Type (dtype): torch.bfloat16 -> torch.bfloat16
- Model Size: 0.24 GB -> 0.16 GB
Module Diff Details
```diff
--- teacher model modules
+++ student model modules
@@ -4,7 +4,7 @@
     (wpe): Embedding(1024, 768)
     (drop): Dropout(p=0.1, inplace=False)
     (h): ModuleList(
-      (0-11): 12 x GPT2Block(
+      (0-5): 6 x GPT2Block(
         (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
         (attn): GPT2FlashAttention2(
           (c_attn): Conv1D()
```
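The diff above shows the only structural change: the student keeps GPT-2's hidden size and attention layout but uses 6 transformer blocks instead of 12. A hedged sketch of how such a student could be instantiated from the listed config source (distilbert/distilgpt2); Distily's actual initialization code is not reproduced in this card:

```python
from transformers import AutoConfig, GPT2LMHeadModel

# Teacher: 12-layer GPT-2; student config: 6-layer distilgpt2.
teacher = GPT2LMHeadModel.from_pretrained("gpt2")
student_config = AutoConfig.from_pretrained("distilbert/distilgpt2")

student = GPT2LMHeadModel(student_config)  # freshly initialized 6-block student

print(teacher.config.n_layer, "->", student.config.n_layer)  # 12 -> 6
```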
Train Dataset
Trained on 6,814,337 tokens from the wikimedia/wikipedia dataset.
- Num Samples: 9,900
- Subset: 20231101.en
- Split: train
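A sketch of how this training split could be assembled with the datasets library, using the dataset settings from the hyperparameters listed below; the exact sampling and tokenization code used by Distily is not reproduced in this card:

```python
from datasets import load_dataset

# dataset_uri / dataset_subset / dataset_split from the hyperparameters below.
dataset = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

# dataset_sample_size=10000 with dataset_test_size=0.01 leaves ~9,900 training samples.
sample = dataset.shuffle(seed=42).select(range(10_000))
split = sample.train_test_split(test_size=0.01, seed=42)
train_ds, eval_ds = split["train"], split["test"]

print(len(train_ds), len(eval_ds))  # 9900 100
```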
Training Objective
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))
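In plain terms, the objective combines a KL-divergence loss on the logits (weight 1) with a raw MSE loss on attention activations (weight 5), with a "layer-2" mapper pairing student layers to teacher layers and an orthogonal projector aligning the feature spaces. A simplified sketch of the two terms; the mapper below is a naive stride-2 pairing and the orthogonal projector is omitted, so this is not Distily's exact implementation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, logits_weight=1.0, attn_weight=5.0):
    # KL divergence between student and teacher next-token distributions.
    logits_loss = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Raw MSE on attention maps; student layer i is paired with teacher layer 2*i
    # (a stride-2 interpretation of the "layer-2" mapper).
    attn_loss = torch.stack([
        F.mse_loss(s_attn, teacher_out.attentions[2 * i])
        for i, s_attn in enumerate(student_out.attentions)
    ]).mean()

    return logits_weight * logits_loss + attn_weight * attn_loss
```

Both forward passes would need to be run with `output_attentions=True` for the attention terms to be available.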
Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 1.0
- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))
- lr_scheduler: <torch.optim.lr_scheduler.LambdaLR object at 0x7f28b3dda890>
- student_model_name_or_path: None
- student_config_name_or_path: distilbert/distilgpt2
- student_model_config: None
- reinitialize_weights: None
- copy_teacher_modules: [('lm_head', False)]
- student_model_as_bitnet: False
- teacher_model_name_or_path: gpt2
- teacher_load_in_8bit: False
- teacher_load_in_4bit: False
- dataset_uri: wikimedia/wikipedia
- dataset_subset: 20231101.en
- dataset_split: train
- dataset_column_name: text
- dataset_sample_size: 10000
- dataset_test_size: 0.01
- gradient_accumulation_steps: 1
- weight_decay: 0.0
- max_grad_norm: 1.0
- warmup_ratio: 0
- warmup_steps: 0
- gradient_checkpointing: True
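Most of the generic optimization settings above map onto standard transformers TrainingArguments; a hedged sketch of that mapping (Distily's own training-argument class and the distillation-specific options such as the objective and teacher model are not reproduced here):

```python
from transformers import TrainingArguments

# Hypothetical mirror of the generic hyperparameters above using plain TrainingArguments.
training_args = TrainingArguments(
    output_dir="distily_validate_extra_grad_stats4",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="polynomial",
    num_train_epochs=1.0,
    gradient_accumulation_steps=1,
    weight_decay=0.0,
    max_grad_norm=1.0,
    warmup_ratio=0.0,
    warmup_steps=0,
    gradient_checkpointing=True,
    bf16=True,
)
```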
Framework Versions
- Distily 0.4.1
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.21.0
Model tree for distily/distily_validate_extra_grad_stats4
- Base model: distilbert/distilgpt2