Paper: [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099)
A V2 recreation with a few changes: `int8_mask` and `normalize` (the latter is enabled by default in mergekit). Expecting little to no change over V2.
This model was merged using the DARE TIES merge method, with unsloth/Meta-Llama-3.1-8B-Instruct as the base.
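For intuition, here is a minimal sketch of what DARE-TIES does with the `density` and `weight` parameters in the config below: each model's delta from the base is randomly pruned and rescaled (DARE), then a per-parameter sign is elected and only agreeing components are merged (TIES). The `dare_ties` helper is hypothetical and only illustrative; it is not mergekit's actual implementation.

```python
# Illustrative sketch of DARE-TIES on a single weight tensor.
# NOT mergekit's code; dare_ties is a hypothetical helper.
import torch

def dare_ties(base, deltas, weights, density=0.8, seed=42):
    """Merge task vectors (fine-tuned minus base weights) into `base`."""
    torch.manual_seed(seed)  # analogous to random_seed in the config
    pruned = []
    for delta in deltas:
        # DARE: randomly keep a `density` fraction of each delta,
        # then rescale the survivors by 1/density
        keep = (torch.rand_like(delta) < density).float()
        pruned.append(delta * keep / density)
    # TIES: elect a per-parameter sign from the weighted deltas...
    elected = torch.sign(sum(w * d for w, d in zip(weights, pruned)))
    # ...and merge only the components that agree with the elected sign
    total = torch.zeros_like(base)
    weight_sum = torch.zeros_like(base)
    for w, d in zip(weights, pruned):
        agree = (torch.sign(d) == elected).float()
        total += w * d * agree
        weight_sum += w * agree
    # normalize: divide by the summed weights that contributed,
    # as the `normalize: 1.0` option does
    return base + total / weight_sum.clamp(min=1e-8)

# Toy usage with three fake 4-parameter "models"
base = torch.zeros(4)
deltas = [torch.randn(4) for _ in range(3)]
merged = dare_ties(base, deltas, weights=[0.25, 0.33, 0.42])
```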
The following models were included in the merge:

* akjindal53244/Llama-3.1-Storm-8B
* arcee-ai/Llama-3.1-SuperNova-Lite
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 42.0
slices:
- sources:
  - layer_range: [0, 32]
    model: akjindal53244/Llama-3.1-Storm-8B
    parameters:
      density: 0.8
      weight: 0.25
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.8
      weight: 0.33
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.8
      weight: 0.42
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B-Instruct
tokenizer_source: union
```
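To reproduce a merge like this, the config can be saved (e.g. as `config.yaml`, a placeholder name) and run with the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged-model`) or through mergekit's Python API. The sketch below assumes the `run_merge` entry point shown in mergekit's README; the paths are placeholders.

```python
# Sketch of running the merge via mergekit's Python API, assuming the
# run_merge entry point documented in mergekit's README.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./merged-model",        # output directory for the merged weights
    options=MergeOptions(
        cuda=False,          # set True to run the merge on GPU
        lazy_unpickle=True,  # lower memory use when loading checkpoints
    ),
)
```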
Detailed results can be found here! Summarized results can be found here!

| Metric | Value (%) |
|---|---|
| Average | 30.19 |
| IFEval (0-shot) | 77.07 |
| BBH (3-shot) | 32.70 |
| MATH Lvl 5 (4-shot) | 20.09 |
| GPQA (0-shot) | 9.96 |
| MuSR (0-shot) | 9.09 |
| MMLU-PRO (5-shot) | 32.26 |
Change from V2:

| Metric | Change (pp) |
|---|---|
| Average | +0.12 |
| IFEval (0-shot) | -3.22 |
| BBH (3-shot) | +1.09 |
| MATH Lvl 5 (4-shot) | -1.06 |
| GPQA (0-shot) | +3.02 |
| MuSR (0-shot) | +0.85 |
| MMLU-PRO (5-shot) | +0.08 |