This is the first version of the Llama-3 upscale. Version 2 is now out and does not have any of the issues that this version has. Please use version 2 instead, linked below:


Llama-3-13B-Instruct

Thank you to Meta for the weights of Meta-Llama-3-8B-Instruct.


This is an upscaling of the Meta-Llama-3-8B-Instruct model using techniques created for Mistral-Evolved-11b-v0.1. The model has been upscaled from 8B parameters to 13B parameters without any continued pretraining or fine-tuning.
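The card does not publish the exact merge recipe, so here is a minimal sketch of what depth upscaling by decoder-layer duplication can look like with `transformers`. The layer ranges, the simple duplicate-and-insert strategy, and the output path are illustrative assumptions, not the actual procedure used for this model.

```python
import copy

import torch
from transformers import AutoModelForCausalLM

# Load the 32-layer Llama-3-8B-Instruct base in fp16.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,
)

layers = model.model.layers  # ModuleList of 32 decoder blocks

# Duplicate a middle span of blocks (range chosen only for illustration).
# 32 original + 24 copies = 56 blocks, roughly 13B parameters total.
duplicated = [copy.deepcopy(layers[i]) for i in range(4, 28)]
new_layers = list(layers[:28]) + duplicated + list(layers[28:])

model.model.layers = torch.nn.ModuleList(new_layers)
model.config.num_hidden_layers = len(new_layers)

# Save the upscaled checkpoint without any further training.
model.save_pretrained("llama-3-13b-upscaled")
```

Because the duplicated blocks keep their original weights, the resulting model can be used immediately, which matches the "no continued pretraining or fine-tuning" claim above.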

From testing, the model seems to function perfectly at fp16, but it has some issues when quantized to 4-bit with bitsandbytes.
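A minimal loading sketch with `transformers` that reflects that observation: fp16 as the working path, with the 4-bit bitsandbytes path left commented out. The repo id is the one shown for this card; the chat-template usage and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "rombodawg/rombos_Llama-3-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)

# fp16: reported to work well for this version.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
)

# 4-bit via bitsandbytes: reported to have issues with this version,
# shown here only for completeness.
# bnb_config = BitsAndBytesConfig(
#     load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
# )
# model = AutoModelForCausalLM.from_pretrained(
#     repo, quantization_config=bnb_config, device_map="auto"
# )

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```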

The model that was used to create this one is linked below:

https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
