Fine-tuning the LLaMA 3.2 model (3B parameters)
Completed: February 2026
Fine-tuning the LLaMA 3.2 model with QLoRA to predict a product's price from its description, covering data preparation and model evaluation against business objectives.
Key components
- Generating the fine-tuning datasets
- Evaluating the base model (without fine-tuning) to establish a baseline
- Setting the quantization parameters (quantization is applied only to the base model)
- Defining the LoRA config with key parameters such as alpha, dropout, r, and the target modules
- Configuring the SFT Trainer with key hyperparameters and the LoRA config
- Setting up Weights & Biases for live metrics collection and visualization
- Selecting the best-performing model from the different checkpoints
- Running inference on the QLoRA fine-tuned model
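Since quantization applies only to the base model, a typical QLoRA setup loads it in 4-bit while the adapters trained on top stay in higher precision. A minimal sketch with Hugging Face `transformers` and `bitsandbytes`; the model id and NF4/double-quantization settings are assumptions, not confirmed project values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE_MODEL = "meta-llama/Llama-3.2-3B"  # assumed model id

# 4-bit NF4 quantization for the frozen base model only; LoRA adapters
# added later are trained in the compute dtype (bfloat16 here).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=quant_config,
    device_map="auto",  # places layers on the available GPU (e.g. an A100)
)
```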
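The LoRA config and SFT Trainer steps fit together roughly as below, using `peft` and `trl`. All hyperparameter values are illustrative placeholders, and `base_model` / `train_dataset` stand for the outputs of the quantization and dataset steps listed above; `report_to="wandb"` is what streams live metrics to Weights & Biases:

```python
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Key LoRA parameters: rank r, scaling alpha, dropout, and which
# attention projections receive adapters. Values are illustrative.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

sft_config = SFTConfig(
    output_dir="price-predictor",          # checkpoints land here
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    save_steps=500,                        # periodic checkpoints to compare later
    report_to="wandb",                     # live loss/LR curves in Weights & Biases
)

trainer = SFTTrainer(
    model=base_model,          # the 4-bit quantized base model from the step above
    peft_config=lora_config,   # adapters are injected on top of the frozen base
    train_dataset=train_dataset,
    args=sft_config,
)
trainer.train()
```

Saving checkpoints at a fixed step interval is what makes the later "select the best-performing checkpoint" step possible: each one can be scored on a held-out set.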
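The dataset-generation step above can be sketched as turning each product record into a single training text that ends with the target price. The prompt template and field names here are illustrative, not the project's exact format:

```python
# Sketch of fine-tuning dataset generation: each item becomes a
# prompt/completion string mapping a product description to its price.
# Template and field names are assumptions for illustration.

def make_example(description: str, price: float) -> dict:
    prompt = (
        "How much does this cost to the nearest dollar?\n\n"
        f"{description}\n\nPrice is $"
    )
    completion = f"{round(price):d}.00"
    return {"text": prompt + completion}

items = [
    {"description": "Stainless steel 10-piece cookware set", "price": 189.99},
    {"description": "USB-C charging cable, 2 m", "price": 12.49},
]
dataset = [make_example(i["description"], i["price"]) for i in items]
```

Ending the prompt with "Price is $" means the model only has to complete a short numeric span, which keeps evaluation simple.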
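The baseline-evaluation step can be scored with a business-facing metric such as mean absolute error in dollars, applied first to the un-finetuned base model and later to each checkpoint. The prediction values below are made-up placeholders:

```python
# Baseline evaluation sketch: compare predicted prices against ground truth
# with mean absolute error (in dollars). The same scorer is reused for the
# base model (pre-finetuning baseline) and for each fine-tuned checkpoint.

def mean_absolute_error(predicted: list[float], actual: list[float]) -> float:
    assert len(predicted) == len(actual) and actual, "need matching, non-empty lists"
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Illustrative numbers only, not real model outputs.
baseline_preds = [120.0, 15.0, 89.0]
truths = [150.0, 12.5, 80.0]
baseline_mae = mean_absolute_error(baseline_preds, truths)
```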
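For the final inference step, a sketch of querying the fine-tuned model and parsing a price out of its completion. `predict_price` assumes a `model`/`tokenizer` pair where the best LoRA checkpoint has been attached to the base model (e.g. via `peft.PeftModel.from_pretrained`); the prompt template mirrors the assumed training format:

```python
import re

def extract_price(text: str):
    """Pull the first dollar amount out of generated text; None if absent."""
    m = re.search(r"\$?\s*(\d[\d,]*(?:\.\d+)?)", text)
    return float(m.group(1).replace(",", "")) if m else None

def predict_price(model, tokenizer, description: str):
    # model: base model with the best LoRA checkpoint attached, e.g.
    # PeftModel.from_pretrained(base_model, "price-predictor/checkpoint-500")
    prompt = (
        "How much does this cost to the nearest dollar?\n\n"
        f"{description}\n\nPrice is $"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
    # Decode only the newly generated tokens, then parse the number.
    reply = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return extract_price(reply)
```

Greedy decoding (`do_sample=False`) with a small `max_new_tokens` budget is a reasonable choice here, since the model only needs to emit a short numeric answer.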
Training performance: (chart omitted)
QLoRA · LLaMA 3.2 · SFT · PEFT · Transformers · Python · Quantization · Hyperparameter tuning · Weights & Biases · NVIDIA A100 GPU