Fine-Tuning Tutorials#
Use the tutorials in this section to gain a deeper understanding of how the NVIDIA NeMo Customizer microservice enables fine-tuning tasks.
Tip
Tutorials are organized by complexity and typically build on one another. The tutorials often reference a CUSTOMIZER_BASE_URL, whose value depends on the ingress configuration of your cluster. If you are using the minikube demo installation, its value is http://nemo.test; the demo installation's value for DEPLOYMENT_BASE_URL is http://nemo.test and its value for NIM_PROXY_BASE_URL is http://nim.test. Otherwise, consult your cluster administrator for the ingress values.
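For example, with the minikube demo installation you can set these values once and reuse them throughout the tutorials. The following is a minimal sketch in Python; the /v1/customization/configs endpoint used as a connectivity check is an assumption for illustration and may differ in your release.

```python
import requests

# Base URLs for the minikube demo installation (see the Tip above).
# Replace these with the ingress values from your cluster administrator.
CUSTOMIZER_BASE_URL = "http://nemo.test"
DEPLOYMENT_BASE_URL = "http://nemo.test"
NIM_PROXY_BASE_URL = "http://nim.test"

# Quick connectivity check: list the available customization configs.
# NOTE: this endpoint path is an assumption for this sketch; consult the
# Customizer API reference for the exact route in your deployment.
response = requests.get(f"{CUSTOMIZER_BASE_URL}/v1/customization/configs")
response.raise_for_status()
print(response.json())
```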
Getting Started#
Learn the fundamentals of NeMo Customizer configurations, model types, and how to choose the right approach for your project.
Dataset Preparation#
Learn how to format datasets for different model types.
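As a quick illustration of what that tutorial covers, customization datasets are commonly stored as newline-delimited JSON (JSONL). The field names below ("prompt" and "completion") are assumptions for this sketch; the tutorial documents the exact schema for each model and training type.

```python
import json

# Illustrative training record for a completion-style dataset. The field
# names ("prompt", "completion") are assumptions for this sketch; the
# dataset preparation tutorial documents the exact schema per model type.
record = {
    "prompt": "What is the capital of France?",
    "completion": "The capital of France is Paris.",
}

# Write the record as one line of a JSONL file.
with open("training.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```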
Customization Jobs#
Learn how to start a LoRA customization job using a custom dataset (see the request sketch after this list).
Learn how to start an SFT (supervised fine-tuning) customization job using a custom dataset.
Learn how to start a DPO (Direct Preference Optimization) customization job using preference data.
Learn how to start a Knowledge Distillation (KD) job using a teacher and student model.
Learn how to fine-tune embedding models using LoRA merged training for improved question-answering and retrieval tasks.
Learn how to import a private Hugging Face model and fine-tune it.
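These tutorials share a common pattern: a customization job is created by sending a job specification to the Customizer API that references a fine-tuning configuration and a dataset. The sketch below shows what a LoRA job request might look like; the endpoint path, field names, and the example config and dataset identifiers are illustrative assumptions, and the individual tutorials give the exact request bodies for LoRA, SFT, DPO, and Knowledge Distillation jobs.

```python
import requests

CUSTOMIZER_BASE_URL = "http://nemo.test"  # demo installation value from the Tip above

# Illustrative LoRA job specification. The field names and the example
# config/dataset identifiers are assumptions for this sketch; follow the
# tutorials above for the exact payload for each training type.
job_spec = {
    "config": "meta/llama-3.1-8b-instruct",                      # hypothetical config name
    "dataset": {"namespace": "default", "name": "my-dataset"},   # hypothetical dataset
    "hyperparameters": {
        "training_type": "sft",
        "finetuning_type": "lora",
        "epochs": 3,
        "batch_size": 8,
        "learning_rate": 1e-4,
        "lora": {"adapter_dim": 16},
    },
}

# The endpoint path is an assumption for illustration.
response = requests.post(f"{CUSTOMIZER_BASE_URL}/v1/customization/jobs", json=job_spec)
response.raise_for_status()
print("Created job:", response.json().get("id"))
```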
Monitoring & Optimization#
Learn how to check job metrics using MLflow or Weights & Biases (see the status-polling sketch after this list).
Learn how to optimize the tokens-per-GPU throughput for a LoRA customization job.
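Whichever metrics backend you use, it is often convenient to poll a job's status directly through the Customizer API while training runs. The sketch below assumes a status endpoint of the form /v1/customization/jobs/{id}/status and response fields such as status and percentage_done; treat these as illustrative assumptions and refer to the monitoring tutorial for the supported workflows.

```python
import time
import requests

CUSTOMIZER_BASE_URL = "http://nemo.test"   # demo installation value from the Tip above
job_id = "cust-xxxxxxxx"                   # hypothetical ID returned when the job was created

# Poll the job until it reaches a terminal state. The status endpoint and
# response fields are assumptions for this sketch; the monitoring tutorial
# covers MLflow and Weights & Biases in detail.
while True:
    resp = requests.get(f"{CUSTOMIZER_BASE_URL}/v1/customization/jobs/{job_id}/status")
    resp.raise_for_status()
    status = resp.json()
    print(status.get("status"), status.get("percentage_done"))
    if status.get("status") in ("completed", "failed", "cancelled"):
        break
    time.sleep(30)
```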