heidloff.net - Building is my Passion
Niklas Heidloff

Memory-efficient Fine-tuning with QLoRA

LoRA-based fine-tuning of Large Language Models freezes the original weights and only trains a small number of additional parameters, making training much more efficient. QLoRA goes one step further and re...
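The core LoRA idea mentioned above — freeze the pretrained weights and train only a small low-rank update — can be sketched in plain PyTorch. This is a minimal illustration, not the PEFT library's actual implementation; the class name, rank `r`, and scaling factor `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original weights
        # Low-rank factors: only A and B are trained
        self.A = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(r, base.out_features))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank update x @ A @ B
        return self.base(x) + (x @ self.A @ self.B) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} of {total} parameters")
```

With a rank of 8 on a 768x768 layer, only 2 x 768 x 8 = 12,288 of roughly 600k parameters are trainable, which is why LoRA reduces memory and compute so much; QLoRA additionally quantizes the frozen base weights.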

Preparing Data for Fine-tuning of Large Language Models

Fine-tuning large language models with instructions is a great technique to customize models efficiently. This post explains briefly how data can be turned into instructions. In my earlier post In...

Text Generation Inference for Foundation Models

Serving AI models is resource intensive. There are various model inference platforms that help operate these models as efficiently as possible. This post summarizes two platforms for classic ML a...

Model Distillation for Large Language Models

Model Distillation is an interesting concept for building small models that perform almost as well as larger models on specific tasks. This post describes the concept in general and how it can b...

Language Support for Large Language Models

Many of the leading Large Language Models currently support only a limited set of languages, especially open-source models and models built by researchers. This post describes some options for how to get these ...

Fine-tuning Models for Question Answering

Question Answering is one of the most interesting scenarios for Generative AI. While base models have often been trained with massive amounts of data, they have not always been fine-tuned for speci...

Hugging Face Transformers APIs

Hugging Face provides the Transformers library to load pretrained models and fine-tune different types of transformer-based models in a unique and easy way. This post gives a brief summary about its ...

Watsonx.ai Trial on the IBM Cloud

Watsonx.ai is IBM’s next generation enterprise studio for AI builders to train, validate, tune and deploy AI models including foundation models. This post describes briefly the available trial vers...

Python and PyTorch for AI Engineers

IT professionals who want to become AI engineers need to learn new technologies. This post summarizes the languages and frameworks that I’ve started to look into recently. As a developer I’ve lear...

Understanding In-Context Learning for LLMs

There are different ways to train and tune LLMs. This post summarizes some interesting findings from a research paper on whether prompts can change the behavior of models. There are several opt...

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.