heidloff.net - Building is my Passion
Niklas Heidloff

Synergizing Reasoning and Acting (ReAct) in LLMs

Large Language Models are extremely powerful, but they can only return data that existed when they were trained, and they cannot invoke APIs or business logic. The technique ReAct combines Chain-of...
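To make the idea concrete, below is a minimal sketch of a ReAct-style loop in Python. `call_llm` and `lookup_weather` are hypothetical stand-ins for a real model endpoint and a real business API; the post itself may use different tooling.

```python
# Minimal ReAct-style loop (illustrative sketch, not the post's exact code).
# call_llm and lookup_weather are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    raise NotImplementedError

def lookup_weather(city: str) -> str:
    """Hypothetical tool the model is allowed to invoke."""
    return f"Sunny, 21 C in {city}"

TOOLS = {"weather": lookup_weather}

def react(question: str, max_steps: int = 5) -> str:
    # The prompt interleaves reasoning ("Thought"), tool calls ("Action")
    # and tool results ("Observation") until a final answer is produced.
    prompt = (
        "Answer the question. Use the format:\n"
        "Thought: ...\nAction: <tool>[<input>]\nObservation: ...\n"
        "Final Answer: ...\n\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        completion = call_llm(prompt)
        prompt += completion
        if "Final Answer:" in completion:
            return completion.split("Final Answer:")[-1].strip()
        if "Action:" in completion:
            # e.g. "weather[Berlin]"
            action_line = completion.split("Action:")[-1].strip().splitlines()[0]
            tool, arg = action_line.split("[", 1)
            observation = TOOLS[tool.strip()](arg.rstrip("]").strip())
            prompt += f"\nObservation: {observation}\n"
    return "No answer found"
```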

Reinforcement Learning from Human Feedback (RLHF)

Fine-tuning of Large Language Models optimizes models for certain AI tasks and/or improves performance for smaller, less resource-intensive models. This post describes how to further improve mod...
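As a rough illustration of one building block of RLHF (not necessarily the exact approach in the post): a reward model scores candidate responses, and those scores are then used by an RL algorithm such as PPO to update the policy model. The reward model below is just one publicly available example.

```python
# Sketch of the reward-modeling step in RLHF: a sequence classification model
# scores candidate responses; the scores later drive policy optimization (e.g. PPO).
# The model name is only an example, not necessarily what the post uses.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

reward_model_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
tokenizer = AutoTokenizer.from_pretrained(reward_model_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_model_name)

prompt = "Explain fine-tuning in one sentence."
responses = [
    "Fine-tuning adapts a pretrained model to a specific task with additional training data.",
    "I don't know.",
]

# Higher score = response more likely to be preferred by human annotators.
for response in responses:
    inputs = tokenizer(prompt, response, return_tensors="pt")
    with torch.no_grad():
        score = reward_model(**inputs).logits[0].item()
    print(f"{score:.2f}  {response}")
```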

Memory-efficient Fine-tuning with QLoRA

LoRA-based fine-tuning of Large Language Models freezes the original weights and only trains a small number of parameters, making the training much more efficient. QLoRA goes one step further and re...
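The following sketch shows what a QLoRA-style setup can look like with Hugging Face transformers, bitsandbytes and peft: the base weights are loaded in 4-bit and frozen, and only small LoRA adapters are trained. Model name and hyperparameters are illustrative, not the post's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "tiiuae/falcon-7b"  # placeholder base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base weights to 4 bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization used by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    task_type="CAUSAL_LM",                  # low-rank adapters on top of frozen weights
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of all parameters
```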

Preparing Data for Fine-tuning of Large Language Models

Fine-tuning large language models with instructions is a great technique to customize models efficiently. This post explains briefly how data can be turned into instructions. In my earlier post In...
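As a small illustration of what "turning data into instructions" can mean, here is a hypothetical example using the common Alpaca-style prompt layout; the field names and template are assumptions, not the post's exact format.

```python
# Hypothetical example of mapping raw records onto instruction/response texts.

records = [
    {"question": "What is LoRA?", "answer": "A parameter-efficient fine-tuning technique."},
]

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def to_instruction(record: dict) -> str:
    # Map a raw question/answer record onto the instruction template.
    return PROMPT_TEMPLATE.format(instruction=record["question"], response=record["answer"])

training_texts = [to_instruction(r) for r in records]
print(training_texts[0])
```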

Text Generation Inference for Foundation Models

Serving AI models is resource-intensive. There are various model inference platforms that help operate these models as efficiently as possible. This post summarizes two platforms for classic ML a...
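For a flavor of what using such a platform looks like, the snippet below calls a Text Generation Inference (TGI) server over its REST API. It assumes a TGI container is already serving a model at localhost:8080; URL and parameters are illustrative.

```python
# Sketch: query a running Text Generation Inference server.

import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is model inference?",
        "parameters": {"max_new_tokens": 50, "temperature": 0.7},
    },
    timeout=60,
)
print(response.json()["generated_text"])
```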

Model Distillation for Large Language Models

Model Distillation is a very interesting concept for building small models that perform almost as well as larger models on specific tasks. This post describes the concept in general and how it can b...
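A conceptual sketch of the core idea, under the usual formulation: the student is trained to match the teacher's softened output distribution via a KL-divergence loss, optionally mixed with the normal task loss. The tensors below are random placeholders, not real model outputs.

```python
import torch
import torch.nn.functional as F

temperature = 2.0                                   # softens both distributions
batch, num_classes = 4, 10
teacher_logits = torch.randn(batch, num_classes)    # frozen large model outputs
student_logits = torch.randn(batch, num_classes, requires_grad=True)
labels = torch.randint(0, num_classes, (batch,))

soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
log_student = F.log_softmax(student_logits / temperature, dim=-1)

distill_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2
task_loss = F.cross_entropy(student_logits, labels)
loss = 0.5 * distill_loss + 0.5 * task_loss         # weighting is a free hyperparameter
loss.backward()
```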

Language Support for Large Language Models

Many of the leading Large Language Models currently support only a limited set of languages, especially open-source models and models built by researchers. This post describes some options for how to get these ...

Fine-tuning Models for Question Answering

Question Answering is one of the most interesting scenarios for Generative AI. While base models have often been trained with massive amounts of data, they have not always been fine-tuned for speci...

Hugging Face Transformers APIs

Hugging Face provides the Transformers library to load and fine-tune different types of pretrained transformer-based models in a unique and easy way. This post gives a brief summary of its ...
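As a quick taste of the library, the pipeline API hides tokenization, model loading and post-processing behind a single call. The model name below is just a commonly used small default, not necessarily the one used in the post.

```python
from transformers import pipeline

# One-liner inference: tokenizer, model and decoding are handled internally.
generator = pipeline("text-generation", model="gpt2")
result = generator("Fine-tuning a language model means", max_new_tokens=30)
print(result[0]["generated_text"])
```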

Watsonx.ai Trial on the IBM Cloud

Watsonx.ai is IBM’s next-generation enterprise studio for AI builders to train, validate, tune and deploy AI models, including foundation models. This post briefly describes the available trial vers...

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.