heidloff.net - Building is my Passion
Niklas Heidloff

Semantic Searches with Elasticsearch

Recent versions of Elasticsearch provide semantic search. This post summarizes how this new functionality can be utilized. Semantic searches allow finding relevant information even if ...
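As a conceptual sketch, a semantic search in Elasticsearch 8.x can be issued as a kNN query against a `dense_vector` field. The index and field names below (`articles`, `text_embedding`) are assumptions for illustration, not names from the post:

```python
# Sketch of a semantic (vector) search in Elasticsearch 8.x.
# Index name, field names, and the embedding are assumptions.

def build_knn_query(query_vector, k=5):
    """Build an Elasticsearch kNN search body for a dense_vector field."""
    return {
        "knn": {
            "field": "text_embedding",   # assumed dense_vector field
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 50,
        },
        "_source": ["title", "text"],
    }

# Against a running cluster (pip install elasticsearch):
# from elasticsearch import Elasticsearch
# es = Elasticsearch("http://localhost:9200")
# results = es.search(index="articles", body=build_knn_query(embedding))
```

The query vector would typically come from the same embedding model that was used at indexing time.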

Tokenizing Text for Vector Searches with Java

Vector-based searches allow finding semantically relevant information without the presence of keywords. Often vector-based search engines can only handle documents of limited length. This post descr...
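The usual workaround for length limits is to split documents into overlapping chunks before embedding them. A minimal sketch (whitespace tokenization is a simplification; real pipelines use the embedding model's own tokenizer):

```python
# Split a long document into overlapping, fixed-size chunks so each
# chunk fits a vector model's input limit. Whitespace tokenization is
# a stand-in for a real tokenizer.

def chunk_text(text, max_tokens=512, overlap=50):
    tokens = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.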

Enhancements of LLMs via Self-Reflections

One of the key challenges of LLMs is hallucination. Retrieval Augmented Generation (RAG) reduces hallucination but cannot eliminate it. This post summarizes a new concept to address this shortcomin...
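The self-reflection idea can be sketched as a generate-critique-revise loop. The `generate` and `critique` callables below stand in for real model invocations and are assumptions for illustration:

```python
# Toy self-reflection loop: generate an answer, have the model critique
# it, and regenerate with the feedback until the critique passes or a
# retry budget is exhausted.

def self_reflect(question, generate, critique, max_rounds=3):
    answer = generate(question, feedback=None)
    for _ in range(max_rounds):
        verdict = critique(question, answer)
        if verdict == "OK":
            return answer
        answer = generate(question, feedback=verdict)
    return answer
```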

Fine-tuning of LLMs with long Contexts via LongLoRA

The context windows of Large Language Models define how much input can be provided in prompts. Fine-tuning of LLMs with longer context windows is resource intensive and expensive. With the new Long...

New Decoder Foundation Models from IBM

IBM announced the general availability of the first models in the watsonx Granite model series which are decoder based Foundation Models trained with enterprise-focused datasets curated by IBM. IB...

Accessing Watsonx Models from LangChain

LangChain allows chaining and orchestrating LLM tasks for scenarios like Retrieval Augmented Generation and agents. Below is a snippet that shows how Watsonx.ai models can be accessed in LangChain...
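Since the excerpt's snippet is cut off, here is a hedged sketch using the `langchain-ibm` integration. The model id, endpoint region, and decoding parameters are examples, not values from the post:

```python
import os

# Example values — adjust to your watsonx.ai project.
MODEL_ID = "ibm/granite-13b-chat-v2"
PARAMS = {"decoding_method": "greedy", "max_new_tokens": 200}

def build_llm():
    # Requires `pip install langchain-ibm` plus watsonx.ai credentials
    # in the WATSONX_PROJECT_ID and WATSONX_APIKEY environment variables.
    from langchain_ibm import WatsonxLLM
    return WatsonxLLM(
        model_id=MODEL_ID,
        url="https://us-south.ml.cloud.ibm.com",
        project_id=os.environ["WATSONX_PROJECT_ID"],
        apikey=os.environ["WATSONX_APIKEY"],
        params=PARAMS,
    )

# With credentials in place:
# llm = build_llm()
# print(llm.invoke("What is Retrieval Augmented Generation?"))
```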

Synergizing Reasoning and Acting (ReAct) in LLMs

Large Language Models are extremely powerful, but they can only return data that existed when they were trained and they cannot invoke APIs and business logic. The technique ReAct combines Chain-of...
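A toy version of the ReAct loop can illustrate the pattern: the model emits `Action: tool[input]` lines, the loop executes the tool and feeds the observation back until a final answer appears. The `llm` callable and the action syntax below are illustrative assumptions:

```python
import re

# Minimal ReAct loop with a pluggable tool registry. The `llm` callable
# stands in for a real model; `eval` here is for the toy calculator only.

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # demo only

def react(question, llm, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        match = re.search(r"Action: (\w+)\[(.+)\]", step)
        if not match:                      # e.g. "Final Answer: ..."
            return step
        tool, arg = match.groups()
        transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return transcript
```

Each iteration appends the tool's observation to the transcript, so the next model call can reason over what the API actually returned.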

Reinforcement Learning from Human Feedback (RLHF)

Fine-tuning of Large Language Models optimizes models for certain AI tasks and/or improves performance for smaller and less resource intensive models. This post describes how to further improve mod...
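The reward model at the heart of RLHF is commonly trained with a pairwise (Bradley-Terry) loss: the score of the human-preferred response should exceed the score of the rejected one. A sketch with plain floats standing in for neural reward scores:

```python
import math

# Pairwise reward-model loss used in RLHF:
# -log(sigmoid(r_chosen - r_rejected)).
# Low when the preferred response scores higher, high otherwise.

def preference_loss(reward_chosen, reward_rejected):
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The trained reward model then supplies the reward signal for the policy-optimization step (e.g. PPO).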

Memory-efficient Fine-tuning with QLoRA

LoRA-based fine-tuning of Large Language Models freezes the original weights and only trains a small number of parameters making the training much more efficient. QLoRA goes one step further and re...
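The LoRA arithmetic can be shown in a few lines: the frozen weight W is combined with a low-rank delta B·A scaled by alpha/r, and B starts at zero so training begins from the unchanged base weights. (QLoRA additionally stores W in 4-bit NF4; that quantization step is omitted in this sketch.)

```python
# LoRA effective weight: W + (alpha/r) * B @ A, with plain nested
# lists instead of tensors. B is (rows x rank), A is (rank x cols).

def lora_effective_weight(W, A, B, alpha, r):
    scale = alpha / r
    rows, cols, rank = len(B), len(A[0]), len(A)
    delta = [[scale * sum(B[i][k] * A[k][j] for k in range(rank))
              for j in range(cols)] for i in range(rows)]
    return [[W[i][j] + delta[i][j] for j in range(cols)]
            for i in range(rows)]
```

Only A and B (a tiny fraction of the parameters) receive gradients; W stays frozen throughout.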

Preparing Data for Fine-tuning of Large Language Models

Fine-tuning large language models with instructions is a great technique to customize models efficiently. This post explains briefly how data can be turned into instructions. In my earlier post In...
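Turning raw data into instructions is largely a templating step. The prompt template below is an assumption for illustration; in practice it should match the format the target model was trained with:

```python
# Turn raw question/answer pairs into instruction-style training
# records. The template is illustrative, not from the post.

TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

def to_instruction_records(pairs):
    return [TEMPLATE.format(instruction=q, response=a) for q, a in pairs]
```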

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.