heidloff.net - Building is my Passion
Niklas Heidloff

Optimizing Generative AI for Question Answering

Transformer-based AI models can generate amazing answers to users’ questions. While the underlying Large Language Models are not retrained, the performance of Question Answering AI can be improved...

Introduction to Neural Information Retrieval

Large Language Models can improve search results significantly, since they don’t try to find exact word matches but the passages of text that best fit the questions. This post explains high level in...
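The core idea behind neural retrieval is to compare dense vector representations of questions and passages instead of exact keywords. A minimal, self-contained sketch of the similarity computation, using tiny hypothetical embedding vectors (real models produce hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional "embeddings" standing in for real model output
question = [0.9, 0.1, 0.3]
passages = {
    "passage A": [0.8, 0.2, 0.4],
    "passage B": [0.1, 0.9, 0.2],
}

# Rank passages by semantic similarity to the question
best = max(passages, key=lambda p: cosine_similarity(question, passages[p]))
print(best)  # passage A
```

In a real system the vectors come from an embedding model and are searched with an approximate-nearest-neighbor index rather than a linear scan.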

Integrating generative AI in Watson Assistant

Large Language Models can improve the user experience of virtual assistants like Watson Assistant by providing answers rather than lists of links. With Watson Assistant’s ‘Bring your own Search’ fe...

Generative AI Sample Code for Question Answering

As Large Language Models have been trained with massive amounts of data, they can provide impressively fluent answers. Unfortunately, the answers are not always correct. Passing in context to quest...
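Passing context to questions typically means assembling a grounded prompt that instructs the model to answer only from the supplied passages. A minimal sketch of that pattern; the helper name and instruction wording are illustrative, not taken from the post:

```python
def build_grounded_prompt(question, passages):
    """Assemble a prompt that grounds the model in retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What is Watson Assistant?",
    ["Watson Assistant is IBM's virtual assistant offering."],
)
print(prompt)
```

The resulting string is what gets sent to the LLM; constraining the model to the context is what reduces incorrect, made-up answers.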

Generative AI for Question Answering Scenarios

One of the most impressive features of Large Language Models (LLMs) is the ability to answer questions in fluent language. This post describes some of the underlying techniques and how to avoid ha...

Understanding Foundation Models

Foundation Models are a game changer and a disruptor for many industries. Especially since ChatGPT has been released, people realize a new era of AI has begun. In this blog I share my experience lea...

Introduction to Multi-task Prompt Tuning

Training Foundation Models is expensive. Techniques like Prompt Engineering address this by freezing the models and providing context in prompts to optimize results at the expense of losing perform...

The Importance of Prompt Engineering

Foundation Models are the foundation for different AI downstream tasks. To leverage these generic models for specific tasks, prompt engineering is a technique to optimize the results without having...

Running the Large Language Model FLAN-T5 locally

While there are several playgrounds to try Foundation Models, sometimes I prefer running everything locally during development and for early trial and error experimentations. This post explains how...

Introduction to Prompt Tuning

Training foundation models and even fine-tuning models for custom domains is expensive and requires lots of resources. To avoid changing the pretrained models, a new more resource-efficient techniq...

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.