Training and fine-tuning models take time. During this process it’s important to see progress. This post describes how to visualize output in TensorBoard running locally. The Hugging Face Trainer...
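The excerpt is cut off in this listing. As a rough illustration of the idea, the following sketch turns on TensorBoard logging for a Hugging Face Trainer run; the model, dataset, and hyperparameters are placeholders rather than the ones used in the post.

```python
# Minimal sketch: write Hugging Face Trainer metrics to a local TensorBoard log directory.
# Model, dataset, and hyperparameters are placeholders, not from the original post.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, padding="max_length"),
                      batched=True)

args = TrainingArguments(
    output_dir="out",
    logging_dir="logs",        # TensorBoard reads its event files from this directory
    logging_steps=10,          # log loss and learning rate every 10 steps
    report_to="tensorboard",   # enable the TensorBoard callback
    num_train_epochs=1,
)

Trainer(model=model, args=args, train_dataset=dataset).train()
# Watch progress locally with: tensorboard --logdir logs
```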
Metrics to evaluate Search Results
Via Retrieval Augmented Generation, search results can be passed as context into prompts for Large Language Models to help the models generate good responses. Passing the right search results ...
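The concrete metrics are not visible in this excerpt. As an illustration of what evaluating search results can look like, the sketch below computes precision@k and recall@k, two common retrieval metrics that may differ from the ones discussed in the post.

```python
# Hedged sketch: standard retrieval metrics; the post may use different or additional metrics.
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for doc in retrieved[:k] if doc in relevant) / k

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of all relevant documents that appear in the top-k results."""
    return sum(1 for doc in retrieved[:k] if doc in relevant) / max(len(relevant), 1)

retrieved = ["doc3", "doc1", "doc7"]   # ranked search results (placeholder ids)
relevant = {"doc1", "doc2"}            # ground-truth relevant documents (placeholder ids)
print(precision_at_k(retrieved, relevant, 3))  # 0.33...
print(recall_at_k(retrieved, relevant, 3))     # 0.5
```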
Hybrid and Vector Searches with Elasticsearch
Semantic searches allow finding relevant information even if there are no classic keyword matches. Recent research has shown that combinations of semantic and classic keyword searches often outperform ...
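As a hedged sketch of a hybrid request, the snippet below combines a classic BM25 match query with an approximate kNN vector search in a single Elasticsearch search call; the index name, field names, query vector, and boosts are placeholders.

```python
# Hypothetical hybrid search: keyword (BM25) and vector (kNN) scores are combined per hit.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # local cluster, credentials omitted
query_vector = [0.12, -0.03, 0.57]            # placeholder embedding of the query text

response = es.search(
    index="documents",                        # placeholder index with a dense_vector field
    query={"match": {"text": {"query": "reset password", "boost": 0.5}}},  # keyword part
    knn={                                     # vector part
        "field": "embedding",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 50,
        "boost": 0.5,
    },
    size=10,
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text", "")[:80])
```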
Semantic Searches with Elasticsearch
In its most recent versions, Elasticsearch provides semantic searches. This post summarizes how this new functionality can be utilized. Semantic searches allow finding relevant information even if ...
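As a rough sketch, assuming an ELSER model has been deployed and documents were ingested through an inference pipeline that writes sparse tokens into an ml.tokens field (index and field names are placeholders), a semantic query can look like this:

```python
# Hypothetical semantic search via ELSER; model id, index, and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")    # local cluster, credentials omitted

response = es.search(
    index="documents",                         # placeholder index
    query={
        "text_expansion": {
            "ml.tokens": {                     # field populated by the inference pipeline
                "model_id": ".elser_model_1",  # id of the deployed ELSER model
                "model_text": "How can I reset my password?",
            }
        }
    },
    size=5,
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("text", "")[:80])
```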
Tokenizing Text for Vector Searches with Java
Vector-based searches allow finding semantically relevant information without the presence of keywords. Often vector-based search engines can only handle documents with limited lengths. This post describes ...
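The post itself works in Java; as a rough Python equivalent of the idea, the sketch below splits a long text into chunks whose token counts fit a typical embedding model's limit. The tokenizer name and the chunk size are placeholders.

```python
# Hypothetical chunking by token count (Python equivalent of the Java approach in the post).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def chunk_text(text: str, max_tokens: int = 256) -> list[str]:
    """Split text into pieces of at most max_tokens tokens of the given tokenizer."""
    token_ids = tokenizer.encode(text, add_special_tokens=False)
    return [tokenizer.decode(token_ids[start:start + max_tokens])
            for start in range(0, len(token_ids), max_tokens)]

for chunk in chunk_text("A very long document about vector searches. " * 200):
    print(len(tokenizer.encode(chunk, add_special_tokens=False)))
```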
Enhancements of LLMs via Self-Reflections
One of the key challenges of LLMs is hallucination. Retrieval Augmented Generation (RAG) reduces hallucination but cannot eliminate it. This post summarizes a new concept to address this shortcoming...
Fine-tuning of LLMs with long Contexts via LongLoRA
The context windows of Large Language Models define how much input can be provided in prompts. Fine-tuning of LLMs with longer context windows is resource-intensive and expensive. With the new LongLoRA ...
New Decoder Foundation Models from IBM
IBM announced the general availability of the first models in the watsonx Granite model series, which are decoder-based Foundation Models trained with enterprise-focused datasets curated by IBM. ...
Accessing Watsonx Models from LangChain
LangChain allows chaining and orchestrating LLM tasks for scenarios like Retrieval Augmented Generation and agents. Below is a snippet that shows how Watsonx.ai models can be accessed in LangChain...
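Since the snippet itself is cut off in this listing, here is a minimal sketch assuming the langchain_ibm integration (the post may use a different package); the model id, project id, and API key are placeholders.

```python
# Hypothetical sketch using langchain_ibm; credentials and model id are placeholders.
from langchain_ibm import WatsonxLLM

llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",     # placeholder watsonx.ai model id
    url="https://us-south.ml.cloud.ibm.com",    # watsonx.ai endpoint
    project_id="<your project id>",             # placeholder
    apikey="<your IBM Cloud API key>",          # placeholder
    params={"decoding_method": "greedy", "max_new_tokens": 200},
)

# The LLM can be used directly or plugged into chains and agents.
print(llm.invoke("What is Retrieval Augmented Generation?"))
```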
Synergizing Reasoning and Acting (ReAct) in LLMs
Large Language Models are extremely powerful, but they can only return data that existed when they were trained, and they cannot invoke APIs and business logic. The technique ReAct combines Chain-of-Thought ...
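As an illustration of the pattern rather than the exact implementation from the post, the sketch below interleaves model reasoning ("Thought") with a tool call ("Action") whose result is fed back as an "Observation"; call_llm and the weather tool are placeholders.

```python
# Hypothetical ReAct loop; call_llm and lookup_weather are placeholders, not real APIs.
def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion API call."""
    raise NotImplementedError("plug in a real LLM call here")

def lookup_weather(city: str) -> str:
    """Placeholder tool; in practice this would invoke a real API."""
    return f"22 degrees Celsius and sunny in {city}"

PROMPT = """Answer the question. You can use the tool weather(city).
Use this format:
Thought: your reasoning
Action: weather(<city>)
Observation: tool result
Final Answer: the answer

Question: {question}
"""

def react(question: str, max_steps: int = 5) -> str:
    transcript = PROMPT.format(question=question)
    for _ in range(max_steps):
        step = call_llm(transcript)              # model emits Thought + Action or Final Answer
        transcript += step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        if "Action: weather(" in step:
            city = step.split("Action: weather(")[-1].split(")")[0]
            transcript += f"\nObservation: {lookup_weather(city)}\n"
    return "no answer within the step limit"
```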