heidloff.net - Building is my Passion
Niklas Heidloff

Code Execution with the Bee Agent Framework

LLM Agents and Function Calling are powerful techniques for building modern AI applications. IBM Research open-sourced the Bee Agent Framework. This post describes how agents can generate and execute code with this framework.

Simple Bee Agent Framework Example

LLM Agents and Function Calling are powerful techniques for building modern AI applications. This post describes a simple example of how to use agents and tools with LLM models on watsonx.ai via the new Bee Agent Framework.

Multi Agents and Function Calling with watsonx.ai

LLM Agents and Function Calling are powerful techniques for building modern AI applications. This post describes a sample that uses multiple crewAI agents, custom Python functions and LLM models on watsonx.ai.

Agents and Function Calling with watsonx.ai

LLM Agents and Function Calling are powerful techniques for building modern AI applications. This post describes a sample that uses LangChain agents, custom Python functions and LLM models on watsonx.ai.

Custom LLM Metrics in watsonx.governance

watsonx.governance is IBM’s AI platform to govern generative AI models built on any platform and deployed in the cloud or on-premises. This post describes how to collect metrics from LLMs running o...

Introduction to watsonx.governance

When building Generative AI pilots, MVPs, PoCs, etc., the focus is often on figuring out whether AI works and adds value in general. Running AI applications in production requires additional capabilities like governance.

OpenShift AI Platform based on Open Source

AI is evolving fast, and new frameworks appear frequently. The open-source project Open Data Hub (ODH) brings several of these modern frameworks together. Red Hat uses ODH as the upstream project for OpenShift AI.

Running LLMs locally via Podman Desktop

Podman Desktop is a great open-source alternative to commercial offerings for running containers locally. With the new Podman AI Lab extension, Large Language Models can be tested locally via te...

25 Years at IBM

Today is my 25th anniversary at IBM. Time flies when you’re having fun. I have the pleasure of working with so many nice and smart people. Thank you to my colleagues, to my managers, to my mentors, c...

Fine-tuning LLMs with Apple MLX locally

MLX is a framework for machine learning on Apple silicon from Apple Research. This post describes how to fine-tune a 7B LLM locally in less than 10 minutes on a MacBook Pro M3. MLX is designe...

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.