heidloff.net - Building is my Passion
Niklas Heidloff

Agents and Function Calling with watsonx.ai

LLM Agents and Function Calling are powerful techniques for building modern AI applications. This post describes a sample showing how to use LangChain agents, custom Python functions and LLM models on watsonx....

Custom LLM Metrics in watsonx.governance

watsonx.governance is IBM’s AI platform to govern generative AI models built on any platform and deployed in the cloud or on-premises. This post describes how to collect metrics from LLMs running o...

Introduction to watsonx.governance

When building Generative AI pilots, MVPs, PoCs, etc., the focus is often on figuring out whether AI works and adds value in general. Running AI applications in production requires additional capabilities l...

OpenShift AI Platform based on Open Source

AI is evolving fast, and new frameworks appear frequently. The open-source project Open Data Hub (ODH) brings several of these modern frameworks together. Red Hat uses ODH as upstream proje...

Running LLMs locally via Podman Desktop

Podman Desktop is a great open-source alternative to commercial offerings for running containers locally. With the new Podman AI Lab extension, Large Language Models can be tested locally via te...

25 Years at IBM

Today is my 25th anniversary at IBM. Time flies when you’re having fun. I have the pleasure of working with so many nice and smart people. Thank you to my colleagues, to my managers, to my mentors, c...

Fine-tuning LLMs with Apple MLX locally

MLX is a machine-learning framework for Apple silicon from Apple Research. This post describes how to fine-tune a 7B LLM locally in less than 10 minutes on a MacBook Pro M3. MLX is designe...

How to stay up to Date with AI News

Recently, several people have asked me how I follow AI news. Below are some great resources. YouTube: I like watching videos during my lunch break workouts. I can highly recommend the following cha...

Fine-tuning LLMs locally with Apple Silicon

With recent MacBook Pro machines and frameworks like MLX and llama.cpp, fine-tuning of Large Language Models can be done on local GPUs. This post describes how to use InstructLab, which provides an...

Running fine-tuned LLM Models on watsonx.ai

Watsonx.ai is IBM’s AI platform built for business. It is provided as SaaS and as software that can be deployed on multiple clouds and on-premises. This post describes how to deploy custom fine-tu...

Disclaimer
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.