Fine-tuning of Large Language Models allows customizing AI models for specific tasks and data. While fine-tuning used to require a lot of compute, technologies like InstructLab, QLoRA and lam...
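To illustrate how lightweight this has become, below is a minimal QLoRA-style sketch using Hugging Face transformers, bitsandbytes and peft: the base model is loaded in 4-bit and only small LoRA adapter weights are made trainable. The model name, LoRA hyperparameters and target modules are placeholders, not taken from the post.

```python
# Minimal QLoRA sketch: 4-bit base model + small trainable LoRA adapters.
# Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "ibm-granite/granite-3.1-2b-instruct"  # placeholder base model

# Load the frozen base model in 4-bit so it fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach LoRA adapters: only a tiny fraction of the weights is trained.
model = prepare_model_for_kbit_training(model)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Training itself can then be run with a standard Trainer or trl's SFTTrainer.
```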
watsonx Platform Demos
watsonx is IBM’s AI & data platform that’s built for business. It supports the complete AI lifecycle, including Large Language Models, developer tools, governance and more. This post describes a...
Deploying Agentic Applications on watsonx.ai
In watsonx.ai, custom Python code can be deployed and accessed via REST APIs. This allows deploying agentic applications, models and more. This post describes the new feature “AI Service” in watsonx...
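As a rough illustration of the REST access, here is a hedged sketch of invoking a deployed AI Service from Python. The endpoint path, version parameter and payload shape are assumptions based on the watsonx.ai deployment APIs and on whatever input your deployed code expects, so verify them against the official documentation.

```python
# Hypothetical sketch: calling a deployed watsonx.ai AI Service over REST.
# Endpoint path, version parameter and payload shape are assumptions; the
# payload is defined by the custom Python code you deployed.
import os
import requests

API_KEY = os.environ["IBM_CLOUD_API_KEY"]
DEPLOYMENT_ID = os.environ["DEPLOYMENT_ID"]

# Exchange the IBM Cloud API key for a bearer token.
token = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
).json()["access_token"]

# Invoke the deployed AI Service (assumed endpoint shape, us-south region).
response = requests.post(
    f"https://us-south.ml.cloud.ibm.com/ml/v4/deployments/{DEPLOYMENT_ID}/ai_service?version=2021-05-01",
    headers={"Authorization": f"Bearer {token}"},
    json={"question": "What is watsonx.ai?"},
)
print(response.json())
```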
Structured Output of Large Language Models
The first Large Language Models only returned plain text. Later models learned how to return JSON, which is important for agents and Function Calling. This post summarizes how modern models can even...
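To make this concrete, here is a minimal sketch using an OpenAI-compatible chat API with a JSON schema as response format. The model name and schema are illustrative assumptions, and not every provider or model supports this parameter.

```python
# Minimal structured-output sketch against an OpenAI-compatible API.
# Model name and schema are placeholders; json_schema support varies by provider.
from openai import OpenAI

client = OpenAI()

schema = {
    "name": "person",
    "schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Extract the person: 'Anna is 42 years old.'"}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(response.choices[0].message.content)  # JSON conforming to the schema
```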
Observability for Agents via the Bee Agent Framework
The Bee Agent Framework is an open-source project for building, deploying, and serving powerful multi-agent workflows at scale. One of its strengths is observability. While other frameworks require...
Key Concepts of DeepSeek-R1
The Chinese startup DeepSeek open-sourced their reasoning model DeepSeek-R1 and created several smaller distilled versions. This post summarizes some of the key concepts of how the new models have bee...
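As a small illustration of the distilled versions, the sketch below loads one of them with Hugging Face transformers and asks a simple reasoning question. The model id is one of the published distills; the prompt and generation settings are placeholders.

```python
# Illustrative sketch: running one of the distilled DeepSeek-R1 models locally.
# Prompt and generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The distilled reasoning models emit their chain of thought in <think>...</think>
# tags before the final answer.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```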
Getting started with Llama Stack
Llama Stack is an open-source effort from Meta that aims to standardize the core building blocks for AI application development. This post describes how to get started with the stack running on des...
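A minimal sketch of talking to a locally running Llama Stack server via the llama-stack-client Python SDK is shown below. The port, model id and method names follow the documentation as I recall them and may differ across versions, so treat them as assumptions.

```python
# Minimal sketch, assuming a Llama Stack server running locally and the
# llama-stack-client Python SDK; port, model id and method names may differ by version.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Hello, what can Llama Stack do?"}],
)
print(response.completion_message.content)
```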
Developing Agents with DSPy
DSPy is a Python-based open-source framework from Stanford University. Developers can write code to build compound systems such as agents. DSPy is the framework for programming—rather than...
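To show the programming-centric style, here is a small sketch of a DSPy ReAct agent with a single tool. The model name and the tool are placeholders, not taken from the post.

```python
# Small DSPy sketch: a ReAct agent with one tool (model and tool are placeholders).
import dspy

# Configure the language model that DSPy programs run against.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

# Declare what the program should do (a signature), not how to prompt for it.
agent = dspy.ReAct("question -> answer", tools=[add])

result = agent(question="What is 21.5 plus 20.7?")
print(result.answer)
```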
Building Skills for watsonx Assistant via Code
watsonx Assistant and watsonx Orchestrate are great tools for low-code and no-code developers to build conversational experiences. Additionally, they provide extensibility options for pro-code deve...
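For the pro-code path, custom extensions call REST APIs that are described by an OpenAPI specification. The sketch below is a hypothetical Flask backend that such an extension could invoke; the route and field names are made up for illustration.

```python
# Hypothetical Flask backend that a watsonx Assistant custom extension could call.
# Route and field names are made up; the extension is configured from an OpenAPI
# specification describing this API.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/orders/<order_id>")
def get_order_status(order_id: str):
    # In a real service this would look up the order in a database.
    return jsonify({"order_id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run(port=8080)
```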
Risk Detection via Granite Guardian Models
Granite Guardian models are specialized language models in the Granite family that can detect harms and risks in generative AI systems. This post describes how to get started using them. Recently ...
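As a getting-started sketch, the Granite Guardian models can be run with Hugging Face transformers. The model id and the guardian_config/risk_name parameters below follow the published model cards as far as I recall and should be treated as assumptions; check the card of the model you use.

```python
# Hedged sketch: checking a user prompt for harm with a Granite Guardian model.
# The guardian_config / risk_name parameters are assumptions based on the
# published model cards; verify against the card you use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-guardian-3.0-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How can I hurt someone and get away with it?"}]

# The chat template renders a risk-detection prompt for the configured risk.
input_ids = tokenizer.apply_chat_template(
    messages,
    guardian_config={"risk_name": "harm"},
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=20)

# The model answers with a Yes/No style label indicating whether the risk is present.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```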