Watsonx.ai Agent Lab is a low-code tool for building and deploying agents. It can also be used by pro-code developers to generate code for agentic applications, which can then be extended and optimized. ...
Getting started with watsonx.ai Agent Lab
IBM launched the beta release of Agent Lab, a low-code tool for building and deploying agents on watsonx.ai. This post describes a simple example. Here are the official resources: Documentatio...
Deploying Agentic Applications on watsonx.ai
In watsonx.ai, custom Python code can be deployed and accessed via REST APIs. This allows deploying agentic applications, models, and more. This post describes the new feature “AI Service” in watsonx...
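The deployment pattern behind this can be sketched with a nested function that returns a scoring callable. This is a minimal, simplified illustration: the payload shape follows the common `input_data`/`values` scoring format, but real AI Services add model inference, credentials, and configuration, so treat the names here as assumptions rather than the exact API.

```python
def deployable_function():
    """Sketch of the deployable-function pattern: the outer function is
    deployed once; the inner score() handles each REST request."""
    def score(payload):
        # Assumed simplified payload: {"input_data": [{"values": [...]}]}
        values = payload["input_data"][0]["values"]
        # Echo the inputs upper-cased in place of real model inference.
        return {"predictions": [{"values": [v.upper() for v in values]}]}
    return score

# Local smoke test of the scoring callable before deployment.
score = deployable_function()
response = score({"input_data": [{"values": ["hello watsonx"]}]})
print(response)
```

Locally invoking the returned `score` function, as above, is a quick way to validate the logic before pushing it behind a REST endpoint.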
Structured Output of Large Language Models
The first Large Language Models only returned plain text. Later models learned how to return JSON which is important for agents and Function Calling. This post summarizes how modern models can even...
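Why JSON output matters for agents can be shown with a tiny dispatch sketch: when a model returns a structured tool call instead of plain text, the application can parse and execute it directly. The response shape and the `get_weather` tool below are hypothetical, chosen only to illustrate the idea.

```python
import json

def get_weather(city: str) -> str:
    # Hypothetical tool the agent can invoke.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Hypothetical structured model response: a JSON tool call that the
# application can parse deterministically, unlike free-form text.
model_response = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

call = json.loads(model_response)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)
```

The point is that structured output turns the model's reply into data the program can act on, which is the foundation of function calling.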
Observability for Agents via the Bee Agent Framework
The Bee Agent Framework is an open-source project for building, deploying, and serving powerful multi-agent workflows at scale. One of its strengths is observability. While other frameworks require...
Key Concepts of DeepSeek-R1
The Chinese startup DeepSeek open-sourced their reasoning model DeepSeek-R1 and created several smaller distilled versions. This post summarizes some of the key concepts of how the new models have bee...
Getting started with Llama Stack
Llama Stack is an open-source effort from Meta that aims to standardize the core building blocks for AI application development. This post describes how to get started with the stack running on des...
Developing Agents with DSPy
DSPy is a Python-based open-source framework from Stanford University. Developers can write code to build compound systems, for example agents. DSPy is the framework for programming—rather than...
Building Skills for watsonx Assistant via Code
Watsonx Assistant and watsonx Orchestrate are great tools for low-code and no-code developers to build conversational experiences. Additionally, they provide extensibility options for pro-code deve...
Risk Detection via Granite Guardian Models
Granite Guardian models are specialized language models in the Granite family that can detect harms and risks in generative AI systems. This post describes how to get started using them. Recently ...