IBM provides Watson NLP (Natural Language Processing), Watson Speech To Text, and Watson Text To Speech as containers that can be embedded in cloud-native applications. Quite a bit of information is available about these technologies. This post lists links to documentation, tutorials, test environments, and more.
Overview
- IBM’s Embeddable AI
- IBM’s announcement regarding its embeddable AI software portfolio
- Rob Thomas on Accelerating AI Adoption with Ecosystem Partners
- IBM Digital Self-Serve Co-Create Experience for Embeddable AI
- TechZone: Embeddable AI
- IBM Developer: Watson Libraries
Natural Language Processing
Overview and Documentation
Development
- Running and Deploying IBM Watson NLP Containers
- Running IBM Watson NLP locally in Containers
- Running IBM Watson NLP in Minikube
- Understanding IBM Watson Containers
- Deploying Watson NLP to IBM Code Engine
- Watson Embedded AI Runtime Client Libraries
- Embed Model Builder (init Containers)
- Watson NLP Python Client (see the sketch after this list)
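The development posts above show how to start the Watson NLP runtime locally with one or more pretrained model images and then query it over REST or gRPC. As a rough illustration, here is a minimal Python sketch that sends a syntax request to a locally running runtime; the port, REST path, header name, parser values, and model id are assumptions for a typical stock-model setup and may differ in your environment, so check the linked posts for the exact values.

```python
# Minimal sketch: query a locally running Watson NLP runtime over REST.
# Port, path, header name, parser values, and model id are assumptions.
import requests

RUNTIME_URL = "http://localhost:8080/v1/watson.runtime.nlp.v1/NlpService/SyntaxPredict"
MODEL_ID = "syntax_izumo_lang_en_stock"  # example pretrained model id

payload = {
    "rawDocument": {"text": "IBM Watson NLP can be embedded in cloud-native applications."},
    "parsers": ["TOKEN", "LEMMA", "PART_OF_SPEECH"],
}

response = requests.post(
    RUNTIME_URL,
    json=payload,
    # the runtime selects the model based on this metadata header
    headers={"grpc-metadata-mm-model-id": MODEL_ID},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```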
Operations
- Building custom IBM Watson NLP Images
- Automation for embedded IBM Watson Deployments
- Setting up OpenShift and Applications in one Hour
- Repo: Automation for Watson NLP Deployments
- Deploying TechZone Toolkit Modules on existing Clusters
- Serving Watson NLP on Kubernetes with KServe ModelMesh (see the sketch after this list)
- Repo: Samples
- Serve Models on Amazon ECS with AWS Fargate
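When the runtime is served behind KServe ModelMesh, as covered in the entries above, client code typically talks gRPC to the ModelMesh serving endpoint and selects the model via `mm-model-id` metadata. The sketch below uses the generated stubs from the watson-nlp-runtime-client package; the module names, endpoint, port, and model id are assumptions from memory and may differ from the linked samples.

```python
# Rough sketch: call a Watson NLP model served behind KServe ModelMesh via gRPC.
# Endpoint, port, stub/module names, and model id are assumptions; see the linked samples.
import grpc
from watson_nlp_runtime_client import (
    common_service_pb2,
    common_service_pb2_grpc,
    syntax_types_pb2,
)

# gRPC endpoint of the ModelMesh serving service (e.g. port-forwarded from the cluster)
channel = grpc.insecure_channel("localhost:8033")
stub = common_service_pb2_grpc.CommonServiceStub(channel)

request = common_service_pb2.EmotionRequest(
    raw_document=syntax_types_pb2.RawDocument(
        text="Watson NLP runs nicely on Kubernetes."
    )
)

# ModelMesh routes the request to the right model based on this metadata entry
response = stub.EmotionPredict(
    request, metadata=[("mm-model-id", "emotion_aggregated-workflow_lang_en_stock")]
)
print(response)
```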
Training
- Training IBM Watson NLP Models
- Watson Studio Environment for IBMers and Partners (see the sketch after this list)
- Text Classification
- Repo: Samples
- Sentiment and Emotion Analysis
- Topic Modeling
- Entities and Keywords Extraction
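The training tutorials above work with the watson_nlp Python library in Watson Studio notebooks, where stock models are loaded, run, and used as input for custom training. As a rough illustration of the library's basic usage (not the training step itself), here is a sketch; the model names and call signatures are assumptions from memory, and the linked tutorials show the exact API.

```python
# Rough sketch of using the watson_nlp library inside a Watson Studio notebook.
# Model names and call signatures are assumptions; see the linked tutorials for the exact API.
import watson_nlp

# Load pretrained stock models available in the Watson NLP environment
syntax_model = watson_nlp.load("syntax_izumo_lang_en_stock")
sentiment_model = watson_nlp.load("sentiment_aggregated-cnn-workflow_lang_en_stock")

text = "The new release of the Watson libraries is a big step forward."

# Syntax analysis (tokens, lemmas, part of speech) feeds many downstream blocks
syntax_prediction = syntax_model.run(text)
print(syntax_prediction)

# The sentiment workflow runs directly on raw text
sentiment = sentiment_model.run(text)
print(sentiment)
```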
Speech To Text
- IBM Watson Speech Libraries for Embed
- Trial
- Entitlement Key
- Running IBM Watson Speech to Text in Containers (see the sketch after this list)
- Running IBM Watson Text To Speech in Minikube
- Documentation
- Model Catalog
- API Documentation (SaaS)
- SaaS Documentation
- Convert speech to text and extract meaningful insights from data
- Watson Speech To Text Analysis Notebook
- STT Spring Application
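The container posts above run the Speech to Text service locally and then call the same recognize API that the SaaS offering documents. As a minimal sketch, the snippet below sends a WAV file to a locally running container; the port, base path, and model name are assumptions and may differ in your setup.

```python
# Minimal sketch: send a WAV file to a locally running Watson Speech to Text container.
# Port, base path, and model name are assumptions; the /v1/recognize API is documented above.
import requests

STT_URL = "http://localhost:1080/speech-to-text/api/v1/recognize"  # assumed local endpoint

with open("sample.wav", "rb") as audio_file:
    response = requests.post(
        STT_URL,
        data=audio_file,
        headers={"Content-Type": "audio/wav"},
        params={"model": "en-US_Multimedia"},  # example model name
        timeout=60,
    )

response.raise_for_status()
# Print the best transcript for each recognized segment
for result in response.json().get("results", []):
    print(result["alternatives"][0]["transcript"])
```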