Visual Recognition with TensorFlow and OpenWhisk

My colleague Ansgar Schmidt and I have built a new demo which uses TensorFlow to predict types of flowers. The model is trained in a Kubernetes cluster on the IBM Cloud. Pictures can be uploaded via a web application to trigger the prediction code, which runs as an OpenWhisk function.

We’d like to open-source and document this demo soon, so keep an eye on our blogs. For now, here is a screenshot of the web application.


As a starting point for this demo we used the Google codelab TensorFlow for Poets. The lab shows how to leverage the transfer-learning capabilities of TensorFlow: you can take a predefined visual recognition model and retrain only the last layer of the neural network for your own categories. TensorFlow provides several visual recognition models; since we wanted to run the prediction code in OpenWhisk, we chose the smaller MobileNet model.
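To make the idea of retraining only the last layer concrete, here is a minimal, self-contained sketch (not the codelab's code): the pretrained network is treated as a fixed feature extractor producing "bottleneck" vectors, and only a new softmax layer is trained on top of them. The bottleneck data below is synthetic; in the real lab the vectors come from MobileNet.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 8   # real MobileNet bottlenecks have far more dimensions
n_classes = 2    # e.g. "daisy" vs. "tulip"

# Two synthetic clusters standing in for bottleneck vectors of two flower types.
X = np.vstack([rng.normal(0.0, 0.5, (50, n_features)),
               rng.normal(2.0, 0.5, (50, n_features))])
y = np.array([0] * 50 + [1] * 50)

# The only trainable parameters: weights and bias of the new last layer.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

one_hot = np.eye(n_classes)[y]
for step in range(500):            # mirrors --how_many_training_steps=500
    probs = softmax(X @ W + b)
    grad = probs - one_hot         # gradient of cross-entropy loss
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

The pretrained layers never change; only `W` and `b` are learned, which is why retraining takes minutes instead of days.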

Over the next few days we’ll blog about how to use Kubernetes and Object Storage to train and store your own models, and how to use OpenWhisk to run predictions.

If you want to experiment before then, you can run the following commands locally.

$ docker run -it --rm tensorflow/tensorflow:1.7.0 /bin/bash  # any image with Python and TensorFlow 1.x works
$ apt-get update
$ apt-get install -y git
$ git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
$ cd tensorflow-for-poets-2
$ curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C tf_files
$ IMAGE_SIZE=224
$ ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
$ python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

The training takes roughly five minutes on my MacBook Pro. Afterwards you can find the model in the files ‘tf_files/retrained_graph.pb’ and ‘tf_files/retrained_labels.txt’.
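The retrained "model" is really just these two plain files: the frozen graph (a binary protobuf) and a labels file with one category name per line, in the order the graph’s output scores use. Reading the labels file is trivial; a sketch with example contents (the file name comes from the lab, the contents shown are what the flower set produces):

```python
import os
import tempfile

# Example contents of retrained_labels.txt for the flower data set.
example = "daisy\ndandelion\nroses\nsunflowers\ntulips\n"
path = os.path.join(tempfile.mkdtemp(), "retrained_labels.txt")
with open(path, "w") as f:
    f.write(example)

# One category per line; line index = position in the model's output vector.
with open(path) as f:
    labels = [line.strip() for line in f if line.strip()]
```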

To run a prediction, execute the following command. Since the images in ‘tf_files/flower_photos’ were part of the training set, you might want to fetch (‘wget http://…’) another image first and change the image parameter.

$ python -m scripts.label_image \
    --graph=tf_files/retrained_graph.pb \
    --image=<path-to-an-image.jpg>

As a result you’ll get something like this:

Evaluation time (1-image): 0.079s
daisy 0.979356
dandelion 0.0125334
sunflowers 0.00809442
roses 1.40769e-05
tulips 2.08439e-06
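The scores are the softmax output of the retrained last layer, one value per line of the labels file, so they sum to (approximately) 1, and label_image sorts them by confidence before printing. A minimal sketch of that post-processing step, using the label names and scores from the output above:

```python
# Labels and softmax scores as produced by the prediction run above.
labels = ["daisy", "dandelion", "sunflowers", "roses", "tulips"]
scores = [0.979356, 0.0125334, 0.00809442, 1.40769e-05, 2.08439e-06]

# Pair each label with its score and sort by confidence, highest first.
ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
for label, score in ranked:
    print(label, score)

# Softmax probabilities over all categories sum to (approximately) 1.
total = sum(scores)
```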

Update 12/01/17: Ansgar and I have blogged about this project in more detail.