
How to colourise black & white pictures with OpenVINO™ on Ubuntu containers (Part 2)



This blog is the last part of a series – don’t miss parts one and zero. We’re on a mission to demonstrate OpenVINO™ on Ubuntu containers: from the consistently outstanding developer experience to the added trust in your software supply chain. In this blog, I’ll guide you through building and deploying an AI colouriser app on MicroK8s. The demo will give you a better feel for the Ubuntu-in-containers experience and how it makes developers’ lives easier, especially in complex environments like AI/ML.

The story so far: I misplanned my Christmas shopping and had to find a last-minute present for my grandparents (still hoping they’re not reading). Fortunately, I came across this blog. It gave me the best Christmas present idea ever: a handcrafted photo book of their childhood pictures, colourised using deep learning. Easy peasy, you think (sarcastically). But seriously, thanks to OpenVINO™ and Ubuntu containers, it is much easier than it sounds! You’ll see.

“What are you looking at?” (original picture source)

Demo architecture

“As a user, I can drag and drop black and white pictures to the browser so that it displays their ready-to-download colourised version.” – said the PM (me).

For that – replied the one-time software engineer (still me) – we only need:

  • A fancy yet lightweight frontend component.
  • OpenVINO™ Model Server to serve the neural network colourisation predictions.
  • A very light backend component.

Whilst we could target the Model Server directly from the frontend (it exposes a REST API), we still need a backend to apply transformations to the submitted image: each colourisation model expects a specific input format.

Finally, we’ll deploy these three services with Kubernetes because… well… because it’s groovy. And if you think otherwise (everyone is allowed to have a flaw or two), you’ll find a fully functional docker-compose.yaml in the source code repository.

Architecture diagram for the demo app (originally coloured tomatoes)

If anything I wrote before doesn’t make sense to you, make sure to read part one of the demo blog series. In the upcoming sections, we will first look at each component and then show how to deploy them with Kubernetes using MicroK8s. Don’t worry; the full source code is freely available, and I’ll link you to the relevant parts.

Neural network – OpenVINO Model Server

The colourisation neural network is published under the BSD 2-Clause License, accessible from the Open Model Zoo. It is pre-trained, so we don’t need to understand it in order to use it. However, let’s take a closer look at the input it expects. I also strongly encourage you to read the original work by Richard Zhang, Phillip Isola, and Alexei A. Efros. They made the approach super accessible and understandable on this website and in the original paper.

Neural network architecture (from arXiv:1603.08511 [cs.CV])

As you can see on the network architecture diagram, the neural network uses an unusual colour space: LAB. There are many 3-dimensional spaces to encode colours: RGB, HSL, HSV, etc. The LAB format is relevant here as it fully isolates the colour information from the lightness information. A grayscale image can therefore be encoded with only the L (for Lightness) axis. We will send only the L axis to the neural network’s input, and it will generate predictions for the colours encoded on the two remaining axes: A and B.

From the architecture diagram, we can also see that the model expects a 256×256 pixel input. For these reasons, we cannot just send our RGB-coded grayscale picture in its original size to the network: we need to transform it first.

We compare the results of two different model versions for the demo. Let them be called ‘V1’ (Siggraph) and ‘V2’. The models are served with the same instance of the OpenVINO™ Model Server as two different models. (We could also have done it with two different versions of the same model – read more in the documentation.)
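Serving two models from a single Model Server instance boils down to listing them in a configuration file passed to the server with --config_path. Below is a minimal sketch of such a file; the model names and base paths are assumptions for illustration and may not match what the demo repository actually uses.

{
    "model_config_list": [
        { "config": { "name": "colorization-siggraph", "base_path": "/models/colorization-siggraph" } },
        { "config": { "name": "colorization-v2", "base_path": "/models/colorization-v2" } }
    ]
}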

Finally, to build the Docker image, we use a first stage based on the Ubuntu development kit image to download and convert the model. We then rebase onto the more lightweight Model Server image.

# Dockerfile
FROM openvino/ubuntu20_dev:latest AS omz
# download and convert the model
…
FROM openvino/model_server:latest
# copy the model files and configure the Model Server
…

Backend – Ubuntu-based Flask app (Python)

For the backend microservice that interfaces between the user-facing frontend and the Model Server hosting the neural network, we chose Python. It has many valuable libraries for manipulating data, images included, especially for machine learning applications. To provide web serving capabilities, Flask is an easy choice.

The backend takes an HTTP POST request with the to-be-colourised picture and synchronously returns the colourised result built from the neural network predictions. In between, as we’ve just seen, it needs to convert the input to match the model architecture and turn the output back into a displayable result.
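As a rough illustration of that flow, here is a minimal Flask sketch. It assumes the ovmsclient package for the gRPC call, an input tensor named data_l, a form field named image, and the preprocess_input/postprocess_output helpers sketched in the next two snippets; the demo repository organises this differently, so treat it as an outline rather than the actual backend code.

# app.py – minimal backend sketch (illustrative, not the demo’s actual code)
import io

from flask import Flask, request, send_file
from ovmsclient import make_grpc_client

app = Flask(__name__)
# The Model Server address and the tensor name below are assumptions
client = make_grpc_client("modelserver:9000")

@app.route("/api/colourise/<model_name>", methods=["POST"])
def colourise(model_name):
    original = request.files["image"].read()
    # Hypothetical helpers, sketched just below
    model_input = preprocess_input(original)
    prediction = client.predict(inputs={"data_l": model_input}, model_name=model_name)
    colourised = postprocess_output(original, prediction)
    return send_file(io.BytesIO(colourised), mimetype="image/jpeg")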

Here’s what the transformation pipeline looks like on the input side:
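A minimal sketch of that input pipeline, assuming OpenCV and NumPy (the exact normalisation the network expects is documented in the Open Model Zoo, so check it against the model you download):

# Input pipeline sketch: original picture bytes -> 1x1x256x256 L-channel tensor
import cv2
import numpy as np

def preprocess_input(image_bytes):
    bgr = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)
    # Convert to LAB; on float images in [0, 1], OpenCV returns L in [0, 100]
    lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0]
    # Resize to the 256x256 input the network expects
    l_resized = cv2.resize(l_channel, (256, 256))
    # NCHW layout: batch of 1, a single (lightness) channel
    return l_resized.reshape(1, 1, 256, 256)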

And the output pipeline looks something like this:
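Here is a matching sketch of the output side: it merges the predicted A and B channels back with the original lightness channel at full resolution (again an illustration, not the demo’s exact code).

# Output pipeline sketch: original picture + 1x2x256x256 A/B prediction -> JPEG bytes
# (reuses the cv2 and np imports from the previous snippet)
def postprocess_output(image_bytes, ab_prediction):
    bgr = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)
    height, width = bgr.shape[:2]
    lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    # Predicted colours come back at 256x256; resize them to the original picture size
    ab = cv2.resize(ab_prediction[0].transpose(1, 2, 0), (width, height))
    # Recombine the original L channel with the predicted A/B channels
    colourised = cv2.cvtColor(np.dstack((lab[:, :, 0], ab)), cv2.COLOR_LAB2BGR)
    colourised = np.clip(colourised * 255, 0, 255).astype(np.uint8)
    _, encoded = cv2.imencode(".jpg", colourised)
    return encoded.tobytes()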

To containerise our Python Flask application, we use a first build stage with all the development dependencies to prepare the execution environment, then copy it onto a fresh Ubuntu base image to run it, configuring the Model Server’s gRPC connection (as detailed in the previous blog).
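Such a multi-stage Dockerfile could look roughly like the sketch below. The base images, file names and the environment variable are assumptions for illustration; the repository contains the real one, and extra system libraries may be needed depending on the Python dependencies.

# Dockerfile – illustrative two-stage sketch for the backend
FROM ubuntu:20.04 AS build
RUN apt-get update && apt-get install -y --no-install-recommends python3-venv
COPY requirements.txt .
RUN python3 -m venv /venv && /venv/bin/pip install -r requirements.txt

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /venv /venv
COPY . /app
WORKDIR /app
# Where to reach the Model Server over gRPC (hypothetical variable name)
ENV OVMS_URL=modelserver:9000
CMD ["/venv/bin/python3", "app.py"]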

Frontend – Ubuntu-based NGINX container and Svelte app

Finally, I put together a fancy UI for you to try the solution out. It’s a simple single-page application with a file input field, and it can display the results from the two colourisation models side by side.

I used Svelte to build the demo as a dynamic frontend. Below each colourisation result, there’s even a saturation slider (using a CSS transformation) so that you can emphasise the predicted colours and better compare the before and after.

To ship this frontend application, we again use a Docker image. We first build the application using the Node base image, then rebase it on top of the preconfigured NGINX LTS image maintained by Canonical. On the frontend side, a reverse proxy serves as a passthrough to the backend on the /api endpoint to simplify the deployment configuration. We do that directly in an nginx.conf configuration file copied to the NGINX templates directory; the container image is preconfigured to render these template files with environment variables.
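For illustration, such a template could look like the sketch below; the file name, variable names and paths are assumptions rather than the repository’s actual configuration.

# default.conf.template – illustrative reverse-proxy sketch
server {
    listen 80;

    # Serve the compiled Svelte bundle
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Pass /api calls straight through to the backend service
    location /api/ {
        proxy_pass http://${BACKEND_HOST}:${BACKEND_PORT}/;
    }
}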

Deployment with Kubernetes

I hope you had the time to scan some black and white pictures, because things are about to get serious(ly colourised).

From the next section onwards, we’ll assume you already have a running Kubernetes installation. If you don’t, run the following steps or go through this MicroK8s tutorial.

# https://microk8s.io/docs
sudo snap install microk8s --classic

# Add current user ($USER) to the microk8s group
sudo usermod -a -G microk8s $USER && sudo chown -f -R $USER ~/.kube
newgrp microk8s

# Enable the DNS, Storage, and Registry addons required later
microk8s enable dns storage registry

# Wait for the cluster to be in a Ready state
microk8s status --wait-ready

# Create an alias to enable the `kubectl` command
sudo snap alias microk8s.kubectl kubectl
Yes, you deployed a Kubernetes cluster in about two command lines.

Build the components’ Docker images

Every component comes with a Dockerfile to build itself in a standard environment and ship its deployment dependencies (read What are containers for more information). They all create an Ubuntu-based Docker image for a consistent developer experience.

Before deploying our colouriser app with Kubernetes, we need to build and push the components’ images. They need to be hosted in a registry accessible from our Kubernetes cluster. We will use the built-in local registry with MicroK8s. Depending on your network bandwidth, building and pushing the images will take a few minutes or more.

sudo snap install docker
cd ~ && git clone https://github.com/valentincanonical/colouriser-demo.git

# Backend
docker build backend -t localhost:32000/backend:latest
docker push localhost:32000/backend:latest

# Model Server
docker build modelserver -t localhost:32000/modelserver:latest
docker push localhost:32000/modelserver:latest

# Frontend
docker build frontend -t localhost:32000/frontend:latest
docker push localhost:32000/frontend:latest

Apply the Kubernetes configuration files

All the components are now ready for deployment. The Kubernetes configuration files are available as deployment and service YAML descriptors in the ./k8s folder of the demo repository. We can apply them all at once, in one command:

kubectl apply -f ./k8s

Give it a few minutes. You can watch the app being deployed with watch kubectl get pods. Of all the services, the frontend one has a specific NodePort configuration to make it publicly accessible by targeting the node’s IP address (see the sketch below).
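For reference, the NodePort idea boils down to a service descriptor along these lines; apart from port 30000 (used in the URL below), the names and ports are assumptions, and the actual descriptor lives in the ./k8s folder.

# frontend-service.yaml – illustrative NodePort sketch
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30000   # the port used in the URL below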

Once ready, you can access the demo app at http://localhost:30000/ (or replace localhost with a cluster node IP address if you’re using a remote cluster). Pick an image from your computer, and get it colourised!

All in all, the project was pretty easy considering the task we accomplished. Thanks to Ubuntu containers, building each component’s image with multi-stage builds was a consistent and straightforward experience. And thanks to OpenVINO™ and the Open Model Zoo, serving a pre-trained model with excellent inference performance was a simple task accessible to all developers.

That’s a wrap!

You didn’t even have to share your pics over the Internet to get it done. You are now ready to add some spice to the Christmas Eve family dinner: “Do you remember your cute blue anorak?” “I’m pretty sure it was red.” “No, no… too bad the photos were black and white.” “Oh wait, I have something for you.” Pretty nice, eh?

Thanks for reading this series; I hope you enjoyed it. Feel free to reach out on socials. I’ll leave you with one last colourisation example, right on theme for the holidays.

Christmassy colourisation example (original picture source)

Wishing you all the best holidays, see you in 2022!

