
How to build and deploy your first AI/ML model on Ubuntu

James Nunns

on 2 October 2018

Tags: AI , cloud , Kaggle , Kubeflow , kubernetes , ML , Ubuntu



Artificial intelligence and machine learning (AI/ML) have stolen the hearts and minds of the public, the press and businesses.

The technological advances in the field have helped to transport AI from the world of fiction, into something more tangible, and within touching distance.

However, despite the hype, AI in the ‘real world’ isn’t quite yet a reality.

AI is yet to take over, or see mass adoption, and there are still lengthy debates to be had as to what exactly can be considered AI and what is not.

Still, AI promises much, and there seems to be no stopping its forward march. For better or for worse, AI is here to stay.

Fortunately for us, Ubuntu is the premier platform for these ambitions. From developer workstations and server racks to the cloud and the edge with smart connected IoT devices, Ubuntu can be seen far and wide as the platform of choice.

What this means is that we have quite a lot to talk about when it comes to AI and machine learning: from introducing the topic in our first webinar, ‘AI, ML, & Ubuntu: Everything you need to know’ (which you can still watch on demand), to detailing ‘How to build and deploy your first AI/ML model on Ubuntu’ in our latest webinar.

In this webinar, join Canonical’s Kubernetes Product Manager Carmine Rimi for a demo with step-by-step instructions and multiple examples of how to get an AI/ML environment up and running on Ubuntu.

By the end of this webinar you will:

  • Know how to run a Kaggle experiment on Kubeflow on MicroK8s on Ubuntu
  • Finish with reproducible instructions fit for any machine learning exercise
  • Be able to run experiments locally or in the cloud
  • Leave with a summary of the commands needed to quickly launch your own ML environment
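To give a flavour of the kind of command summary the webinar promises, here is a minimal sketch of standing up MicroK8s on Ubuntu as the base for a Kubeflow deployment. These are standard MicroK8s snap commands; add-on names (e.g. `storage`) have varied between MicroK8s releases, so treat this as an illustration rather than the webinar’s exact script.

```shell
# Install MicroK8s, a minimal single-node Kubernetes, from the snap store
sudo snap install microk8s --classic

# Block until the local cluster reports itself ready
sudo microk8s status --wait-ready

# Enable add-ons that Kubeflow components typically need:
# cluster DNS and a default local storage class
sudo microk8s enable dns storage

# Confirm the node is up before layering Kubeflow on top
sudo microk8s kubectl get nodes
```

With the cluster ready, Kubeflow can then be deployed on top and a Kaggle notebook or experiment run against it, locally or on a cloud VM.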

And finally, we’ll be taking some time to answer your questions in a Q&A session.

It’s time to stop just talking about artificial intelligence and machine learning and become an active participant by learning how to build and deploy your first AI/ML model on the developer’s platform of choice – Ubuntu.

Join us on 17 October to begin your AI journey.

Register for webinar


Run Kubeflow anywhere, easily

With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.

Charmed Kubeflow is a collection of Python operators that define integration of the apps inside Kubeflow, like katib or pipelines-ui.

Use Kubeflow on-prem, desktop, edge, public cloud and multi-cloud.
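As a rough sketch of what deploying Charmed Kubeflow looks like, the commands below follow the pattern in Canonical’s Charmed Kubeflow documentation: bootstrap Juju onto an existing MicroK8s cluster, then deploy the `kubeflow` bundle. Bundle names and flags can change between releases, so check the current docs before running these.

```shell
# Bootstrap a Juju controller into a local MicroK8s cluster
juju bootstrap microk8s

# Create a model (maps to a Kubernetes namespace) for Kubeflow
juju add-model kubeflow

# Deploy the Charmed Kubeflow bundle; --trust grants the charms
# the cluster permissions they request
juju deploy kubeflow --trust

# Watch the applications (katib, pipelines-ui, etc.) come up
juju status --watch 5s
```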

Learn more about Charmed Kubeflow ›


What is Kubeflow?

Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.

Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes’ ability to run independent and configurable steps with machine-learning-specific frameworks and libraries.

Learn more about Kubeflow ›


Install Kubeflow

The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.

You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high-availability.

Install Kubeflow ›

