Canonical at NVIDIA GTC 2019
Canonical, 7 March 2019
Date: March 18 – 21, 2019
Location: San Jose Convention Center – San Jose, CA
Booth: 1422
20% Discount Code: GMXGTC
Canonical is delighted to be attending NVIDIA's GTC Conference in San Jose, California, March 18 – 21, 2019.
Canonical presentations
Unprivileged GPU Containers on a LXD Cluster
Thursday, Mar 21, 11:00 AM – 11:50 AM – SJCC Room 212B (Concourse Level)
In this presentation by Stephane Graber and Christian Brauner, you'll learn how to run GPU workloads securely in isolated, unprivileged containers across a multi-node LXD cluster. We'll explain what unprivileged containers are and why they're safe, then use demos to show how LXD can create a whole cluster of unprivileged containers that are isolated from each other. We will also show how LXD makes it trivial to pass physical GPUs through to containers, and how it exposes a wide range of NVIDIA-specific options by leveraging NVIDIA's libnvidia-container library, so each container can easily get one or more dedicated GPUs for its workloads. This session will help participants understand how running GPU-intensive workloads can be effortless with the help of a dedicated container manager that is aware of NVIDIA-specific features.
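The GPU passthrough workflow the session covers can be sketched in a few LXD commands. This is a minimal illustration, not the speakers' exact demo: the container name ml0 and the Ubuntu 18.04 image are placeholders, and configuration keys can vary between LXD versions.

```shell
# Launch an unprivileged container (the LXD default)
lxc launch ubuntu:18.04 ml0

# Attach the host's GPUs to the container as a "gpu" device
# (append id=0 to the command to pin one specific card)
lxc config device add ml0 gpu0 gpu

# Let LXD use libnvidia-container to expose the matching NVIDIA
# userspace libraries inside the container, keeping driver versions
# between host and container in sync
lxc config set ml0 nvidia.runtime true

# Restart so the runtime change takes effect, then verify
lxc restart ml0
lxc exec ml0 -- nvidia-smi
```

Because the container is unprivileged, the workload runs under a shifted UID range on the host, so a compromised container process has no root access outside its namespace even while using the physical GPU.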
Up and Running with Kubeflow Anywhere
Thursday, Mar 21, 11:00 AM – 11:50 AM – SJCC Room 210A (Concourse Level)
In his session, Tim Van Steenburgh will explain how Kubeflow has emerged as the de facto way to do ML on Kubernetes. Learn the fastest way to go from GPU machines to an operational Kubeflow cluster on any public or private cloud, bare metal, or even your own laptop. We'll demonstrate the tools and steps needed to deploy Kubeflow quickly at any scale, and show a fast new way to deploy Kubeflow on a single laptop or desktop, which is perfect for local ML development. Don't lose time deploying and configuring Kubernetes and Kubeflow — get to the ML as quickly as possible.
Canonical booth
With Kubeflow running in a fully supported Kubernetes environment, your engineers can focus on building apps for their specific use case, while we take care of what is needed to run those apps at scale. Then, once you're comfortable, you can choose to manage the solution yourself using our suite of automation tooling, or consume it as a managed service, with Canonical operating, supporting, and upgrading it under commercial SLAs.
If you’re interested in seeing how these offerings are supporting companies with AI/ML engineers then join us at booth 1422.
At the booth we will be holding a number of demos, including:
- AI/ML beyond Kubeflow: pipelines, what they are and how they give data scientists repeatable patterns
- Kubernetes with JaaS
- MicroK8s/Kubeflow on a GPU
Canonical will be hosting a happy hour on Wednesday, March 20 from 7pm to 10pm at the Continental Bar Lounge & Patio. Join us for an evening of drinks, tacos, and tantalising chat around Kubernetes and Kubeflow. We will have representatives from both Canonical and NVIDIA. Please register below to attend.
Kubeflow and DrinkFlow Happy Hour
Date: Wednesday, March 20, 2019
Time: 7pm to 10pm
Location: The Continental Bar Lounge & Patio
Drink Tickets: Two per person
Run Kubeflow anywhere, easily
With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.
Charmed Kubeflow is a collection of Python operators that define the integration of the apps inside Kubeflow, such as katib or pipelines-ui.
Use Kubeflow on-prem, on the desktop, at the edge, on public cloud, or across multiple clouds.
What is Kubeflow?
Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.
Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes' ability to run independent, configurable steps with machine learning-specific frameworks and libraries.
Install Kubeflow
You can install Kubeflow on your workstation, a local server, or a public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.
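One common path is the MicroK8s snap with its Kubeflow add-on. This is a sketch, assuming a MicroK8s release that ships the kubeflow add-on; channel names and add-ons may differ on your version, so check microk8s.status for what is available:

```shell
# Install MicroK8s, a single-node Kubernetes distribution packaged as a snap
sudo snap install microk8s --classic

# Enable the add-ons Kubeflow relies on (add "gpu" on a machine
# with NVIDIA hardware)
sudo microk8s.enable dns storage

# Deploy Kubeflow via the bundled add-on
sudo microk8s.enable kubeflow

# Watch the Kubeflow pods come up
microk8s.kubectl get pods -n kubeflow
```

Once the pods are running, the Kubeflow dashboard address is printed by the enable step, and the same cluster can later be joined with other MicroK8s nodes for high availability.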