
Build your LLM factory with NVIDIA and Canonical

Hands-on workshop at World AI Summit, Amsterdam

Sign up for the workshop

Time and date: 2:30pm-3:30pm, Oct 11th, 2023
Location: Track 7, Room Jeanne D’Arc, Taets Art & Event Park, Amsterdam, the Netherlands

Building an AI factory demands an integrated suite of tools. Such a setup empowers large teams of data scientists, researchers, data engineers, analysts, and other AI professionals to collaboratively build AI models, with an emphasis on scalability and methodical ways of working.

However, creating an AI factory is challenging due to the intricate interplay of hardware and software. NVIDIA and Canonical have forged an end-to-end solution to simplify the development and deployment of machine learning models. This solution facilitates collaboration among AI developers, while abstracting complexities related to hardware, drivers, and other foundational software.

In this workshop, participants will discover:

  • How to scale AI efforts, transitioning from locally-running AI containers to an open-source MLOps platform, powered by NVIDIA AI containers and Charmed Kubeflow.
  • The customization and deployment process of large language models (LLMs), starting with a foundational model (a minimal illustrative sketch follows this list).
  • An in-depth look at the components used, emphasising security enhancements, reproducibility, and integration techniques.
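
To give a flavour of the kind of customization workflow covered, the sketch below shows one possible way to fine-tune a small foundation model in Python using the Hugging Face transformers and datasets libraries. The model, dataset, and training settings are placeholders chosen purely for illustration and are assumptions, not the specific tooling or configuration used in the workshop.

    # Minimal sketch: customizing a foundation model on a small text corpus.
    # All names below (distilgpt2, wikitext) are illustrative placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "distilgpt2"  # placeholder foundation model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Stand-in for a domain-specific corpus: a small slice of a public dataset.
    dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    # Short training run; in practice the epochs, batch size and hardware
    # (e.g. NVIDIA GPUs) would be sized to the use case.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="llm-finetune",
                               num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

In a production setting this kind of training job would typically run as a step inside an MLOps platform such as Charmed Kubeflow rather than on a single workstation, which is the transition the workshop walks through.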

By the workshop’s conclusion, attendees will:

  • Possess a comprehensive understanding of a stack combining both hardware and software tailored for LLMs.
  • Be adept at using open-source tools for developing and deploying LLMs.
  • Grasp methods to navigate and surmount AI scaling challenges.
  • Have the opportunity to direct questions to NVIDIA and Canonical experts in real time.

Agenda:

Duration: 60 minutes

  • Introduction to LLMs
  • Architecture of the end-to-end solution, from the hardware layer to open-source libraries
  • Demo of the solution
  • Walkthrough of the customization and deployment of the LLM for the chosen use case
  • Live Q&A

What you need to bring:

A laptop, so you can start building your project hands-on.

Speakers and tutors:
