
2020-08-13

Production AI from data lake to server

Solving hardware challenges in ML environments

Infrastructure is critical to enabling AI/ML teams to produce fast, valuable results for high-performance computing problems while maximising resource utilisation. Purpose-built workstations and servers address interrelated hardware problems across the whole workflow, from prototyping on the workstation to deploying and scaling on the server, allowing research teams to take on more complex workloads.

We will discuss

  • Design and best-practice considerations from workstation to server, with practical examples
  • Prioritising security, performance and cost
  • The role of Kubeflow in making AI work best for business needs (a brief pipeline sketch follows this list)
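
To give a flavour of where Kubeflow fits between the data lake and the server, here is a minimal sketch of a two-step pipeline written with the Kubeflow Pipelines (kfp) SDK, v1 style. The container images, step names and arguments are hypothetical placeholders, not material from the webinar itself.

    # Minimal Kubeflow Pipelines sketch (kfp v1 SDK). Image names and
    # arguments are hypothetical placeholders for a preprocess-then-train flow.
    import kfp
    from kfp import dsl


    @dsl.pipeline(name="workstation-to-server",
                  description="Preprocess data, then train a model")
    def train_pipeline(data_path: str = "/mnt/data"):
        # Step 1: prepare the raw data pulled from the data lake.
        preprocess = dsl.ContainerOp(
            name="preprocess",
            image="example.com/preprocess:latest",  # hypothetical image
            arguments=["--input", data_path],
        )

        # Step 2: train on the prepared data; runs only after preprocessing.
        train = dsl.ContainerOp(
            name="train",
            image="example.com/train:latest",  # hypothetical image
            arguments=["--data", data_path],
        )
        train.after(preprocess)


    if __name__ == "__main__":
        # Compile to a workflow YAML that can be uploaded to a Kubeflow cluster.
        kfp.compiler.Compiler().compile(train_pipeline, "train_pipeline.yaml")

The same pipeline definition can be iterated on from a workstation and then submitted to a Kubeflow deployment running on servers, which is the workstation-to-server path the session covers.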

Who should attend

AI/ML data engineers, scientists, research leaders, product managers, developers and ops teams who want to maximise time spent on producing results.