ODSC East 2022 Keynote: NVIDIA and Red Hat on Accelerating AI/ML Deployments with Enterprise-Grade MLOps

ODSC - Open Data Science
4 min read · May 17, 2022


At ODSC East, Red Hat’s Abhinav Joshi and NVIDIA’s Matt Akins delivered a keynote on how to accelerate AI/machine learning deployments with enterprise-grade MLOps, grounding the discussion in a customer use case.

As artificial intelligence (AI) adoption continues to grow, entire industries are embracing the technology and integrating it into cloud-native applications. The reason is clear: AI/ML deployments have shown they can empower both internal teams and customers. Abhinav Joshi detailed these points during the introduction to his talk, but in short, these applications surface new insights from customer data for an organization’s teams. As NVIDIA’s Matt Akins put it, “Data is a critical business asset.”

Because of this, over the last several years more and more companies have taken the bull by the horns and applied AI/ML tools to their business models. These companies span some of the best-known sectors of the economy. In healthcare, AI/ML deployments are being used to speed up and improve diagnoses, and to improve overall outcomes by recognizing patterns in specific illnesses that medical professionals can miss. In financial services, AI/ML deployments not only reduce instances of fraud but also allow for even greater personalization of services.

These are only a few examples, but the reach of AI/ML tools can also be felt in telecommunications, insurance, and of course the automotive industry, with the growth of autonomous driving, personalized maintenance, and improved supply chains, among others. It’s clear that this growth is here to stay, which is why Red Hat and NVIDIA partnered to accelerate enterprise AI projects with GPUs, MLOps, and more.

Through their collaboration, Red Hat and NVIDIA are powering open source-based MLOps that lean on their respective areas of expertise: data engineering, compute acceleration (GPU, DPU), ML data sources and databases, and infrastructure. This enables teams to work together more effectively and securely.
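To make platform-provisioned GPU acceleration concrete, here is a minimal, illustrative sketch (not from the keynote) of the kind of sanity check a data scientist might run in a self-service notebook before training. It assumes a PyTorch-based environment in which the platform, for example via the NVIDIA GPU Operator on OpenShift, has already exposed GPUs to the container.

```python
# Illustrative sketch: confirm which accelerators the platform has
# exposed to this workload before launching a training job.
# Assumes a PyTorch image with NVIDIA drivers provisioned by the
# cluster (e.g., via the NVIDIA GPU Operator).
import torch

def describe_accelerators() -> None:
    """Print the GPUs visible to this container, or note the CPU fallback."""
    if not torch.cuda.is_available():
        print("No GPU visible; falling back to CPU.")
        return
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.1f} GiB memory")

if __name__ == "__main__":
    describe_accelerators()
```

In a self-service setup like the one described in the keynote, a check like this is the data scientist’s only contact with the infrastructure layer; everything below it is handled by the platform team.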

Abhinav Joshi of Red Hat then walked through a use case with Turkcell, a leading mobile phone provider based in Turkey that serves over 68.9 million subscribers in nine countries. Legacy platforms were hurting time-to-market for the company’s digital services, monolithic approaches to AI/ML projects were slow to come online while requiring a great deal of up-front investment, and a lack of automation hindered the process of getting new digital services from concept to customers. To solve this, a hybrid approach was applied: building a reliable and scalable cloud environment where the client could create the tools and applications they need, when they need them.

The results in this use case were staggering: delivery speed doubled, and savings on AI workload costs reached upwards of 70%. NVIDIA’s and Red Hat’s enterprise solutions offer organizations a comprehensive choice of frameworks and tools while providing self-service experiences that let data scientists rapidly build and share AI models before rollout. This sandbox allows Turkcell to innovate in-house at its own pace.

What was interesting toward the end of Joshi’s portion of the keynote was his observation that while companies such as Turkcell were able to build the stack needed to solve their issues themselves, the majority of organizations would need the open-source community to think in terms of an integrated AI platform, “that has been pre-tested, certified, fully supported, and almost ready for AI practitioners so they can start using it instantly to create AI capabilities.” This would allow data scientists to work directly on their projects instead of having to build out infrastructure resources and software tools first.

The ability to begin creating models right away instead of having to worry about infrastructure and software tools can be a game-changer for organizations looking to accelerate their own AI/ML workloads. There are still challenges to address, such as the availability of AI infrastructure, but with an open source-powered integrated AI platform, enterprise-grade support, software tools, and MLOps, many of the issues holding teams back can be addressed in the near term.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
