AWS brings managed open source MLflow to Amazon SageMaker




An AWS service, available since 2017, is foundational for today’s popular generative AI models.

Amazon SageMaker launched in 2017 and has been steadily iterated on in the years since. While much of the generative AI limelight at AWS over the last year has focused on Amazon Bedrock, Amazon SageMaker continues to offer a critical set of capabilities.

Amazon SageMaker is an AWS service for managing the entire machine learning lifecycle, providing a managed environment and tools for customers to build, train and deploy machine learning and deep learning models at scale. Hundreds of thousands of customers use Amazon SageMaker for tasks like training popular gen AI models and deploying machine learning workloads. SageMaker helped train Stability AI’s Stable Diffusion, and it is the machine learning (ML) service behind Luma’s Dream Machine text-to-video generator.

AWS is now expanding those capabilities further with the general availability of the managed MLflow on SageMaker service. MLflow is a popular open-source platform for the machine learning lifecycle, covering experimentation, reproducibility, deployment and monitoring of machine learning models. With the availability of managed MLflow for Amazon SageMaker, AWS is giving its users more power and choice for building the next generation of AI models.




“Given the current pace of innovation in the space, our customers are looking to move quickly from experimentation to production, and really accelerate time to market,” Ankur Mehrotra, director and general manager of Amazon SageMaker at AWS, told VentureBeat. “So we’re launching MLflow as a managed capability within SageMaker where you can, with a few clicks, set up and launch MLflow within a SageMaker development environment.”

What MLflow brings to AWS users

Developers and organizations widely use the open-source MLflow project for MLOps. Mehrotra highlighted that the new managed MLflow on SageMaker service offers enterprise users more choice without replacing existing features.

By offering MLflow as a fully managed service tightly coupled with SageMaker, AWS aims to provide an integrated experience leveraging the capabilities of both platforms.

“As they’re iterating over their models, creating different variants, they can log those metrics in MLflow and track and compare different iterations really easily, which is something that MLflow is great for,” Mehrotra said. “And then they can register those models in a model registry and then easily from there deploy those models.”

A key aspect of the new managed MLflow service is its deep integration with existing SageMaker components and workflows. Actions taken in MLflow automatically sync to services like the SageMaker Model Registry.

“We’ve built this in a way where it’s integrated with the rest of SageMaker capabilities, whether it’s training, deployment, model hosting or our SageMaker Model Registry, so customers get a fully managed, seamless experience of using MLflow within SageMaker,” Mehrotra explained.

AWS already had several organizations try out the managed service while it was in beta. Among the early users are web hosting provider GoDaddy and Toyota Connected, a subsidiary of Toyota Motor Corporation.

The SageMaker and Bedrock intersection

While Amazon SageMaker has traditionally focused on the end-to-end machine learning lifecycle, AWS has introduced new services like Amazon Bedrock aimed at building generative AI applications. 

Mehrotra clarified SageMaker’s role in this emerging AI ecosystem.

“SageMaker is basically the service for building a model, training a model, deploying the model, whereas Bedrock is the best service for creating generative AI-based applications,” Mehrotra said. “Many of our customers use multiple services – SageMaker, Bedrock and others – to create their generative AI solutions.”

He highlighted how developers can build models in SageMaker and then deploy them into AI applications via Bedrock, leveraging its serverless capabilities. The two services are complementary parts of AWS’s broader generative AI stack.

Amazon SageMaker’s strategic path forward

Looking ahead, Mehrotra outlined some of the key priorities driving Amazon SageMaker’s product roadmap and investments, noting that AWS is focusing on a few different areas.

One key area of focus is helping customers improve scale while optimizing cost.

“We are also focusing on reducing the undifferentiated, heavy lifting for customers as they build new AI solutions,” he said. “You’re going to see more capabilities from us that make it really easy and simple for customers to create these solutions and take them to market faster.”
