Amazon Bedrock is now generally available as AWS enterprise GenAI efforts get serious

Making a foundation model for generative AI available is just the beginning, not a comprehensive solution, when it comes to supporting the complex demands of enterprise use cases.

Today, in a major step forward, Amazon Web Services (AWS) announced the general availability of its Amazon Bedrock service, a vital tool in meeting the requirements of enterprise applications for generative AI.

AWS first introduced Amazon Bedrock in April as a preview service, offering a series of foundation models as a service on its cloud platform. The preview was subsequently expanded in July with the addition of more models, including Anthropic Claude 2 and Stability AI SDXL 1.0. Now generally available, Amazon Bedrock supports a range of models, including the company’s own Amazon Titan Embeddings model.

The transition of a service to general availability on AWS is not a decision made lightly; rather, it’s the culmination of rigorous testing and subsequent enhancements, informed by initial user feedback.

“It’s a normal process for us to launch something in preview, testing closely with a few customers to get feedback, and these are very deep interactions, so we don’t want to start with a lot of people,” Vasi Philomin, VP and GM for Generative AI at Amazon, told VentureBeat. “We’ve got a team that has to interact with these customers to really understand where we could do better and what other things we may be missing.”

How Amazon Bedrock has improved to be enterprise-ready for GA

The path to general availability is about reliability and hardening, so that the service is production-ready for enterprise workloads.

Among the many things AWS has improved in Amazon Bedrock to make it enterprise-ready is regulatory compliance. One regulation the service now complies with, according to Philomin, is the European Union’s GDPR (General Data Protection Regulation).
“We’re talking about enterprise customers and they need to be in compliance with GDPR and that requires a lot of work and we’ve done all of that,” he said.

As part of compliance, enterprises typically also need observability and audit capabilities. To that end, Amazon Bedrock as a generally available service also now integrates with the Amazon CloudWatch service for logging.
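
For teams wondering what that looks like in practice, here is a minimal sketch using the boto3 SDK. It assumes Bedrock’s model invocation logging configuration API and uses a hypothetical CloudWatch log group and IAM role, so treat it as an illustration rather than a drop-in setup.

    import boto3

    # Control-plane client for Amazon Bedrock (separate from the runtime client used for inference).
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Route model invocation logs to CloudWatch; the log group and role ARN are hypothetical placeholders.
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig={
            "cloudWatchConfig": {
                "logGroupName": "/bedrock/model-invocations",
                "roleArn": "arn:aws:iam::123456789012:role/BedrockCloudWatchLogs",
            },
            "textDataDeliveryEnabled": True,
        }
    )

    # Read back the configuration now in effect.
    print(bedrock.get_model_invocation_logging_configuration())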

Cost control is another critical component for making any service ready for broad enterprise consumption. After all, most organizations have accounting departments and budgets that need to be respected.

Provisioned throughput is a capability that AWS announced for Amazon Bedrock as part of today’s updates. It allows customers to pay for a set amount of throughput from a generative AI model, guaranteeing both cost protection and performance levels. With provisioned throughput, customers can specify how many “model units” of throughput they need, avoiding throttling issues if demand spikes.
Philomin noted that the provisioned throughput feature gives customers guaranteed cost caps and assured throughput for their applications, which is important for truly adopting these technologies at scale in an enterprise setting.
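
A rough sketch of that workflow with the boto3 SDK follows. The model ID, name and unit count are placeholders, and the exact parameters (such as commitment terms) depend on the model and account, so this illustrates the flow rather than serving as a definitive recipe.

    import json
    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Reserve a fixed amount of throughput, measured in model units, for a chosen model.
    response = bedrock.create_provisioned_model_throughput(
        provisionedModelName="my-app-claude",  # hypothetical name
        modelId="anthropic.claude-v2",
        modelUnits=1,
    )
    provisioned_arn = response["provisionedModelArn"]

    # Inference calls then target the provisioned model's ARN instead of the on-demand
    # model ID, so they draw from the reserved, pre-paid capacity.
    result = runtime.invoke_model(
        modelId=provisioned_arn,
        body=json.dumps({"prompt": "\n\nHuman: Hello\n\nAssistant:", "max_tokens_to_sample": 50}),
    )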

Amazon Titan Embeddings brings new power to generative AI accuracy

A key part of today’s general availability is the Amazon Titan Embeddings model, which AWS built in-house.

Amazon Titan Embeddings is useful for retrieval augmented generation (RAG) use cases, which can dramatically improve the accuracy of generative AI. The model takes text as input and converts it into mathematical vector representations known as embeddings. This allows documents and queries to be mapped into the same embedding space, improving accuracy when retrieving the relevant document fragments used to ground answers.
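
A minimal sketch of that embedding step, assuming the boto3 runtime client and the amazon.titan-embed-text-v1 model identifier, looks roughly like this:

    import json
    import boto3

    # Runtime client used for inference calls against models hosted on Bedrock.
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    def embed(text: str) -> list:
        """Convert a piece of text into its vector representation with Titan Embeddings."""
        response = runtime.invoke_model(
            modelId="amazon.titan-embed-text-v1",
            body=json.dumps({"inputText": text}),
        )
        payload = json.loads(response["body"].read())
        return payload["embedding"]

    # Queries and document fragments land in the same vector space,
    # so their similarity can be compared directly.
    query_vector = embed("How do I rotate my access keys?")
    doc_vector = embed("Access keys can be rotated from the IAM console under security credentials.")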

Philomin commented that when Amazon Titan Embeddings was first made available in preview, the initial group of users had a lot of feedback. One of the things they asked for was a larger token window, to enable the model to handle larger documents. That change is now reflected in the generally available service, to help ensure it can meet enterprise requirements.

Amazon Titan Embeddings is also being used in combination with other large language models (LLMs) on Amazon Bedrock. Philomin noted that some customers pair Amazon Titan Embeddings with Anthropic’s Claude 2 model to implement chatbots whose knowledge is stored externally as documents. The Titan Embeddings model embeds the documents into a vector space, while Claude 2 provides the conversational capabilities. This allows the chatbot to retrieve relevant knowledge fragments from the embedded documents to answer questions, without requiring retraining of the language models as the knowledge sources evolve.
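
A compressed sketch of that pattern, reusing the embed() helper and runtime client from the earlier snippet and assuming the anthropic.claude-v2 model identifier, might look like the following; a production deployment would normally swap the in-memory list for a vector database.

    import json
    import math

    # Tiny in-memory "knowledge base": each document is stored alongside its Titan embedding.
    documents = [
        "Refunds are processed within five business days.",
        "Support is available around the clock via chat.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def answer(question: str) -> str:
        # Retrieve the most relevant fragment by embedding similarity...
        question_vector = embed(question)
        best_doc, _ = max(index, key=lambda item: cosine(question_vector, item[1]))
        # ...then let Claude 2 handle the conversational part, grounded in that fragment.
        prompt = f"\n\nHuman: Using this context: {best_doc}\nAnswer the question: {question}\n\nAssistant:"
        response = runtime.invoke_model(
            modelId="anthropic.claude-v2",
            body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
        )
        return json.loads(response["body"].read())["completion"]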

CodeWhisperer previews new features

Alongside the general availability of Amazon Bedrock, AWS today also announced a preview of new capabilities for the Amazon CodeWhisperer generative AI service.

The new capabilities now enable enterprise users to benefit from an organization’s own private code repositories in a safe and secure manner.

“This unlocks new levels of developer productivity,” Philomin said. “General coding assistants are usually just that, general purpose; they know how to write code generally, but they wouldn’t know anything about your internal code, because they’ve never had an opportunity to learn from that.”
