Deepgram Brings Real-Time Speech Intelligence to Amazon SageMaker AI

Deepgram’s streaming speech solution delivers enterprise-grade speech-to-text, text-to-speech, and voice agents with sub-second latency, directly through the SageMaker API.

Deepgram, a real-time Voice AI platform, has announced native integration with Amazon SageMaker AI, delivering streaming, real-time speech-to-text (STT), text-to-speech (TTS), and the Voice Agent API as Amazon SageMaker AI real-time endpoints, with no custom pipelines or orchestration required.

Teams can now build, deploy, and scale voice-powered applications inside their existing AWS workflows while maintaining the security and compliance benefits of their AWS environment.

“Deepgram’s integration with Amazon SageMaker represents an important step forward for real-time voice AI,” said Scott Stephenson, CEO and Co-Founder, Deepgram. “By bringing our streaming speech models directly into SageMaker, enterprises can deploy speech-to-text, text-to-speech, and voice agent capabilities with sub-second latency, all within their AWS environment. This collaboration extends SageMaker’s functionality and gives developers a powerful way to build and scale voice-driven applications securely and efficiently.”

Native streaming via Amazon SageMaker endpoints means no workarounds or hoops to jump through, just clean, real-time inference through the SageMaker API. The integration enables sub-second latency and enterprise-grade reliability for high-scale use cases like contact centres, trading floors, and live analytics.

Built to run on AWS, the solution supports streaming responses via the SageMaker Runtime InvokeEndpointWithResponseStream API and keeps data within AWS.
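For a sense of what consuming a streamed response looks like in practice, here is a minimal Python sketch using boto3 against a SageMaker real-time endpoint. The endpoint name, audio file, content type, and payload handling are assumptions for illustration only, not values published by Deepgram; consult Deepgram’s SageMaker documentation for the actual request format.

```python
import boto3

# Hypothetical values for illustration: the endpoint name, audio file,
# and content type are assumptions, not published Deepgram defaults.
ENDPOINT_NAME = "deepgram-stt-streaming"
AUDIO_FILE = "sample.wav"

runtime = boto3.client("sagemaker-runtime")

with open(AUDIO_FILE, "rb") as f:
    audio_bytes = f.read()

# InvokeEndpointWithResponseStream returns output incrementally instead of
# waiting for a single final payload, which is what enables low
# first-byte latency for transcription.
response = runtime.invoke_endpoint_with_response_stream(
    EndpointName=ENDPOINT_NAME,
    ContentType="audio/wav",
    Body=audio_bytes,
)

# The Body is an event stream; each PayloadPart event carries a chunk of the
# model's streamed output as raw bytes.
for event in response["Body"]:
    if "PayloadPart" in event:
        print(event["PayloadPart"]["Bytes"].decode("utf-8"), end="", flush=True)
```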

Customers can deploy Deepgram in their Amazon Virtual Private Cloud (Amazon VPC) or as a managed service, aligning with stringent data residency and compliance requirements.
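As a hedged sketch of the VPC deployment path using standard SageMaker APIs: the VpcConfig block on create_model is what pins an endpoint’s traffic to your own subnets and security groups. Every name below (model package ARN, IAM role, instance type, resource IDs) is a placeholder assumption, and whether Deepgram ships as a model package or a container image is not specified here.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# All ARNs and IDs below are placeholders; substitute the model details from
# your Deepgram subscription and your own VPC resources.
sagemaker.create_model(
    ModelName="deepgram-stt",
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    PrimaryContainer={
        # Assumed to be delivered as a SageMaker model package; an Image URI
        # would be used here instead if a container is provided directly.
        "ModelPackageName": "arn:aws:sagemaker:us-east-1:111122223333:model-package/deepgram-stt-example",
    },
    # Attach the model's containers to your own subnets and security groups
    # so inference traffic stays inside your network boundary.
    VpcConfig={
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        "Subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)

sagemaker.create_endpoint_config(
    EndpointConfigName="deepgram-stt-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "deepgram-stt",
        "InstanceType": "ml.g5.xlarge",   # assumed instance type
        "InitialInstanceCount": 1,
    }],
)

sagemaker.create_endpoint(
    EndpointName="deepgram-stt-streaming",
    EndpointConfigName="deepgram-stt-config",
)
```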

“Enterprise developers need to build voice AI applications at scale without compromising on speed, accuracy, or security,” said Stephenson. “Our native integration with Amazon SageMaker removes the complexity from deploying real-time voice capabilities, allowing AWS customers to focus on innovation rather than infrastructure. By bringing our state-of-the-art speech models directly into the AWS environment where companies already operate, we’re making it dramatically easier for organisations to create voice experiences that truly transform how they engage with customers and analyse conversations at scale.”

The integration is also backed by a strong relationship with AWS: Deepgram is an AWS Generative AI Competency Partner and has signed a multi-year Strategic Collaboration Agreement (SCA) with AWS to accelerate enterprise adoption.

“Deepgram’s new Amazon SageMaker AI integration makes it simple for customers to bring real-time voice capabilities into their AWS workflows,” said Ankur Mehrotra, General Manager for Amazon SageMaker at AWS. “By offering streaming speech-to-text and text-to-speech directly through Amazon SageMaker endpoints, Deepgram helps developers accelerate innovation while maintaining data security and compliance on AWS. This integration is a great example of how Deepgram is expanding its market reach by making generative AI more accessible and powerful through AWS services, while enabling our mutual customers to build sophisticated voice applications.”
