    Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 2
    In this post, we take an animated short film, Picchu, produced by FuzzyPixel from Amazon Web Services (AWS), prepare training data by extracting key character frames, and fine-tune a character-consistent model for the main character Mayu and her mother, so we can quickly generate storyboard concepts for new sequels.  ( 21 min )
    Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 1
    The art of storyboarding stands as the cornerstone of modern content creation, weaving its essential role through filmmaking, animation, advertising, and UX design. Though traditionally, creators have relied on hand-drawn sequential illustrations to map their narratives, today’s AI foundation models (FMs) are transforming this landscape. FMs like Amazon Nova Canvas and Amazon Nova Reel offer […]  ( 20 min )

    Authenticate Amazon Q Business data accessors using a trusted token issuer
    In this post, we showed how to implement TTI authentication for Amazon Q data accessors. We covered the setup process for both ISVs and enterprises and demonstrated how TTI authentication simplifies the user experience while maintaining security standards.  ( 20 min )
    Unlocking the future of professional services: How Proofpoint uses Amazon Q Business
    Proofpoint has redefined its professional services by integrating Amazon Q Business, a fully managed, generative AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. In this post, we explore how Amazon Q Business transformed Proofpoint’s professional services, detailing its deployment, functionality, and future roadmap.  ( 20 min )
    Enhancing LLM accuracy with Coveo Passage Retrieval on Amazon Bedrock
    In this post, we show how to deploy Coveo’s Passage Retrieval API as an Amazon Bedrock Agents action group to enhance response accuracy, so Coveo users can use their current index to rapidly deploy new generative experiences across their organization.  ( 19 min )
    Train and deploy models on Amazon SageMaker HyperPod using the new HyperPod CLI and SDK
    In this post, we demonstrate how to use the new Amazon SageMaker HyperPod CLI and SDK to streamline the process of training and deploying large AI models through practical examples of distributed training using Fully Sharded Data Parallel (FSDP) and model deployment for inference. The tools provide simplified workflows through straightforward commands for common tasks, while offering flexible development options through the SDK for more complex requirements, along with comprehensive observability features and production-ready deployment capabilities.  ( 27 min )

    Build a serverless Amazon Bedrock batch job orchestration workflow using AWS Step Functions
    In this post, we introduce a flexible and scalable solution that simplifies the batch inference workflow. This solution provides a highly scalable approach to managing your FM batch inference needs, such as generating embeddings for millions of documents or running custom evaluation or completion tasks with large datasets.  ( 20 min )
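    As a hedged illustration (not code from the post), the sketch below shows how a Step Functions task might submit one Amazon Bedrock batch inference job with boto3; the bucket paths, role ARN, and model ID are placeholder assumptions.
```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Submit a batch inference job over a JSONL file of model requests in S3.
# The role ARN, bucket paths, and model ID below are illustrative placeholders.
response = bedrock.create_model_invocation_job(
    jobName="embedding-batch-2025-09-04",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",
    modelId="amazon.titan-embed-text-v2:0",
    inputDataConfig={
        "s3InputDataConfig": {"s3Uri": "s3://my-bucket/batch-input/records.jsonl"}
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/batch-output/"}
    },
)

# A Step Functions workflow would typically poll this status until the job completes.
job_arn = response["jobArn"]
print(bedrock.get_model_invocation_job(jobIdentifier=job_arn)["status"])
```
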
    Natural language-based database analytics with Amazon Nova
    In this post, we explore how natural language database analytics can revolutionize the way organizations interact with their structured data through the power of large language model (LLM) agents. Natural language interfaces to databases have long been a goal in data management. Agents enhance database analytics by breaking down complex queries into explicit, verifiable reasoning steps and enabling self-correction through validation loops that can catch errors, analyze failures, and refine queries until they accurately match user intent and schema requirements.  ( 20 min )
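    To make the agentic text-to-SQL step concrete, here is a minimal, assumption-laden sketch using the Bedrock Converse API with an Amazon Nova model; the model ID, schema, and prompt contract are illustrative rather than taken from the post.
```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

schema = "CREATE TABLE orders (order_id INT, customer_id INT, total DECIMAL, order_date DATE);"
question = "What was the total revenue per customer in July 2025?"

# Ask the model to translate the question into SQL that can then be validated
# and executed; the model ID is an assumption and may differ per Region.
response = runtime.converse(
    modelId="amazon.nova-pro-v1:0",
    system=[{"text": f"You translate questions into SQL for this schema:\n{schema}"}],
    messages=[{"role": "user", "content": [{"text": question}]}],
)

sql = response["output"]["message"]["content"][0]["text"]
print(sql)  # An agent loop would run this SQL, inspect errors, and refine the query.
```
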
    Deploy Amazon Bedrock Knowledge Bases using Terraform for RAG-based generative AI applications
    In this post, we demonstrated how to automate the deployment of Amazon Bedrock Knowledge Bases for RAG applications using Terraform.  ( 20 min )
    Document intelligence evolved: Building and evaluating KIE solutions that scale
    In this blog post, we demonstrate an end-to-end approach for building and evaluating a KIE solution using Amazon Nova models available through Amazon Bedrock. This end-to-end approach encompasses three critical phases: data readiness (understanding and preparing your documents), solution development (implementing extraction logic with appropriate models), and performance measurement (evaluating accuracy, efficiency, and cost-effectiveness). We illustrate this comprehensive approach using the FATURA dataset—a collection of diverse invoice documents that serves as a representative proxy for real-world enterprise data.  ( 23 min )
    Announcing the new cluster creation experience for Amazon SageMaker HyperPod
    With the new cluster creation experience, you can create your SageMaker HyperPod clusters, including the required prerequisite AWS resources, in one click, with prescriptive default values automatically applied. In this post, we explore the new cluster creation experience for Amazon SageMaker HyperPod.  ( 18 min )

    August 2025
    Pupdate: It’s been warm and dry[1], so the boys have enjoyed some nice long walks. Fringe: Edinburgh Fringe was a regular feature of the twenty-teens for us, but then Covid happened. This year was our first time back, and it was great. We saw: They were all fantastic, and I’m not going to pick favourites. […]  ( 13 min )

    SRE Weekly Issue #492
    View on sreweekly.com A message from our sponsor, Observe, Inc.: Built on a scalable, cost-efficient data lake, Observe delivers AI-powered observability at scale. With its context-aware Knowledge Graph and AI SRE, Observe enables Capital One, Topgolf, and Dialpad to ingest hundreds of terabytes daily and resolve issues faster—at drastically lower cost. Learn how Observe is […]  ( 4 min )

    Detect Amazon Bedrock misconfigurations with Datadog Cloud Security
    We’re excited to announce new security capabilities in Datadog Cloud Security that can help you detect and remediate Amazon Bedrock misconfigurations before they become security incidents. This integration helps organizations embed robust security controls and secure their use of the powerful capabilities of Amazon Bedrock by offering three critical advantages: holistic AI security by integrating AI security into your broader cloud security strategy, real-time risk detection through identifying potential AI-related security issues as they emerge, and simplified compliance to help meet evolving AI regulations with pre-built detections.  ( 19 min )
    Set up custom domain names for Amazon Bedrock AgentCore Runtime agents
    In this post, we show you how to create custom domain names for your Amazon Bedrock AgentCore Runtime agent endpoints using CloudFront as a reverse proxy. This solution provides several key benefits: simplified integration for development teams, custom domains that align with your organization, cleaner infrastructure abstraction, and straightforward maintenance when endpoints need updates.  ( 21 min )
    Introducing auto scaling on Amazon SageMaker HyperPod
    In this post, we announce that Amazon SageMaker HyperPod now supports managed node automatic scaling with Karpenter, enabling efficient scaling of SageMaker HyperPod clusters to meet inference and training demands. We dive into the benefits of Karpenter and provide details on enabling and configuring Karpenter in SageMaker HyperPod EKS clusters.  ( 21 min )

    Meet Boti: The AI assistant transforming how the citizens of Buenos Aires access government information with Amazon Bedrock
    This post describes the agentic AI assistant built by the Government of the City of Buenos Aires and the GenAIIC to respond to citizens’ questions about government procedures. The solution consists of two primary components: an input guardrail system that helps prevent the system from responding to harmful user queries and a government procedures agent that retrieves relevant information and generates responses.  ( 22 min )
    Empowering air quality research with secure, ML-driven predictive analytics
    In this post, we provide a data imputation solution using Amazon SageMaker AI, AWS Lambda, and AWS Step Functions. This solution is designed for environmental analysts, public health officials, and business intelligence professionals who need reliable PM2.5 data for trend analysis, reporting, and decision-making. We sourced our sample training dataset from openAFRICA. Our solution predicts PM2.5 values using time-series forecasting.  ( 23 min )
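    For context on what imputation means here, the following is a simple baseline sketch (not the post's SageMaker-based method) that fills PM2.5 gaps with time-weighted interpolation using pandas; the readings are made up.
```python
import pandas as pd

# Hourly PM2.5 readings with gaps; the values below are invented for illustration.
readings = pd.Series(
    [12.0, None, 15.5, None, None, 18.2],
    index=pd.date_range("2025-08-01", periods=6, freq="h"),
    name="pm25",
)

# A simple time-weighted interpolation baseline; the post's solution instead
# trains a time-series forecasting model on SageMaker AI to predict missing values.
imputed = readings.interpolate(method="time")
print(imputed)
```
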
    How Amazon Finance built an AI assistant using Amazon Bedrock and Amazon Kendra to support analysts for data discovery and business insights
    The Amazon Finance technical team develops and manages comprehensive technology solutions that power financial decision-making and operational efficiency while standardizing across Amazon’s global operations. In this post, we explain how the team conceptualized and implemented a solution to these business challenges by harnessing the power of generative AI using Amazon Bedrock and intelligent search with Amazon Kendra.  ( 22 min )

    Mercury foundation models from Inception Labs are now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart
    In this post, we announce that Mercury and Mercury Coder foundation models from Inception Labs are now available through Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. We demonstrate how to deploy these ultra-fast diffusion-based language models that can generate up to 1,100 tokens per second on NVIDIA H100 GPUs, and showcase their capabilities in code generation and tool use scenarios.  ( 24 min )
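    A minimal deployment sketch with the SageMaker Python SDK follows; the JumpStart model ID and instance type are placeholders to verify in the console, not values confirmed by the post.
```python
from sagemaker.jumpstart.model import JumpStartModel

# The model ID below is a hypothetical placeholder; look up the exact Mercury
# model ID in SageMaker JumpStart before deploying.
model = JumpStartModel(model_id="inceptionai-mercury-coder")  # placeholder ID

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.p5.48xlarge",  # H100-based instance; adjust to your quota
    accept_eula=True,
)

print(predictor.predict({"inputs": "Write a Python function that reverses a string."}))
```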

    Learn how Amazon Health Services improved discovery in Amazon search using AWS ML and gen AI
    In this post, we show you how Amazon Health Services (AHS) solved discoverability challenges on Amazon.com search using AWS services such as Amazon SageMaker, Amazon Bedrock, and Amazon EMR. By combining machine learning (ML), natural language processing, and vector search capabilities, we improved our ability to connect customers with relevant healthcare offerings.  ( 22 min )

    SRE Weekly Issue #491
    View on sreweekly.com A message from our sponsor, Spacelift: Infrastructure Security Virtual Event – This Wednesday, August 27 Join the IaCConf community on August 27 for a free virtual event that dives into IaC security best practices and real-world stories. Hear from three speakers on: Taking a Platform Approach to Safer Infrastructure How Tagged, Vetted […]  ( 4 min )

    Enhance Geospatial Analysis and GIS Workflows with Amazon Bedrock Capabilities
    Applying emerging technologies to the geospatial domain offers a unique opportunity to create transformative user experiences and intuitive workstreams for users and organizations to deliver on their missions and responsibilities. In this post, we explore how you can integrate existing systems with Amazon Bedrock to create new workflows that unlock efficiencies and insights. This integration can benefit technical, nontechnical, and leadership roles alike.  ( 23 min )
    Beyond the basics: A comprehensive foundation model selection framework for generative AI
    As the model landscape expands, organizations face complex scenarios when selecting the right foundation model for their applications. In this blog post we present a systematic evaluation methodology for Amazon Bedrock users, combining theoretical frameworks with practical implementation strategies that empower data scientists and machine learning (ML) engineers to make optimal model selections.  ( 20 min )
    Accelerate intelligent document processing with generative AI on AWS
    In this post, we introduce our open source GenAI IDP Accelerator—a tested solution that we use to help customers across industries address their document processing challenges. Automated document processing workflows accurately extract structured information from documents, reducing manual effort. We will show you how this ready-to-deploy solution can help you build those workflows with generative AI on AWS in days instead of months.  ( 20 min )
    Amazon SageMaker HyperPod enhances ML infrastructure with scalability and customizability
    In this post, we introduced three features in SageMaker HyperPod that enhance scalability and customizability for ML infrastructure. Continuous provisioning offers flexible resource provisioning to help you start training and deploying your models faster and manage your cluster more efficiently. With custom AMIs, you can align your ML environments with organizational security standards and software requirements.  ( 20 min )

    Fine-tune OpenAI GPT-OSS models using Amazon SageMaker HyperPod recipes
    This post is the second part of the GPT-OSS series focusing on model customization with Amazon SageMaker AI. In Part 1, we demonstrated fine-tuning GPT-OSS models using open source Hugging Face libraries with SageMaker training jobs, which support distributed multi-GPU and multi-node configurations, so you can spin up high-performance clusters on demand. In this post, […]  ( 24 min )
    Inline code nodes now supported in Amazon Bedrock Flows in public preview
    We are excited to announce the public preview of support for inline code nodes in Amazon Bedrock Flows. With this powerful new capability, you can write Python scripts directly within your workflow, alleviating the need for separate AWS Lambda functions for simple logic. This feature streamlines preprocessing and postprocessing tasks (like data normalization and response formatting), simplifying generative AI application development and making it more accessible across organizations.  ( 18 min )
    Accelerate enterprise AI implementations with Amazon Q Business
    Amazon Q Business offers AWS customers a scalable and comprehensive solution for enhancing business processes across their organization. By carefully evaluating your use cases, following implementation best practices, and using the architectural guidance provided in this post, you can deploy Amazon Q Business to transform your enterprise productivity. The key to success lies in starting small, proving value quickly, and scaling systematically across your organization.  ( 20 min )
    Speed up delivery of ML workloads using Code Editor in Amazon SageMaker Unified Studio
    In this post, we walk through how you can use the new Code Editor and multiple spaces support in SageMaker Unified Studio. The sample solution shows how to develop an ML pipeline that automates the typical end-to-end ML activities to build, train, evaluate, and (optionally) deploy an ML model.  ( 21 min )
    How Infosys Topaz leverages Amazon Bedrock to transform technical help desk operations
    In this blog, we examine the use case of a large energy supplier whose technical help desk agents answer customer calls and support field agents. We use Amazon Bedrock along with capabilities from Infosys Topaz™ to build a generative AI application that can reduce call handling times, automate tasks, and improve the overall quality of technical support.  ( 23 min )

    Create personalized products and marketing campaigns using Amazon Nova in Amazon Bedrock
    Built using Amazon Nova in Amazon Bedrock, The Fragrance Lab represents a comprehensive end-to-end application that illustrates the transformative power of generative AI in retail, consumer goods, advertising, and marketing. In this post, we explore the development of The Fragrance Lab. Our vision was to craft a unique blend of physical and digital experiences that would celebrate creativity, advertising, and consumer goods while capturing the spirit of the French Riviera.  ( 19 min )
    Tyson Foods elevates customer search experience with an AI-powered conversational assistant
    In this post, we explore how Tyson Foods collaborated with the AWS Generative AI Innovation Center to revolutionize their customer interaction through an intuitive AI assistant integrated into their website. The AI assistant was built using Amazon Bedrock.  ( 26 min )
    Enhance AI agents using predictive ML models with Amazon SageMaker AI and Model Context Protocol (MCP)
    In this post, we demonstrate how to enhance AI agents’ capabilities by integrating predictive ML models using Amazon SageMaker AI and the MCP. By using the open source Strands Agents SDK and the flexible deployment options of SageMaker AI, developers can create sophisticated AI applications that combine conversational AI with powerful predictive analytics capabilities.  ( 22 min )
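    The sketch below illustrates the pattern under stated assumptions: a Strands Agents tool that calls a hypothetical SageMaker endpoint ("churn-xgb-endpoint") through the SageMaker runtime; the endpoint name and payload format are illustrative only.
```python
import boto3
from strands import Agent, tool

smr = boto3.client("sagemaker-runtime")

@tool
def predict_churn(features: str) -> str:
    """Return a churn prediction from a SageMaker endpoint given CSV features."""
    # "churn-xgb-endpoint" is a placeholder name for a deployed predictive model.
    resp = smr.invoke_endpoint(
        EndpointName="churn-xgb-endpoint",
        ContentType="text/csv",
        Body=features,
    )
    return resp["Body"].read().decode("utf-8")

# The agent decides when to call the predictive tool during the conversation;
# Strands uses its default Bedrock model unless you override it.
agent = Agent(tools=[predict_churn])
agent("Which of these two customers is more likely to churn? 35,2,99.5 or 62,48,19.9")
```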

    Simplify access control and auditing for Amazon SageMaker Studio using trusted identity propagation
    In this post, we explore how to enable and use trusted identity propagation in Amazon SageMaker Studio, which allows organizations to simplify access management by granting permissions to existing AWS IAM Identity Center identities. The solution demonstrates how to implement fine-grained access controls based on a physical user's identity, maintain detailed audit logs across supported AWS services, and support long-running user background sessions for training jobs.  ( 24 min )
    Benchmarking document information localization with Amazon Nova
    This post demonstrates how to use foundation models (FMs) in Amazon Bedrock, specifically Amazon Nova Pro, to achieve high-accuracy document field localization while dramatically simplifying implementation. We show how these models can precisely locate and interpret document fields with minimal frontend effort, reducing processing errors and manual intervention.  ( 21 min )
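    As a rough illustration of the approach (not the post's exact prompts), the sketch below sends a page image to an Amazon Nova model through the Converse API and asks for field locations; the model ID, file name, and output contract are assumptions.
```python
import boto3

runtime = boto3.client("bedrock-runtime")

with open("invoice_page_1.png", "rb") as f:
    image_bytes = f.read()

prompt = (
    "Locate the invoice number and total amount on this page. "
    "Return JSON with each field name, value, and a normalized bounding box."
)

# The model ID and the output format requested here are illustrative assumptions.
response = runtime.converse(
    modelId="amazon.nova-pro-v1:0",
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": prompt},
        ],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```
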
    How Infosys built a generative AI solution to process oil and gas drilling data with Amazon Bedrock
    We built an advanced RAG solution using Amazon Bedrock and Infosys Topaz™ AI capabilities, tailored for the oil and gas sector. This solution excels in handling multimodal data sources, seamlessly processing text, diagrams, and numerical data while maintaining context and relationships between different data elements. In this post, we provide insights on the solution and walk you through the approaches and architecture patterns we explored during development, such as different chunking strategies, multi-vector retrieval, and hybrid search.  ( 23 min )
    Streamline employee training with an intelligent chatbot powered by Amazon Q Business
    In this post, we explore how to design and implement custom plugins for Amazon Q Business to create an intelligent chatbot that streamlines employee training by retrieving answers from training materials. The solution implements secure API access using Amazon Cognito for user authentication and authorization, processes multiple document formats, and includes features like RAG-enhanced responses and email escalation capabilities through custom plugins.  ( 24 min )

    Create a travel planning agentic workflow with Amazon Nova
    In this post, we explore how to build a travel planning solution using AI agents. The agent uses Amazon Nova, which offers an optimal balance of performance and cost compared to other commercial LLMs. By combining accurate but cost-efficient Amazon Nova models with LangGraph orchestration capabilities, we create a practical travel assistant that can handle complex planning tasks while keeping operational costs manageable for production deployments.  ( 19 min )
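    A minimal LangGraph-plus-Nova sketch follows; the single "plan" node, state shape, and model ID are simplifications assumed for illustration rather than the post's full agent design.
```python
from typing import TypedDict
import boto3
from langgraph.graph import StateGraph, START, END

runtime = boto3.client("bedrock-runtime")

class TripState(TypedDict):
    request: str
    itinerary: str

def plan_itinerary(state: TripState) -> TripState:
    # The Amazon Nova Lite model ID is an assumption; any Nova model could be used.
    resp = runtime.converse(
        modelId="amazon.nova-lite-v1:0",
        messages=[{"role": "user",
                   "content": [{"text": f"Draft a day-by-day itinerary: {state['request']}"}]}],
    )
    return {"request": state["request"],
            "itinerary": resp["output"]["message"]["content"][0]["text"]}

# A real assistant would add nodes for flights, hotels, and budget checks.
graph = StateGraph(TripState)
graph.add_node("plan", plan_itinerary)
graph.add_edge(START, "plan")
graph.add_edge("plan", END)
app = graph.compile()

print(app.invoke({"request": "3 days in Kyoto in November", "itinerary": ""})["itinerary"])
```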

    SRE Weekly Issue #490
    View on sreweekly.com A message from our sponsor, Observe, Inc.: Built on a scalable, cost-efficient data lake, Observe delivers AI-powered observability at scale. With its context-aware Knowledge Graph and AI SRE, Observe enables Capital One, Topgolf, and Dialpad to ingest hundreds of terabytes daily and resolve issues faster—at drastically lower cost. Learn how Observe is […]  ( 4 min )

    Introducing Amazon Bedrock AgentCore Gateway: Transforming enterprise AI agent tool development
    In this post, we discuss Amazon Bedrock AgentCore Gateway, a fully managed service that revolutionizes how enterprises connect AI agents with tools and services by providing a centralized tool server with a unified interface for agent-tool communication. The service offers key capabilities including Security Guard, Translation, Composition, Target extensibility, Infrastructure Manager, and Semantic Tool Selection, while implementing a sophisticated dual-sided security architecture for both inbound and outbound connections.  ( 22 min )
    Build a scalable containerized web application on AWS using the MERN stack with Amazon Q Developer – Part 1
    In a traditional SDLC, a lot of time is spent in the different phases researching approaches that can deliver on requirements: iterating over design changes, writing, testing, and reviewing code, and configuring infrastructure. In this post, we show the experience and the productivity gains you can realize by using Amazon Q Developer as a coding assistant to build a scalable MERN stack web application on AWS.  ( 20 min )
    Optimizing Salesforce’s model endpoints with Amazon SageMaker AI inference components
    In this post, we share how the Salesforce AI Platform team optimized GPU utilization, improved resource efficiency and achieved cost savings using Amazon SageMaker AI, specifically inference components.  ( 20 min )
    Building a RAG chat-based assistant on Amazon EKS Auto Mode and NVIDIA NIMs
    In this post, we demonstrate the implementation of a practical RAG chat-based assistant using a comprehensive stack of modern technologies. The solution uses NVIDIA NIMs for both LLM inference and text embedding services, with the NIM Operator handling their deployment and management. The architecture incorporates Amazon OpenSearch Serverless to store and query high-dimensional vector embeddings for similarity search.  ( 26 min )
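    To show the retrieval side of the stack, here is a hedged sketch of a k-NN query against an OpenSearch Serverless collection; the endpoint, index name, vector field, and embedding dimension are placeholders.
```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

# Collection endpoint, index name, and vector field are placeholder assumptions.
host = "abc123xyz.us-east-1.aoss.amazonaws.com"
auth = AWSV4SignerAuth(boto3.Session().get_credentials(), "us-east-1", "aoss")

client = OpenSearch(
    hosts=[{"host": host, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

# In the described architecture, query_embedding would come from the NIM
# text-embedding service; a fixed vector stands in for it here.
query_embedding = [0.01] * 1024

results = client.search(
    index="rag-chunks",
    body={"size": 5, "query": {"knn": {"embedding": {"vector": query_embedding, "k": 5}}}},
)
for hit in results["hits"]["hits"]:
    print(hit["_source"]["text"])
```
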
    Introducing Amazon Bedrock AgentCore Identity: Securing agentic AI at scale
    In this post, we explore Amazon Bedrock AgentCore Identity, a comprehensive identity and access management service purpose-built for AI agents that enables secure access to AWS resources and third-party tools. The service provides robust identity management features including agent identity directory, agent authorizer, resource credential provider, and resource token vault to help organizations deploy AI agents securely at scale.  ( 22 min )

    Scalable intelligent document processing using Amazon Bedrock Data Automation
    In the blog post Scalable intelligent document processing using Amazon Bedrock, we demonstrated how to build a scalable IDP pipeline using Anthropic foundation models on Amazon Bedrock. Although that approach delivered robust performance, the introduction of Amazon Bedrock Data Automation brings a new level of efficiency and flexibility to IDP solutions. This post explores how Amazon Bedrock Data Automation enhances document processing capabilities and streamlines the automation journey.  ( 18 min )
    Whiteboard to cloud in minutes using Amazon Q, Amazon Bedrock Data Automation, and Model Context Protocol
    We’re excited to share the Amazon Bedrock Data Automation Model Context Protocol (MCP) server, for seamless integration between Amazon Q and your enterprise data. In this post, you will learn how to use the Amazon Bedrock Data Automation MCP server to securely integrate with AWS Services, use Bedrock Data Automation operations as callable MCP tools, and build a conversational development experience with Amazon Q.  ( 19 min )
    Bringing agentic Retrieval Augmented Generation to Amazon Q Business
    In this blog post, we explore how Amazon Q Business is transforming enterprise data interaction through Agentic Retrieval Augmented Generation (RAG).  ( 19 min )
    Empowering students with disabilities: University Startups’ generative AI solution for personalized student pathways
    University Startups, headquartered in Bethesda, MD, was founded in 2020 to empower high school students to expand their education beyond a traditional curriculum. University Startups is focused on special education and related services in school districts throughout the US. In this post, we explain how University Startups uses generative AI technology on AWS to enable students to design a specific plan for their future, either in education or the workforce.  ( 20 min )
    Citations with Amazon Nova understanding models
    In this post, we demonstrate how to prompt Amazon Nova understanding models to cite sources in their responses. We also walk through how to evaluate the responses (and citations) for accuracy.  ( 20 min )

    Securely launch and scale your agents and tools on Amazon Bedrock AgentCore Runtime
    In this post, we explore how Amazon Bedrock AgentCore Runtime simplifies the deployment and management of AI agents.  ( 26 min )
    PwC and AWS Build Responsible AI with Automated Reasoning on Amazon Bedrock
    This post presents how AWS and PwC are developing new reasoning checks that combine deep industry expertise with Automated Reasoning checks in Amazon Bedrock Guardrails to support innovation.  ( 19 min )
    How Amazon scaled Rufus by building multi-node inference using AWS Trainium chips and vLLM
    In this post, Amazon shares how they developed a multi-node inference solution for Rufus, their generative AI shopping assistant, using Amazon Trainium chips and vLLM to serve large language models at scale. The solution combines a leader/follower orchestration model, hybrid parallelism strategies, and a multi-node inference unit abstraction layer built on Amazon ECS to deploy models across multiple nodes while maintaining high performance and reliability.  ( 20 min )
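    The post describes Amazon's internal Trainium-based solution; as a generic point of reference only, the sketch below shows how vLLM expresses tensor and pipeline parallelism for multi-accelerator serving, with an illustrative model and parallelism degrees.
```python
from vllm import LLM, SamplingParams

# A generic vLLM sketch: shard a large model across accelerators with tensor
# parallelism and across nodes with pipeline parallelism. The model name and
# parallelism degrees are illustrative, not Rufus's internal configuration.
llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",
    tensor_parallel_size=8,      # shards within a node
    pipeline_parallel_size=2,    # shards across nodes (requires a Ray cluster)
)

outputs = llm.generate(
    ["Suggest a gift for someone who loves hiking."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```
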
    Build an intelligent financial analysis agent with LangGraph and Strands Agents
    This post describes an approach of combining three powerful technologies to illustrate an architecture that you can adapt and build upon for your specific financial analysis needs: LangGraph for workflow orchestration, Strands Agents for structured reasoning, and Model Context Protocol (MCP) for tool integration.  ( 23 min )
    Amazon Bedrock AgentCore Memory: Building context-aware agents
    In this post, we explore Amazon Bedrock AgentCore Memory, a fully managed service that enables AI agents to maintain both immediate and long-term knowledge, transforming one-off conversations into continuous, evolving relationships between users and AI agents. The service eliminates complex memory infrastructure management while providing full control over what AI agents remember, offering powerful capabilities for maintaining both short-term working memory and long-term intelligent memory across sessions.  ( 23 min )
    Build a conversational natural language interface for Amazon Athena queries using Amazon Nova
    In this post, we explore an innovative solution that uses Amazon Bedrock Agents, powered by Amazon Nova Lite, to create a conversational interface for Athena queries. We use AWS Cost and Usage Reports (AWS CUR) as an example, but this solution can be adapted for other databases you query using Athena. This approach democratizes data access while preserving the powerful analytical capabilities of Athena, so you can interact with your data using natural language.  ( 22 min )
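    A minimal client-side sketch follows, assuming an already-configured Bedrock agent with an Athena action group; the agent ID and alias ID are placeholders.
```python
import uuid
import boto3

agents_runtime = boto3.client("bedrock-agent-runtime")

# Agent and alias IDs are placeholders; the agent's action group runs the
# generated SQL against Athena over the AWS CUR tables.
response = agents_runtime.invoke_agent(
    agentId="AGENT123456",
    agentAliasId="ALIAS123456",
    sessionId=str(uuid.uuid4()),
    inputText="What were my top five services by cost last month?",
)

# The response is an event stream; concatenate the returned text chunks.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```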

    Train and deploy AI models at trillion-parameter scale with Amazon SageMaker HyperPod support for P6e-GB200 UltraServers
    In this post, we review the technical specifications of P6e-GB200 UltraServers, discuss their performance benefits, and highlight key use cases. We then walk through how to purchase UltraServer capacity through flexible training plans and get started using UltraServers with SageMaker HyperPod.  ( 18 min )
    How Indegene’s AI-powered social intelligence for life sciences turns social media conversations into insights
    This post explores how Indegene’s Social Intelligence Solution uses advanced AI to help life sciences companies extract valuable insights from digital healthcare conversations. Built on AWS technology, the solution addresses the growing preference of HCPs for digital channels while overcoming the challenges of analyzing complex medical discussions at scale.  ( 26 min )
    Unlocking enhanced legal document review with Lexbe and Amazon Bedrock
    In this post, Lexbe, a legal document review software company, demonstrates how they integrated Amazon Bedrock and other AWS services to transform their document review process, enabling legal professionals to instantly query and extract insights from vast volumes of case documents using generative AI. Through collaboration with AWS, Lexbe achieved significant improvements in recall rates, reaching up to 90% by December 2024, and developed capabilities for broad human-style reporting and deep automated inference across multiple languages.  ( 19 min )
    Automate AIOps with SageMaker Unified Studio Projects, Part 2: Technical implementation
    In this post, we focus on implementing this architecture with step-by-step guidance and reference code. We provide a detailed technical walkthrough that addresses the needs of two critical personas in the AI development lifecycle: the administrator who establishes governance and infrastructure through automated templates, and the data scientist who uses SageMaker Unified Studio for model development without managing the underlying infrastructure.  ( 24 min )
    Automate AIOps with Amazon SageMaker Unified Studio projects, Part 1: Solution architecture
    This post presents architectural strategies and a scalable framework that helps organizations manage multi-tenant environments, automate consistently, and embed governance controls as they scale their AI initiatives with SageMaker Unified Studio.  ( 24 min )

    Demystifying Amazon Bedrock Pricing for a Chatbot Assistant
    In this post, we'll look at Amazon Bedrock pricing through the lens of a practical, real-world example: building a customer service chatbot. We'll break down the essential cost components, walk through capacity planning for a mid-sized call center implementation, and provide detailed pricing calculations across different foundation models.  ( 20 min )
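    As a hedged back-of-the-envelope example (the per-token prices below are assumptions, not current list prices), capacity planning for on-demand inference reduces to simple arithmetic:
```python
# Back-of-the-envelope math for on-demand token pricing; the per-token prices
# below are placeholders -- always take current numbers from the Bedrock pricing page.
price_per_1k_input = 0.003   # USD, assumed
price_per_1k_output = 0.015  # USD, assumed

conversations_per_day = 5_000
input_tokens_per_conv = 1_500    # prompt + retrieved context + history
output_tokens_per_conv = 400

daily_cost = conversations_per_day * (
    input_tokens_per_conv / 1_000 * price_per_1k_input
    + output_tokens_per_conv / 1_000 * price_per_1k_output
)
print(f"Estimated on-demand cost: ${daily_cost:,.2f}/day, ${daily_cost * 30:,.2f}/month")
```
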
    Fine-tune OpenAI GPT-OSS models on Amazon SageMaker AI using Hugging Face libraries
    Released on August 5, 2025, OpenAI’s GPT-OSS models, gpt-oss-20b and gpt-oss-120b, are now available on AWS through Amazon SageMaker AI and Amazon Bedrock. In this post, we walk through the process of fine-tuning a GPT-OSS model in a fully managed training environment using SageMaker AI training jobs.  ( 24 min )
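    A condensed sketch of launching such a training job with the SageMaker Hugging Face estimator follows; the container versions, instance type, role ARN, and hyperparameters are assumptions to adapt from the post.
```python
from sagemaker.huggingface import HuggingFace

# Versions, instance type, and hyperparameters are assumptions; check the post
# and the SageMaker documentation for supported framework/container versions.
estimator = HuggingFace(
    entry_point="train.py",            # your fine-tuning script using transformers + peft
    source_dir="scripts",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.p5.48xlarge",
    instance_count=1,
    transformers_version="4.49",
    pytorch_version="2.5",
    py_version="py311",
    hyperparameters={"model_id": "openai/gpt-oss-20b", "epochs": 1, "lr": 2e-4},
)

estimator.fit({"train": "s3://my-bucket/gpt-oss-finetune/train/"})
```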

    SRE Weekly Issue #489
    View on sreweekly.com A message from our sponsor, Observe, Inc.: Observe‘s free Masterclass in Observability at Scale is coming on September 4th at 10am Pacific! We’ll explore how to architect for observability at scale – from streaming telemetry and open data lakes to AI agents that proactively instrument your code and surface insights. Learn more […]  ( 4 min )

    The DIVA logistics agent, powered by Amazon Bedrock
    In this post, we discuss how DTDC and ShellKode used Amazon Bedrock to build DIVA 2.0, a generative AI-powered logistics agent.  ( 21 min )
    Automate enterprise workflows by integrating Salesforce Agentforce with Amazon Bedrock Agents
    This post explores a practical collaboration, integrating Salesforce Agentforce with Amazon Bedrock Agents and Amazon Redshift, to automate enterprise workflows.  ( 25 min )
    How Amazon Bedrock powers next-generation account planning at AWS
    In this post, we share how we built Account Plan Pulse, a generative AI tool designed to streamline and enhance the account planning process, using Amazon Bedrock. Pulse reduces review time and provides actionable account plan summaries for ease of collaboration and consumption, helping AWS sales teams better serve our customers.  ( 19 min )

    Pioneering AI workflows at scale: A deep dive into Asana AI Studio and Amazon Q index collaboration
    Today, we’re excited to announce the integration of Asana AI Studio with Amazon Q index, bringing generative AI directly into your daily workflows. In this post, we explore how Asana AI Studio and Amazon Q index transform enterprise efficiency through intelligent workflow automation and enhanced data accessibility.  ( 20 min )
    Responsible AI for the payments industry – Part 1
    This post explores the unique challenges facing the payments industry in scaling AI adoption, the regulatory considerations that shape implementation decisions, and practical approaches to applying responsible AI principles. In Part 2, we provide practical implementation strategies to operationalize responsible AI within your payment systems.  ( 21 min )
    Responsible AI for the payments industry – Part 2
    In Part 1 of our series, we explored the foundational concepts of responsible AI in the payments industry. In this post, we discuss the practical implementation of responsible AI frameworks.  ( 19 min )
    Process multi-page documents with human review using Amazon Bedrock Data Automation and Amazon SageMaker AI
    In this post, we show how to process multi-page documents with a human review loop using Amazon Bedrock Data Automation and Amazon SageMaker AI.  ( 19 min )
2025-09-04T20:17:44.113Z osmosfeed 1.15.1