Datadog has announced the general availability of LLM Observability, a new product designed to help AI application developers and machine learning engineers monitor, improve, and secure large language model (LLM) applications. The product addresses the challenges of bringing generative AI features to production, so companies can deploy their AI applications with confidence. With LLM Observability, users gain visibility into each step of the LLM chain, can identify the root causes of errors, and can optimize performance and cost. The product also provides out-of-the-box quality and safety evaluations to mitigate security and privacy risks.
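To make "visibility into each step of the LLM chain" concrete, the sketch below shows the general idea of step-level tracing: each stage of a chain (retrieval, generation, and so on) is wrapped in a span that records its name, latency, and error status, so a failing or slow step can be pinpointed. This is an illustrative stand-in, not Datadog's SDK; the `trace_step` helper and `TRACE` buffer are hypothetical names invented for this example.

```python
import time
from contextlib import contextmanager

# In-memory trace buffer standing in for an observability backend
# (hypothetical; a real product would export spans to a service).
TRACE = []

@contextmanager
def trace_step(name):
    """Record one step of an LLM chain: name, latency, error status."""
    start = time.perf_counter()
    record = {"step": name, "error": None}
    try:
        yield record
    except Exception as exc:
        record["error"] = repr(exc)  # capture the failing step's error
        raise
    finally:
        record["latency_ms"] = (time.perf_counter() - start) * 1000
        TRACE.append(record)

def answer(question):
    """A toy two-step chain: retrieve context, then generate a reply."""
    with trace_step("retrieve"):
        docs = ["LLM Observability traces each step of an LLM chain."]
    with trace_step("generate"):
        return f"Based on {len(docs)} document(s): {docs[0]}"
```

After calling `answer(...)`, `TRACE` holds one record per chain step, which is the shape of data that lets a monitoring tool attribute latency or an error to a specific stage rather than to the chain as a whole.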
In an increasingly competitive market, companies are racing to release generative AI features, but the complexity of LLM chains, their non-deterministic behavior, and their security risks pose significant challenges. Datadog's LLM Observability offers a comprehensive solution, providing prompt and response clustering, seamless integration with Datadog Application Performance Monitoring (APM), and out-of-the-box evaluation and sensitive data scanning capabilities. This enables companies to improve the performance, accuracy, and security of generative AI applications while keeping sensitive data private.
The product has already seen success with companies like WHOOP and AppFolio, which are using LLM Observability to evaluate performance, monitor production, and improve the quality of their AI applications. As the adoption of LLM-based technologies continues to grow, Datadog's LLM Observability provides a much-needed way for teams to manage and understand performance, detect drift or bias, and resolve issues before they impact the business or the end-user experience.