Datadog (AI Observability)
Datadog provides an extensive monitoring and observability platform with integrated features for tracking the health and performance of AI and LLM applications.
Grafana Labs provides a comprehensive open-source observability stack that can be adapted and extended for LLM observability, including tracing, logging, and metrics.
Weights & Biases Prompts is a module within the W&B MLOps platform for managing, evaluating, and monitoring LLM prompts and models.
LangSmith is a platform for debugging, monitoring, testing, and evaluating LLM applications, built by the creators of the LangChain framework.
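The core idea behind tracing tools like LangSmith can be sketched with a minimal decorator that records each call's inputs, output, and latency. This is a hypothetical illustration of the concept, not the LangSmith API; `TRACES`, `traced`, and `fake_llm` are invented names for the sketch.

```python
import functools
import time

# In-memory trace store; a real platform would ship these to a backend.
TRACES = []

def traced(fn):
    """Record inputs, output, and latency of each decorated call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"echo: {prompt}"

fake_llm("hello")
print(TRACES[0]["name"], TRACES[0]["latency_s"] >= 0)
```

Capturing traces at the call boundary like this is what lets such platforms replay, diff, and evaluate individual LLM invocations after the fact.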
Galileo offers an MLOps platform focused on data quality for AI, with specific tools for evaluating and monitoring LLMs.
Tristan offers an enterprise-grade LLM operations platform with tools for prompt management, monitoring, and evaluation.
TruLens is an open-source framework for evaluating and explaining LLM applications, helping developers build high-quality, trustworthy apps.
DeepEval is an open-source Python library for unit testing and evaluating LLM applications, focusing on reliability and quality.