Session: Practical Generative AI Observability: Metrics and Tools for Real-Time Monitoring

As generative AI systems power increasingly critical applications, ensuring their reliability, fairness, and performance demands robust observability frameworks. This presentation examines the emerging discipline of Generative AI Observability through a deep dive into strategies, methods, and best practices for real-time monitoring of generative systems. Attendees will learn techniques for tracking key performance indicators such as output coherence, accuracy, and latency, and will gain insight into detecting and mitigating issues such as bias, hallucination, and model drift. We’ll explore state-of-the-art observability tools designed for generative AI, including those tailored to large language models, RAG frameworks, and multimodal systems. The discussion will cover innovations in monitoring every component of the pipeline, from data collection and preprocessing to inference execution and output delivery, as well as the integration of observability into LLMOps workflows for continuous improvement. The talk will walk through real-world cases showing how leading organizations maintain reliability, transparency, and ethical compliance in their generative AI solutions. By the end of the session, participants will have actionable knowledge for building and maintaining observability frameworks that improve system robustness and make their generative AI applications trustworthy and accountable.
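To make the monitoring ideas concrete, below is a minimal, self-contained Python sketch of the kind of instrumentation the session covers: it wraps a model call, records per-request latency, and raises a crude drift alert when the rolling mean output length leaves an expected band. The generate() stub, window size, and alert thresholds are illustrative assumptions, not tools or values drawn from the session itself.

    # Minimal sketch of real-time generative AI monitoring.
    # The generate() stub, window size, and thresholds are illustrative.
    import time
    import statistics
    from collections import deque

    LATENCY_WINDOW = deque(maxlen=100)  # rolling window of recent latencies (s)
    LENGTH_WINDOW = deque(maxlen=100)   # rolling window of output lengths (chars)

    def generate(prompt: str) -> str:
        """Placeholder for a real model call (e.g., an LLM API request)."""
        return "stub response for: " + prompt

    def monitored_generate(prompt: str) -> str:
        start = time.perf_counter()
        output = generate(prompt)
        latency = time.perf_counter() - start

        LATENCY_WINDOW.append(latency)
        LENGTH_WINDOW.append(len(output))

        # Simple drift signal: alert when the rolling mean output length
        # deviates sharply from an expected band (thresholds are assumptions).
        if len(LENGTH_WINDOW) == LENGTH_WINDOW.maxlen:
            mean_len = statistics.mean(LENGTH_WINDOW)
            if mean_len < 10 or mean_len > 2000:
                print(f"ALERT: possible output drift (mean length {mean_len:.0f})")

        print(f"latency={latency * 1000:.1f}ms output_chars={len(output)}")
        return output

    if __name__ == "__main__":
        monitored_generate("Summarize the quarterly report.")

In practice the print statements would be replaced by emission to a metrics backend, and the length heuristic by richer signals such as embedding-distance drift or groundedness scores, but the wrap-measure-alert structure stays the same.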

Presenters: