Session: Hallucinations, Prompt Manipulations, and Mitigating Risk: Putting Guardrails Around Your LLM-Powered Applications
Large Language Models (LLMs) offer a future where we can interact with computers as naturally as we interact with other humans. As amazing as this potential is, enterprise adoption is still hindered by the risk of hallucinations: the tendency of LLMs to respond inaccurately while sounding completely confident. In addition, LLMs on their own can be easily manipulated by end-user prompts into responses that damage trust or create liability. For LLM-powered applications to be fully utilized in real-world production settings, these risks must be mitigated through effective guardrail strategies.
Attendees of this talk will learn about:
- Pre-processing techniques for protecting LLMs against end-user prompt manipulation
- Approaches for evaluating LLM outputs, including LLM-as-a-judge (a minimal sketch follows this list)
- An overview of open source guardrail frameworks for mitigating risk
- How to put these lessons into action through a live demonstration
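To make the LLM-as-a-judge bullet concrete, here is a minimal sketch of the pattern in Python. It assumes a hypothetical `call_llm` helper standing in for whatever chat-completion client you use, and the judging prompt and pass/fail criteria are illustrative, not the specific approach presented in the session.

```python
# Minimal LLM-as-a-judge sketch. `call_llm` is a hypothetical placeholder for
# whatever chat-completion client your stack provides; swap in your own.

JUDGE_PROMPT = """You are a strict evaluator. Given a question and an answer,
reply with only "PASS" if the answer is factually grounded and on-topic,
or "FAIL" otherwise.

Question: {question}
Answer: {answer}
"""


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your chosen LLM and return its text reply."""
    raise NotImplementedError("Wire this to your model provider of choice.")


def judge_answer(question: str, answer: str) -> bool:
    """Ask a second model call to grade the first model's answer (LLM-as-a-judge)."""
    verdict = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return verdict.strip().upper().startswith("PASS")


def guarded_answer(question: str) -> str:
    """Generate an answer, then return it only if the judge approves."""
    answer = call_llm(question)
    if judge_answer(question, answer):
        return answer
    return "Sorry, I can't provide a confident answer to that."
```

The key idea is that a second model call grades the first model's output against explicit criteria before anything is returned to the end user; the open source guardrail frameworks covered in the session package this and similar checks into reusable components.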