Talk
Intermediate

Building robust LLM observability and improving security posture with OpenLit and OpenTelemetry

Review Pending

LLMs are game-changing in today's world, but understanding their behavior is just as crucial, because it is what lets us dramatically improve their output. Simply observing the input and the output, the "prompt" and the "response", is no longer sufficient for building robust and dependable LLM-powered applications.

LLM observability refers to a more in-depth approach to monitoring large language models: it captures not only the basic outputs but also metrics, traces, and patterns of behavior. Without observability, identifying and fixing anomalies, performance issues, sensitive data leaks, and inaccuracies becomes difficult.
In this talk, we will discuss how to build complete end-to-end observability for LLMs using OpenLit.
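A minimal sketch of what this can look like in practice, assuming OpenLit's Python SDK and its openlit.init() entry point; the OTLP endpoint and the OpenAI call here are illustrative, not prescriptive:

    # Minimal sketch: auto-instrument LLM calls with OpenLit, exporting
    # traces and metrics over OTLP to an OpenTelemetry collector.
    import openlit
    from openai import OpenAI

    # Illustrative endpoint; point this at your own OTel collector.
    openlit.init(otlp_endpoint="http://127.0.0.1:4318")

    client = OpenAI()
    # Once initialized, calls like this are traced automatically:
    # prompt, response, model, token counts, and latency are captured
    # as spans and metrics without extra instrumentation code.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )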

Outcomes LLM observability delivers to enhance performance:
Response latency: How quickly the model responds to user queries.
Data leak detection: Are PII and other sensitive data included in responses?
Classified data request rate: Effectiveness of guardrails that exclude classified data, plus tracking how often such queries occur and who issues them.
Token usage: Tracking token consumption to manage operational costs (see the sketch after this list).
Prompt effectiveness: Evaluating how well crafted prompts generate the desired outputs.
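Several of these signals can also be recorded with plain OpenTelemetry instruments when finer control is needed. Below is a minimal sketch, assuming a configured MeterProvider; the instrument names and the observed_completion wrapper are hypothetical, not an OpenLit-defined schema:

    # Hand-rolled OpenTelemetry metrics for token usage and latency.
    import time
    from opentelemetry import metrics

    meter = metrics.get_meter("llm.app")

    token_counter = meter.create_counter(
        "llm.tokens.used", unit="token",
        description="Tokens consumed per completion",
    )
    latency_hist = meter.create_histogram(
        "llm.response.latency", unit="s",
        description="End-to-end completion latency",
    )

    def observed_completion(client, **kwargs):
        # Wrap a chat completion call, recording latency and token usage
        # tagged with the model name so costs can be broken down later.
        start = time.monotonic()
        response = client.chat.completions.create(**kwargs)
        attrs = {"llm.model": kwargs.get("model", "unknown")}
        latency_hist.record(time.monotonic() - start, attrs)
        token_counter.add(response.usage.total_tokens, attrs)
        return response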

LLMs and AI applications are now used almost everywhere, and many products build modern solutions on top of them. Cloud Native professionals therefore stand to benefit from understanding these observability techniques and applying them in their own projects, ultimately improving both performance and security.

Which track are you applying for?
Open Data Devroom (Knowledge Commons: Open Hardware, Open Science, Open Data, etc.)

Approvability: 0 %
Approvals: 0
Rejections: 1
Not Sure: 0

Does not fit the open data devroom.

Reviewer #1
Rejected