LLM insights (beta)

Beyond our native LLM observability product, we've teamed up with various LLM platforms to track metrics for LLM apps. This makes it easy to answer questions like:

  • What are my LLM costs by customer, model, and in total?
  • How many of my users are interacting with my LLM features?
  • Are there generation latency spikes?
  • Does interacting with LLM features correlate with other metrics (retention, usage, revenue, etc.)?
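
For example, the first question above can be answered by querying your captured generation events, either as a SQL insight in PostHog or through the query API. Below is a minimal sketch, assuming your integration captures $ai_generation events with $ai_model and $ai_total_cost_usd properties (the schema PostHog's native LLM observability uses; third-party integrations may use different event and property names). The host, project ID, and API key are placeholders.

```python
import requests

# Placeholders: substitute your own PostHog host, project ID, and personal API key.
POSTHOG_HOST = "https://us.posthog.com"
PROJECT_ID = "12345"
PERSONAL_API_KEY = "phx_your_personal_api_key"

# HogQL query summing generation cost per model. Assumes $ai_generation
# events carry a $ai_total_cost_usd property; adjust the names to match
# the events your integration actually captures.
query = """
    SELECT properties.$ai_model AS model,
           sum(toFloat(properties.$ai_total_cost_usd)) AS total_cost_usd
    FROM events
    WHERE event = '$ai_generation'
    GROUP BY model
    ORDER BY total_cost_usd DESC
"""

response = requests.post(
    f"{POSTHOG_HOST}/api/projects/{PROJECT_ID}/query",
    headers={"Authorization": f"Bearer {PERSONAL_API_KEY}"},
    json={"query": {"kind": "HogQLQuery", "query": query}},
)
response.raise_for_status()

# Each result row is [model, total_cost_usd]
for model, cost in response.json()["results"]:
    print(f"{model}: ${cost:.2f}")
```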

Supported integrations

Currently, we support integrations for the following platforms:

  • Langfuse
  • Helicone
  • Traceloop
  • Keywords AI
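
For example, Helicone's integration forwards data to PostHog through request headers: alongside the usual Helicone-Auth header you set when configuring your LLM client, you add Helicone-Posthog-Key and Helicone-Posthog-Host with your PostHog API key and host. Below is a minimal sketch for an OpenAI client routed through Helicone's proxy; the environment variable names are placeholders.

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # route requests through Helicone's proxy
    default_headers={
        # Standard Helicone authentication
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Forward LLM metrics to PostHog
        "Helicone-Posthog-Key": os.environ["POSTHOG_API_KEY"],
        "Helicone-Posthog-Host": os.environ["POSTHOG_HOST"],  # e.g. https://us.i.posthog.com
    },
)

# Requests made with this client are logged by Helicone and surfaced in PostHog.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```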

Dashboard templates

Once you've installed an integration, dashboard templates help you quickly set up relevant insights. Templates are available for Langfuse, Helicone, Traceloop, and Keywords AI.

To create your own dashboard from a template:

  1. Go to the dashboards tab in PostHog.
  2. Click the New dashboard button in the top right.
  3. Select LLM metrics – [name of the integration you installed] from the list of templates.
