What's new on LaunchDarkly

Release automation, feature flags, experimentation & analytics, and AI engineering—on a single platform

changelog
September 04, 2025

Changelog: Relative difference charts, custom metric filters, AI Config approval updates, and more

Relative difference charts

We’ve added relative difference charts to Guarded Releases, making it easier to see how metrics shift during a rollout. Instead of abstract probabilities, you’ll now see a direct comparison of your metric across control and treatment variations, answering the real question: “Did errors go up compared to before?”

This view is more intuitive and familiar to analysts and engineers, and it makes Release Guardian rollouts easier to understand and explain to your team.
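For example, if the control variation shows a 2.0% error rate and the treatment variation shows 2.2%, a relative difference chart would report roughly a +10% increase in errors, i.e. (2.2 − 2.0) / 2.0, assuming the conventional relative-difference calculation.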



Filters for custom metric events

We’ve made it simple to measure what matters. You can now filter custom metric events by event properties or context attributes directly in LaunchDarkly. For example, instead of instrumenting a separate error event for each page type, you can send a single error_event with page_type as a property, then define distinct metrics per page right in the LaunchDarkly app.

This update simplifies instrumentation, speeds up metric creation, and adds flexibility across both Release Guardian and Experimentation. Check out the documentation or blog post to get started.
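For instance, with the LaunchDarkly JavaScript client SDK, the single error event above could be sent like this (a minimal sketch; the client-side ID, context, and property value are placeholders):

```typescript
import * as LDClient from 'launchdarkly-js-client-sdk';

// Initialize the client with your environment's client-side ID and an
// evaluation context (both values below are placeholders).
const client = LDClient.initialize('your-client-side-id', {
  kind: 'user',
  key: 'user-key-123',
});
await client.waitForInitialization();

// Send one generic error event with the page type attached as an event
// property. Distinct per-page metrics can then be defined in LaunchDarkly
// by filtering on page_type, instead of instrumenting an event per page.
client.track('error_event', { page_type: 'checkout' });
```

The metrics themselves are then created in the LaunchDarkly app, where the new filters narrow each one to, say, events whose page_type property equals checkout.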


Smarter data filtering in OTel and Observability SDKs

Sending all observability data can be expensive and noisy. With inExperiment filtering, you can now configure your OTel collector or Observability SDKs to send data only for flags running a Guarded Release or Experiment. This reduces cost, avoids dual-shipping unnecessary data, and makes monitoring more efficient at scale.

Recommended collector configuration is available in our documentation.
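The documentation above has the recommended setup; as a rough sketch of the same idea on the SDK side, an OpenTelemetry span processor could forward only spans annotated as belonging to a guarded release or experiment (the feature_flag.in_experiment attribute name below is hypothetical, not an official convention):

```typescript
import { Context } from '@opentelemetry/api';
import {
  BatchSpanProcessor,
  ReadableSpan,
  Span,
  SpanProcessor,
} from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Forwards spans to an inner processor only when they carry a
// flag-evaluation attribute marking them as in an experiment or
// guarded release. The attribute name is illustrative.
class InExperimentFilterProcessor implements SpanProcessor {
  constructor(private readonly inner: SpanProcessor) {}

  onStart(span: Span, parentContext: Context): void {
    this.inner.onStart(span, parentContext);
  }

  onEnd(span: ReadableSpan): void {
    if (span.attributes['feature_flag.in_experiment'] === true) {
      this.inner.onEnd(span);
    }
  }

  forceFlush(): Promise<void> {
    return this.inner.forceFlush();
  }

  shutdown(): Promise<void> {
    return this.inner.shutdown();
  }
}

// Wrap the usual OTLP export pipeline with the filter, then register
// this processor with your tracer provider as usual.
const processor = new InExperimentFilterProcessor(
  new BatchSpanProcessor(new OTLPTraceExporter()),
);
```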


Approvals updates for AI Configs

You can now require reviews and approvals for changes to AI Config variations and targeting rules before they go live. Approvals help keep AI behavior safe, transparent, and aligned with team standards. They make it easier to collaborate with teammates, add accountability to high-stakes updates, and stay informed with notifications on requests and decisions.

Get started with AI Config Approvals in our documentation.


Multiple comparison corrections

Experimentation results are now more statistically rigorous. LaunchDarkly automatically applies multiple comparison corrections when analyzing experiments with many metrics or variations. This reduces false positives and ensures your reported wins are reliable even in complex tests. Now teams can move faster with confidence, knowing their decisions are backed by best-practice statistical methods.

Get started with multiple comparison corrections in our guide.
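The guide describes the exact procedure LaunchDarkly applies; as a generic illustration of how such corrections work, a Benjamini-Hochberg style adjustment of per-metric p-values looks roughly like this (a sketch of one common method, not necessarily the one LaunchDarkly uses):

```typescript
// Benjamini-Hochberg adjusted p-values for m simultaneous comparisons.
// A generic sketch of one common multiple-comparison correction.
function benjaminiHochberg(pValues: number[]): number[] {
  const m = pValues.length;
  // Sort indices by ascending p-value.
  const order = pValues
    .map((p, i) => [p, i] as const)
    .sort((a, b) => a[0] - b[0]);

  const adjusted = new Array<number>(m);
  let runningMin = 1;
  // Walk from the largest p-value down, enforcing monotonicity.
  for (let rank = m; rank >= 1; rank--) {
    const [p, originalIndex] = order[rank - 1];
    runningMin = Math.min(runningMin, (p * m) / rank);
    adjusted[originalIndex] = runningMin;
  }
  return adjusted;
}

// With several metrics tested at once, a raw p-value of 0.03 may no longer
// clear a 0.05 threshold after adjustment, which is how corrections
// reduce false positives.
console.log(benjaminiHochberg([0.001, 0.03, 0.04, 0.2, 0.5]));
```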


Other improvements

  • Added a new “System” theme option that auto-matches your operating system preference
  • Introduced Scoped Clients in the Go SDK for more flexible client management
  • Released Experimentation demo sandbox for safer, hands-on testing
  • Optionally hide child spans in the Traces view
  • Configurable visualization types in Logs and Traces views
  • Display badge for external data sources on metrics list

Bug fixes

  • Fixed display of roles with nil base permissions in UI
  • Fixed a bug with newly created rules in AI Configs targeting
  • Fixed display of empty stack traces
  • Fixed unintentional text wrapping on clone flag dialog