We’ve added relative difference charts to Guarded Releases, making it easier to see how metrics shift during a rollout. Instead of abstract probabilities, you’ll now see a direct comparison of your metric across control and treatment variations, answering the real question: “Did errors go up compared to before?”
This view is more intuitive and familiar to analysts and engineers, and it makes Release Guardian rollouts easier to understand and explain to your team.
We’ve made it simple to measure what matters. You can now filter custom metric events by event properties or context attributes directly in LaunchDarkly. For example, instead of instrumenting a separate error event for each page type, you can send a single error_event with page_type as a property and then define distinct metrics per page right in the LaunchDarkly app.
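As a rough sketch of what that instrumentation could look like with the LaunchDarkly Python server-side SDK (the SDK key, context, and page_type value below are placeholders), a single track call can carry the page type as event data:

```python
import ldclient
from ldclient import Context
from ldclient.config import Config

# Initialize the SDK once at application startup.
# "sdk-key-123" is a placeholder for your server-side SDK key.
ldclient.set_config(Config("sdk-key-123"))
client = ldclient.get()

# The context the error occurred for.
context = Context.builder("user-key-abc").kind("user").name("Sandy").build()

# One shared event key; the page type travels as an event property,
# so separate metrics can be defined per page in the LaunchDarkly app.
client.track("error_event", context, data={"page_type": "checkout"})
```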
This update simplifies instrumentation, speeds up metric creation, and adds flexibility across both Release Guardian and Experimentation. Check out the documentation or blog post to get started.
Sending all observability data can be expensive and noisy. With inExperiment filtering, you can now configure your OTel collector or Observability SDKs to send data only for flags running a Guarded Release or Experiment. This reduces cost, avoids dual-shipping unnecessary data, and makes monitoring more efficient at scale.
Recommended collector configuration is available in our documentation.
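The documented collector configuration is the recommended path. Purely as an illustrative sketch of the same idea on the SDK side, a custom OpenTelemetry span processor could forward only spans that carry an experiment marker attribute; the attribute name launchdarkly.in_experiment below is an assumption for illustration, not the documented one:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor, TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter


class InExperimentFilter(SpanProcessor):
    """Forwards spans to a delegate processor only when they carry
    the experiment marker attribute; everything else is dropped."""

    def __init__(self, delegate: SpanProcessor, attribute: str = "launchdarkly.in_experiment"):
        self._delegate = delegate
        self._attribute = attribute

    def on_end(self, span: ReadableSpan) -> None:
        if span.attributes and span.attributes.get(self._attribute):
            self._delegate.on_end(span)

    def shutdown(self) -> None:
        self._delegate.shutdown()

    def force_flush(self, timeout_millis: int = 30000) -> bool:
        return self._delegate.force_flush(timeout_millis)


provider = TracerProvider()
# Only spans flagged as part of a Guarded Release or Experiment reach the exporter.
provider.add_span_processor(InExperimentFilter(BatchSpanProcessor(ConsoleSpanExporter())))
trace.set_tracer_provider(provider)
```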
You can now require reviews and approvals for changes to AI Config variations and targeting rules before they go live. Approvals help keep AI behavior safe, transparent, and aligned with team standards. They make it easier to collaborate with teammates, add accountability to high-stakes updates, and stay informed with notifications on requests and decisions.
Get started with AI Config Approvals in our documentation.
Experimentation results are now more statistically rigorous. LaunchDarkly automatically applies multiple comparison corrections when analyzing experiments with many metrics or variations. This reduces false positives and ensures your reported wins are reliable even in complex tests. Now teams can move faster with confidence, knowing their decisions are backed by best-practice statistical methods.
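LaunchDarkly applies the correction for you, and the guide describes the exact method used. Purely to illustrate the general idea, here is a small sketch of one common approach, a Benjamini-Hochberg adjustment, which raises the bar each individual comparison must clear as the number of metrics and variations grows:

```python
def benjamini_hochberg(p_values: list[float]) -> list[float]:
    """Adjusts raw p-values for multiple comparisons using the
    Benjamini-Hochberg false discovery rate procedure."""
    m = len(p_values)
    # Rank p-values from smallest to largest, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, p_values[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

# Three metrics measured in one experiment: only results whose adjusted
# p-value stays under the significance threshold count as wins.
print(benjamini_hochberg([0.01, 0.04, 0.30]))  # [0.03, 0.06, 0.30]
```

With the correction applied, the second metric's raw p-value of 0.04 no longer clears a 0.05 threshold, which is exactly the kind of borderline "win" the correction is designed to catch.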
Get started with multiple comparison corrections in our guide.