What's new on LaunchDarkly

Release automation, feature flags, experimentation & analytics, and AI engineering—on a single platform

changelog
August 07, 2025

Changelog: Guardrail metrics for guarded rollouts, Experimentation enhancements, and more...

Guardrail metrics for guarded rollouts

You can now define guardrail metrics that will be added to any guarded rollout by default. This ensures each release starts with a consistent set of trusted metrics. Teams can still customize their configuration, but the default behavior encourages alignment across teams.


Learn more about guardrail metrics
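
Conceptually, a guarded rollout ramps a flag up in stages while watching its guardrail metrics, and rolls back automatically when one regresses. The Python sketch below is purely illustrative of that control loop; the metric names, thresholds, step schedule, and helper functions are hypothetical placeholders, not LaunchDarkly’s API.

```python
import time

# Hypothetical defaults: the guardrail metrics attached to every guarded
# rollout, each with a threshold that triggers an automatic rollback.
GUARDRAIL_METRICS = {
    "error-rate": 0.02,       # roll back if error rate exceeds 2%
    "p95-latency-ms": 800.0,  # roll back if p95 latency exceeds 800 ms
}

ROLLOUT_STEPS = [0.01, 0.05, 0.25, 0.50, 1.00]  # fraction of traffic served

def read_metric(name: str) -> float:
    """Placeholder: fetch the metric's current value from your telemetry."""
    raise NotImplementedError

def set_rollout(fraction: float) -> None:
    """Placeholder: point this fraction of traffic at the new variation."""
    raise NotImplementedError

def guarded_rollout(dwell_seconds: int = 600) -> bool:
    """Ramp up step by step; roll back and stop if any guardrail regresses."""
    for step in ROLLOUT_STEPS:
        set_rollout(step)
        time.sleep(dwell_seconds)  # let metrics accumulate at this step
        for metric, threshold in GUARDRAIL_METRICS.items():
            if read_metric(metric) > threshold:
                set_rollout(0.0)  # regression detected: automatic rollback
                print(f"Rolled back at {step:.0%}: {metric} breached {threshold}")
                return False
    return True  # all steps passed their guardrails
```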


Experimentation enhancements

Our Experimentation team has been hard at work on features that make it easier for experimenters to start, run, and monitor experiments, and we’ve just launched a few of our customers’ most frequently requested features!

A/A tests

LaunchDarkly now makes it simple to run A/A tests—an industry-standard practice for validating an experimentation platform and your own test setups—so you can have full confidence that any differences observed in your A/B/n tests are due to the changes being tested, not errors in the experiment setup.


Learn more about creating A/A tests
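
As a refresher on why this is a useful health check: when both variants serve an identical experience, a sound setup should report a statistically significant difference in only about alpha (e.g., 5%) of runs. Here is a small, self-contained Python simulation, independent of LaunchDarkly’s implementation, that illustrates the idea with a simple pooled two-proportion z-test:

```python
import random
from statistics import NormalDist

def two_proportion_p_value(c1: int, n1: int, c2: int, n2: int) -> float:
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_pool = (c1 + c2) / (n1 + n2)
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (c1 / n1 - c2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(42)
TRUE_RATE = 0.10   # both variants convert at the same rate: a true A/A test
N, ALPHA, RUNS = 5_000, 0.05, 1_000

false_positives = 0
for _ in range(RUNS):
    c1 = sum(random.random() < TRUE_RATE for _ in range(N))
    c2 = sum(random.random() < TRUE_RATE for _ in range(N))
    if two_proportion_p_value(c1, N, c2, N) < ALPHA:
        false_positives += 1

# A healthy setup "detects" a difference in roughly ALPHA of runs.
print(f"false-positive rate: {false_positives / RUNS:.3f}")
```

If the observed false-positive rate drifts far from the chosen alpha, that points to a problem in the experiment setup or analysis rather than in the feature under test.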


New experiment list dashboard

Teams with healthy experimentation programs often run many experiments across their features, products, and teams. The Experiments dashboard now surfaces more information about the status of experiments across your program, so you and your team can more easily track progress and outcomes as you coordinate shipping new value for your customers. Coming soon, we’ll add sorting and tagging functionality for greater control.



Experiment notifications

You can now receive notifications when experiments start or end. The launch or conclusion of an experiment your team is working on can be an exciting moment, and many contributors and stakeholders want to keep up with the experiments they care about. Now it’s easier than ever to stay on top of experiment statuses: simply “follow” an experiment to receive email or Slack notifications.



Other improvements

  • If you use a mobile or client-side SDK to evaluate a flag without the right SDK availability configured, we’ll notify you on the flag page and prompt you to fix it
  • Added a new KPI view type and the ability to plot metrics directly on the Trends chart in product analytics
  • Product analytics charts now auto-update when making configuration changes
  • The event and metrics selection menu in product analytics has been redesigned to make chart setup faster, easier, and more intuitive
  • Improved performance and buffering in session replay playback
  • Reduced initial bundle sizes to improve performance of the web app
  • Improved display and search of flag evaluations in session replay
  • More accurate p-value threshold in experimentation
  • Layout and copy improvements to the archive flag flow


Bug fixes

  • Fixed iteration analysis queries being incorrectly disabled for draft experiments
  • Fixed context deletion button issues
  • Fixed layout and UX issues when managing flag templates
  • Fixed rendering issues on retention chart
  • Fixed issue with incorrect SDK keys in code snippets
  • Fixed regression with text contrast on small buttons
  • Fixed display of archive experiment modal header for long experiment names
  • Fixed missing overflow menu icon on Metrics page