After you ship your feature, you’re probably wondering… Is it working? Are people using it? Rather than tracking down data in other tools that lack your feature flag’s context, you can now do it all in one place. Turn your feature on and see its impact in our new Feature Flag Monitoring view. To start, we show errors and web vitals (LCP, CLS, and INP) at a system-wide level, with flag change annotations overlaid so you can spot correlations. Coming very soon, we’ll level this up significantly by scoping the metric data to just the users evaluating your flag. We’ll also be adding session replays for your flag, user counts per variation, additional metrics, and much more!
To activate Feature Flag Monitoring, initialize our client-side observability plugin, as in the sketch below.
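As a rough sketch, assuming the JavaScript client-side SDK, an `@launchdarkly/observability` plugin package, and a `plugins` initialization option (check the observability plugin docs for the exact package name and options), setup might look like this:

```typescript
// Sketch only: the plugin package name, export, and `plugins` option are
// assumptions; consult the observability plugin documentation for specifics.
import { initialize } from 'launchdarkly-js-client-sdk';
import Observability from '@launchdarkly/observability'; // assumed package/export

const client = initialize(
  'your-client-side-id',                  // your environment's client-side ID
  { kind: 'user', key: 'example-user' },  // the evaluation context
  {
    // Registering the observability plugin turns on Feature Flag Monitoring
    // (errors and web vitals correlated with flag changes).
    plugins: [new Observability()],
  },
);

await client.waitForInitialization();
```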
We’ve launched a completely redesigned experience for designing an experiment, focused on helping teams move faster with greater clarity and confidence. The refreshed UX introduces a more intuitive layout, improved draft and preview states, streamlined flag and metric selection, and clearer guidance throughout setup—including enhanced validation, audience targeting, and metric configuration. This update lays a strong foundation for iteration and experimentation, giving customers a more predictable and powerful way to design, manage, and refine experiments from start to finish. For more, read the create experiment documentation.
We’re rolling out Data Saving Mode, a large foundational upgrade to our Flag Delivery Network that can reduce SDK data transfer by over 90% by sending only changes (deltas) instead of full payloads on reconnect. This unlocks significant cost savings for customers, faster performance, and support for large-scale use cases that may previously have been prohibitive. Early access is available now for customers using the latest versions of the Go and Node.js server-side SDKs, with more SDKs coming soon. If you’re interested, reach out to your LaunchDarkly account team to learn more and start saving.
We're updating the method for detecting regression in percentile metrics. Previously, regression for percentile metrics was determined by checking whether intervals overlapped. With this update, we're moving to a relative difference-based approach, which can detect regressions that the interval overlap method might miss. This also brings percentile metrics in line with how mean metrics are evaluated, providing a more consistent experience in the UI.
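As a purely illustrative sketch of the two detection styles (the helper names, the 5% threshold, and the “higher is worse” assumption are hypothetical, not our production implementation):

```typescript
// Illustrative only: not LaunchDarkly's actual regression-detection code.
interface Interval {
  lower: number;
  upper: number;
}

// Previous style: a regression is only flagged when the treatment interval
// sits entirely above the control interval, so it can miss cases where the
// intervals still overlap slightly.
function regressedByOverlap(control: Interval, treatment: Interval): boolean {
  return treatment.lower > control.upper;
}

// New style: compare the relative difference of the point estimates against a
// threshold, which can catch smaller regressions even when intervals overlap.
function regressedByRelativeDifference(
  controlP95: number,
  treatmentP95: number,
  threshold = 0.05, // hypothetical 5% threshold for illustration
): boolean {
  const relativeDiff = (treatmentP95 - controlP95) / controlP95;
  return relativeDiff > threshold;
}
```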
Released a new Ruby AI SDK for use with AI Configs
Added stage duration and rollout percentage details for guarded and progressive rollout approval requests
Improved metric regression display by sorting results chronologically
Fixed layout issues and made other UX improvements when managing flag templates
Fixed incorrect SDK keys in code snippets for Java and .NET (client) on the onboarding page
Fixed styling problems with disabled button states in observability components
Removed an unnecessary attempt to load /app as a script resource on some pages
Fixed an issue where the delete context button did not work in some instances
Fixed an issue when attempting to re-create a deleted flag with a team maintainer