Developers now have more flexibility in how they manage feature flags and AI Configs. By installing our new Model Context Protocol (MCP) server, you can empower AI agents like Cursor, Claude, and Windsurf to interact with your LaunchDarkly data. To get started, read our documentation or follow the tutorial.
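For example, an MCP-aware tool like Cursor can typically be pointed at the server with a small JSON configuration. The snippet below is a minimal sketch only; the npm package name (`@launchdarkly/mcp-server`), the environment variable name, and the placeholder token are assumptions, so confirm the exact values in the documentation.

```json
{
  "mcpServers": {
    "launchdarkly": {
      "command": "npx",
      "args": ["-y", "@launchdarkly/mcp-server"],
      "env": {
        "LAUNCHDARKLY_API_KEY": "<your LaunchDarkly API access token>"
      }
    }
  }
}
```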
Client-side observability is officially in open beta for all LaunchDarkly accounts: every customer now has access to session replay, errors, logs, traces, and alerts. Read the documentation to learn about the in-product observability features and how to set up your SDK.
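If you’re using the JavaScript client-side SDK in a web app, setup looks roughly like the sketch below. This is a minimal sketch under assumptions: the plugin-based configuration, the package names (`@launchdarkly/observability`, `@launchdarkly/session-replay`), and the placeholder client-side ID and context are not confirmed here, so check the documentation for the exact API.

```typescript
// Minimal sketch of enabling client-side observability (assumed API; confirm in the docs).
import { initialize } from 'launchdarkly-js-client-sdk';
import Observability from '@launchdarkly/observability';  // assumed package name
import SessionReplay from '@launchdarkly/session-replay'; // assumed package name

const client = initialize(
  'your-client-side-id',                     // hypothetical placeholder
  { kind: 'user', key: 'example-user-key' }, // evaluation context
  {
    plugins: [
      new Observability(),  // errors, logs, traces
      new SessionReplay(),  // session replay
    ],
  },
);

// Wait for the client to be ready before evaluating flags.
await client.waitForInitialization();
```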
We’ve added a new Tutorials section to our documentation site, where you can explore end-to-end examples of how to use LaunchDarkly platform features, courtesy of our Developer Experience team.
The LaunchDarkly CLI, ldcli, now includes a command for uploading your sourcemaps, which are then available as part of session replays.
We’ve made several improvements to our recent overhaul of individual context targeting for flags, based on customer feedback. These include displaying more rows by default, a fix for a bug with display names, and more. Thanks to everyone who has left feedback so far; we have additional improvements on the way!
Left-aligned icons in the targeting rule ellipsis menu to be consistent with other menus.
Fixed an XSS vulnerability in our login redirect.
Fixed a bug with invalid variation options being displayed for some flags when stopping monitoring on a guarded release.
Fixed a bug where autocomplete in our JSON targeting editor did not work properly in some cases.
When trying to delete an AI Configs model that doesn’t exist, we now properly return a 404 instead of a 500.
Fixed a bug where an incomplete list of available approval reviewers was displayed in some views.