Cursor has introduced Automations, a new system designed to automatically launch coding agents based on triggers like code changes, Slack messages, scheduled timers, or incidents from tools such as PagerDuty. Announced Thursday, the feature allows engineers to run always-on agents inside the Cursor environment that can review code, investigate issues, or manage development workflows without requiring a developer to manually prompt each task.
The system is meant to help teams manage the growing complexity created by agentic coding, where engineers increasingly rely on multiple AI agents to write and modify code. As these agents generate more output, the burden shifts to reviewing, monitoring, and maintaining that work. Cursor’s Automations framework attempts to address that imbalance by letting agents automatically start when specific events occur, with humans stepping in only when needed.
Instead of the typical workflow where developers manually prompt AI tools, Automations allows agents to run in response to defined triggers. According to Jonas Nelle, Cursor’s engineering chief for asynchronous agents, the goal isn’t to remove humans entirely from the process but to shift their role. Engineers remain involved, but the system calls them in at key points rather than requiring them to initiate every action.
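The trigger-driven flow described above can be sketched as a small event dispatcher: agents register against named triggers and launch automatically when a matching event arrives. This is a minimal illustration of the pattern, not Cursor's actual API; all names here (`Automation`, `AutomationRegistry`, `dispatch`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of trigger-based agent launching.
# None of these names reflect Cursor's real interfaces.

@dataclass
class Automation:
    trigger: str                   # e.g. "pull_request", "pagerduty_alert"
    launch: Callable[[dict], str]  # starts an agent, returns a run id

class AutomationRegistry:
    def __init__(self) -> None:
        self._automations: dict[str, list[Automation]] = {}

    def register(self, automation: Automation) -> None:
        self._automations.setdefault(automation.trigger, []).append(automation)

    def dispatch(self, trigger: str, event: dict) -> list[str]:
        # Launch every agent registered for this trigger; a human is
        # pulled in later only if an agent escalates.
        return [a.launch(event) for a in self._automations.get(trigger, [])]

registry = AutomationRegistry()
registry.register(Automation("pull_request", lambda e: f"review-agent:{e['pr']}"))
run_ids = registry.dispatch("pull_request", {"pr": 42})
```

The key design point is the inversion of control: instead of an engineer prompting an agent, the event source calls `dispatch`, and unmatched triggers simply launch nothing.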
Internally, Cursor has already applied the concept with Bugbot, a tool that automatically reviews code whenever a pull request is opened or updated. The new Automations framework extends that approach to broader workflows: agents can now conduct security reviews, perform deeper code audits, or analyze changes across the codebase without waiting for a human prompt.
Other automations extend beyond code review. Some agents monitor incidents, automatically launching investigations when a PagerDuty alert fires and querying logs through integrations like Datadog. Others handle routine coordination tasks, such as posting weekly summaries of code changes to a team’s Slack workspace.
The company says it already runs hundreds of automations per hour across its own codebase, handling tasks that previously required engineers to manually launch or supervise agents. By automating these workflows, Cursor argues that models can spend more time analyzing complex issues rather than operating only in short prompt-driven sessions.
The release arrives as competition in the agentic coding tools market continues to intensify. OpenAI and Anthropic have both rolled out updates to their coding-focused agents in recent weeks, pushing the broader ecosystem toward tools capable of handling larger parts of the software development process.
Despite that competition, Cursor's growth has remained strong. Data from Ramp indicates that about a quarter of the generative AI customers in Ramp's client base use Cursor. At the same time, the rapid adoption of agent-driven coding tools has fueled significant revenue gains: Bloomberg recently reported that Cursor's annualized revenue has surpassed $2 billion, doubling in just three months.
With Automations, Cursor is positioning its platform less as a simple coding assistant and more as a system that coordinates multiple agents working across a codebase. As AI-generated code becomes more common, tools that manage, review, and maintain that output may become just as important as the agents that produce it.
This analysis is based on reporting from Cursor.
This article was generated with AI assistance and reviewed for accuracy and quality.