Uber: Leading Engineering Through an Agentic Shift

The team detailed their internal platform, adoption metrics, technical architecture, and the cultural and organizational challenges of scaling agentic AI across Uber’s massive engineering organization.

This entry is part 4 of 4 in the series Transforming DevOps for the AI Era

In this practical, behind-the-scenes presentation, the Uber Dev Platform team shares how they are driving a company-wide transition from traditional AI-assisted coding (autocomplete/Copilot-style) to fully agentic workflows, in which engineers act more like tech leads: they set direction and review results while AI agents handle complex, multi-step tasks asynchronously.

As of early to mid-2026, adoption has reached impressive levels: 84 percent of Uber developers now qualify as "agentic coding users," meaning they actively use CLI-based agents or make multi-step, agentic requests rather than relying on simple tab completion. In IDE-based tools, 65 to 72 percent of code is AI-generated; for command-line agentic tools such as Claude Code, the figure reaches 100 percent. Claude Code usage nearly doubled in just three months, climbing from 32 percent to 63 percent.

Uber’s strategic goals center on reducing toil and freeing engineers for higher-value creative and architectural work. The company treats AI agents like junior team members that execute tasks in the background. Humans then review and direct the outcomes.

Technology Platform

On the technical side, Uber built a comprehensive agentic platform internally. It includes async multi-agent workflows, a “Minion” background agent platform, an MCP Gateway, Uber Agent Builder, and the AIFX CLI.

These tools enable toil automation and background task execution. Engineers define goals and specifications, while agents independently handle implementation, research, or repetitive work, returning results for human review.

The platform also emphasizes observability and debugging, providing detailed agent logs, quality grading, and monitoring. An AI-powered code review pipeline (uReview) and similar tools manage the surge in AI-generated changes. Automated testing supports large-scale migrations and refactoring, and cross-functional "tiger teams" accelerate the rollout across the organization.
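One way such a pipeline can absorb a surge of AI-generated changes is to gate them on automated checks before any human sees them. The sketch below is an illustrative triage policy, not uReview's actual logic; the function name, inputs, and thresholds are assumptions chosen to show the idea of routing cheap failures back to the agent and reserving human attention for changes that pass.

```python
def triage_change(diff_lines: int, tests_passed: bool, lint_clean: bool) -> str:
    """Route an AI-generated change to the cheapest adequate reviewer."""
    # Automated checks run first; failures go back to the agent to fix,
    # so reviewers never spend time on changes that don't even build.
    if not (tests_passed and lint_clean):
        return "bounce-to-agent"
    # Small, clean changes can be fast-tracked; large ones still get
    # a full human review, which helps contain review fatigue.
    return "fast-track" if diff_lines <= 50 else "human-review"
```

Under this policy, human review capacity is spent only on changes that have already cleared the automated guardrails.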

Conclusion: Challenges and the Roadmap Forward

Several challenges have emerged during the shift. Non-technical hurdles include cultural resistance and varying comfort levels with agents among engineers. Review fatigue has increased because of the higher volume of code changes. New skills are now essential in prompt engineering, agent orchestration, and verification.

Quality and trust remain critical concerns. Uber has focused on building guardrails, observability, and automated quality checks to prevent the accumulation of technical debt. Measurement efforts prioritize real productivity gains rather than simple adoption metrics.

Overall, Uber is aggressively investing in internal infrastructure to make agentic AI a core part of its engineering operating model. The company is moving beyond hype to production-scale usage. Success depends not just on tools but on thoughtful platform building, process changes, and leadership that emphasizes human-AI collaboration.
