The emerging skillset of wielding coding agents — Beyang Liu, Sourcegraph / Amp

Debates on Coding Agents and AI Utility 00:00

  • Discussion begins with differing opinions in the developer community about the effectiveness of AI coding agents.
  • Jonathan Blow and others express skepticism, suggesting current AI agents are mostly hype, while others argue they're genuinely productive, especially for average programmers.
  • It's acknowledged that perspectives often depend on skill level: top-1% coders are less likely to see a benefit than mainstream developers are.

Shifting Paradigms in AI Coding Tools 03:04

  • User behavior research at Canva found most people were misusing coding agents by applying outdated best practices from older AI tools.
  • Rapid advances in model capabilities—especially in the last six months—have rendered many prior practices obsolete.
  • There have been three “eras” of AI coding tools: autocomplete-style text completion, chat-based copilots, and now tool-using agentic models.
  • Each new model capability necessitates a different application architecture and user experience.

Design Principles in the Agent Era 06:31

  • Coding agents should make file edits autonomously instead of asking permission for each change, reducing user micromanagement.
  • The era of thick, feature-heavy clients (like custom VS Code forks for LLMs) is likely ending in favor of lighter, command-driven experiences.
  • Swapping the underlying LLM is now harder because of deep coupling: changing an agent's “brain” affects everything around it.
  • Token-intensive agents seem expensive compared to chatbots, but are justified by the human time saved.
  • Flexible pricing models and Unix-like composability are predicted to eclipse vertically integrated solutions (see the sketch after this list).
  • Moving past those chatbot-era constraints, Sourcegraph built Amp, a new coding agent application designed around tool-using LLMs.
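
A minimal sketch of what Unix-like composability could mean in practice, assuming a hypothetical scriptable `agent` CLI that reads a prompt on stdin and prints its answer to stdout (the real Amp CLI's flags and behavior may differ):

```ts
// Sketch only: `agent` is a hypothetical stand-in for any scriptable coding agent.
import { execFileSync } from "node:child_process";
import { readFileSync, writeFileSync } from "node:fs";

// Gather context the way any other command-line tool would.
const failingTest = readFileSync("test-output.log", "utf8");

// Treat the agent as one more filter in the pipeline: prompt in, text out.
const summary = execFileSync("agent", [], {
  input: `Summarize the root cause of this failing test:\n\n${failingTest}`,
  encoding: "utf8",
});

// Hand the result to the next step; here, just write it to a file.
writeFileSync("triage-notes.md", summary);
```

The point is that the agent behaves like any other filter: context in, text out, composable with whatever tooling already exists.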

Minimal UI and Product Demonstration 11:01

  • Amp is intentionally minimal: just a text box, available as a VS Code extension or a bare-bones CLI.
  • The VS Code extension leverages editor features like the diff view for quick code review.
  • A demo shows Amp autonomously handling a real change request (updating an external service's connector icon) with little user direction.

Interaction Patterns and Feedback Loops 13:48

  • Amp agents call tools (e.g., file editors, bash, external APIs) without explicit user instruction at each step, reducing the user's cognitive load (a rough sketch of this loop follows the list).
  • The system uses sub-agents for tasks like search, which can be explored but are hidden by default to avoid overwhelming users.
  • It’s common (and encouraged) for users to run multiple agent threads in parallel for multitasking and deeper code understanding.
  • The interface supports more than one agent thread per project, enabling habits like having one agent investigate the architecture while another modifies code.
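
A rough sketch of that tool-calling loop, with a caller-supplied `callModel` function standing in for any LLM API that supports tool use; the tool names and message format here are illustrative assumptions, not Amp's actual internals:

```ts
// Sketch only: callModel, the tool set, and the message format are assumptions.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

type ToolCall = { name: "read_file" | "bash"; arg: string };
type ModelTurn = { text: string; toolCalls: ToolCall[] };
type CallModel = (history: string[]) => Promise<ModelTurn>;

async function runAgent(task: string, callModel: CallModel): Promise<string> {
  const history = [task];
  for (let step = 0; step < 25; step++) {                // cap runaway loops
    const turn = await callModel(history);
    if (turn.toolCalls.length === 0) return turn.text;   // nothing requested: done
    for (const call of turn.toolCalls) {
      // Run each requested tool without asking the user, then feed the output
      // back so the model can decide its next step.
      const output =
        call.name === "read_file"
          ? readFileSync(call.arg, "utf8")
          : execSync(call.arg, { encoding: "utf8" });
      history.push(`${call.name} output:\n${output}`);
    }
  }
  return "Stopped: step limit reached.";
}
```

A sub-agent fits the same shape: a search tool can itself be another `runAgent` call with its own, smaller context, which is how large tasks stay compartmentalized.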

Power User Behaviors and Best Practices 21:13

  • An early release to a small, experimental user base revealed that top users' inference spend can reach thousands of dollars per month.
  • Power users regularly write long, detailed prompts rather than short, vague instructions; Amp's interface encourages this by making the Enter key insert a newline.
  • Directing agents to relevant project context and customizing feedback loops are vital to making progress in complex, out-of-distribution codebases.
  • Fast feedback loops are built by integrating tools like Storybook and Playwright to test UI-driven changes efficiently (see the Playwright sketch after this list).
  • Power users often spell out custom test/build procedures so the agent can close the loop and accurately validate its own changes.
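
As a concrete example of such a feedback loop, a small Playwright test like the one below gives the agent a pass/fail signal to run after every change; the URL and test id are placeholders for your own Storybook story or dev server:

```ts
// Sketch only: point the URL and test id at your own story or app.
import { test, expect } from "@playwright/test";

test("connector icon renders after the update", async ({ page }) => {
  await page.goto("http://localhost:6006/?path=/story/connectors--icon");
  await expect(page.getByTestId("connector-icon")).toBeVisible();
});
```

Telling the agent to run `npx playwright test` (or the project's equivalent) after each edit is what turns the change into a closed loop rather than a guess.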

Improving Code Understanding and Review 27:44

  • Contrary to fears that agents encourage sloppy coding, power users leverage them to quickly understand unfamiliar codebases and onboard new team members.
  • Agents are effective at summarizing large diffs and identifying key entry points for review, which enhances, rather than shortcuts, thorough review processes (see the sketch after this list).
  • Sub-agents help compartmentalize large tasks and preserve the main thread's context window, avoiding the quality degradation seen in long-running threads.
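
One way this can look in practice, again assuming a hypothetical scriptable `agent` CLI, is piping a large diff through the agent to get a review map before reading the code yourself:

```ts
// Sketch only: `agent` is a hypothetical stand-in for any scriptable coding agent.
import { execFileSync, execSync } from "node:child_process";

const diff = execSync("git diff main...HEAD", { encoding: "utf8" });

const reviewMap = execFileSync("agent", [], {
  input:
    "Summarize this diff, list the files a reviewer should read first, " +
    `and flag anything that changes public behavior:\n\n${diff}`,
  encoding: "utf8",
});

console.log(reviewMap);
```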

Anti-Patterns and Responsible Use 30:33

  • Common anti-patterns include over-micromanaging the agent (treating it like a chatbot) and giving under-detailed prompts.
  • LLMs need either sufficient context or detailed prompting, especially for nuanced, production-level code changes.
  • Coding agents should not be used to bypass code review entirely; humans remain responsible for shipped code.

Advanced Patterns: Agent Parallelism and Composability 31:54

  • Top users run multiple agents in parallel, dividing complex projects (like compiler work) among them, and let agents run unattended only when robust feedback loops are in place (see the sketch after this list).
  • The future likely lies in composable “building block” agents rather than massive, unified agent fleets, giving power users flexibility in constructing workflows.
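
A sketch of what unattended parallelism can look like, again with a hypothetical `agent` CLI that takes its task as an argument; the test command baked into each prompt is the feedback loop that makes leaving these runs unattended tolerable:

```ts
// Sketch only: the `agent` CLI and its argument handling are assumptions.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

const subtasks = [
  "Port the lexer to the new token type. `npm test` must pass before you stop.",
  "Update parser error messages to the new format. `npm test` must pass.",
  "Regenerate the AST fixtures and fix any drift. `npm test` must pass.",
];

async function main() {
  // One agent thread per subtask, all running unattended in parallel.
  const results = await Promise.all(
    subtasks.map((task) => run("agent", [task], { encoding: "utf8" }))
  );
  results.forEach((r, i) => console.log(`subtask ${i + 1}:\n${r.stdout}\n`));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```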

Key Takeaways and Closing Thoughts 33:16

  • Coding agents are real, high-ceiling tools—learning to wield them will become as critical as mastering editors or programming languages.
  • Effective use is learned through practice and community sharing; Amp encourages this via thread-sharing mechanisms and a published manual for new users.
  • The talk concludes with an invitation to try Amp and a live Q&A covering prompt habits and agents' robustness to typos.