OpenCode: Claude Code but Open Source, with Any Model, and frontier TUI - with Dax Reed (@thdxr)

Introduction & Dax Reed's Background 00:03

  • Dax Reed has focused on building open source tools for developers over the past five years, with an emphasis on enhancing developer workflows.
  • Reed is known for creative projects, including a terminal-based coffee company accessible via SSH, aimed at developers as customers.
  • His experience with terminal UX informed subsequent projects, pushing the boundaries of what can be achieved in terminal environments.

The Origins and Philosophy of OpenCode 02:00

  • OpenCode began as a response to the limited options available to Vim/Neovim users, inspired by the rise of tools like Cursor and Claude Code.
  • The project seeks to provide a terminal-based AI code assistant that's flexible and not tied to a single LLM provider.
  • The team prioritizes product experience and UX rather than squeezing incremental improvements from LLMs.
  • OpenCode is built for developers by developers, driving feature development based on immediate needs and community feedback.

OpenCode Product Demo & Key Features 05:16

  • OpenCode offers a full-screen terminal user interface (TUI), enabling more complex UI interactions than typical CLI tools.
  • Major differentiator: supports any AI model (defaulting to Sonnet 4 but adaptable to others as models improve).
  • Includes a "share" feature, enabling users to snapshot a session, receive a URL, and share conversation and diffs—useful for code review and collaboration.
  • Features a built-in file explorer, diff viewer, and responsive design, resembling the early stages of a code review tool.
  • Focus remains on code review and workflow enhancement rather than direct code editing inside the TUI, to avoid feature bloat and deep maintenance challenges.

Technical Architecture and Modes 10:05

  • OpenCode runs an agent loop similar to Claude Code's: system prompts and tool schemas are replicated for interoperability.
  • Implements the concept of "modes" (e.g., a plan mode, an implement mode, a Gemini mode), each a customizable combination of system prompt, model, and tool set.
  • The tool uses a client-server architecture: the server is written in TypeScript and runs on Bun (compiled to a native executable, so it runs anywhere without Node.js), while the TUI is optimized for performance.
  • Frontends in other technologies may emerge for desktop, web, and mobile clients; the core server is designed to be modular and extensible.
  • Adapters and tool integrations are downloaded dynamically as needed rather than bundled up front, keeping the core lean and flexible.
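The "modes" idea described above can be sketched in TypeScript. This is an illustrative model only, not OpenCode's actual API: all type and tool names here are assumptions chosen for the example. The point is that a mode is just a named bundle of system prompt, model identifier, and allowed tool set, so a plan mode can restrict the agent to read-only tools while an implement mode swaps in editing tools and a different prompt.

```typescript
// Hypothetical sketch of a "mode" — not OpenCode's real API.
type Tool = { name: string; description: string };

interface Mode {
  name: string;
  model: string;        // provider/model id, e.g. "anthropic/claude-sonnet-4"
  systemPrompt: string;
  tools: Tool[];
}

// Read-only tools for planning; editing tools are a superset.
const readOnlyTools: Tool[] = [
  { name: "read_file", description: "Read a file from disk" },
  { name: "grep", description: "Search file contents" },
];

const editTools: Tool[] = [
  ...readOnlyTools,
  { name: "edit_file", description: "Apply an edit to a file" },
  { name: "bash", description: "Run a shell command" },
];

const planMode: Mode = {
  name: "plan",
  model: "anthropic/claude-sonnet-4",
  systemPrompt: "Propose a step-by-step plan; do not modify files.",
  tools: readOnlyTools,
};

const implementMode: Mode = {
  name: "implement",
  model: "anthropic/claude-sonnet-4",
  systemPrompt: "Carry out the approved plan, editing files as needed.",
  tools: editTools,
};

function toolNames(mode: Mode): string[] {
  return mode.tools.map((t) => t.name);
}

console.log(toolNames(planMode));      // read-only tools only
console.log(toolNames(implementMode)); // adds edit_file and bash
```

Because a mode is plain data, swapping models or tool sets (e.g., pointing the same prompts at a Gemini model) is a configuration change rather than a code change, which is the flexibility the episode describes.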

Compaction, Context Management & Simplicity Philosophy 16:27

  • OpenCode handles context window limitations with a compaction mechanism similar to Claude Code's, favoring straightforward solutions over elaborate ones.
  • Most users seldom encounter compaction issues due to frequent session resets, though it's acknowledged as an occasional pain point.
  • Avoids investing heavily in fleeting model-specific optimizations, preferring durable improvements in product experience.
  • Emphasizes the importance of resisting complexity and unnecessary optimization as model capabilities and user needs evolve quickly.
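A minimal sketch of what compaction means in practice, under assumed names (this is not OpenCode's implementation): once the conversation history exceeds a token budget, older messages are collapsed into a single summary message while the most recent turns are kept verbatim. A real tool would have the model write the summary; here a placeholder digest stands in.

```typescript
// Hypothetical compaction sketch — names and thresholds are assumptions.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate: roughly 4 characters per token.
const estimateTokens = (msgs: Message[]): number =>
  Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / 4);

function compact(
  history: Message[],
  budget: number,
  keepRecent = 4,
): Message[] {
  // Under budget (or too short to compact): return history unchanged.
  if (estimateTokens(history) <= budget || history.length <= keepRecent) {
    return history;
  }
  const recent = history.slice(-keepRecent);
  const older = history.slice(0, -keepRecent);
  // Placeholder digest; in practice an LLM-generated summary.
  const summary: Message = {
    role: "system",
    content:
      `Summary of ${older.length} earlier messages: ` +
      older.map((m) => m.content.slice(0, 20)).join(" | "),
  };
  return [summary, ...recent];
}
```

The episode's point is that users who reset sessions often rarely hit the budget at all, so the simple path (no compaction) is the common case and the summarization path only runs occasionally.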

Evaluation, Benchmarks & Iterative Improvement 19:31

  • Current AI code tool benchmarks are disconnected from real-world developer tasks; OpenCode is working on its own benchmarks based on actual developer workflows.
  • Internal changes are validated against whether they improve the actual user experience, which often surfaces unexpected regressions, such as problematic fallback strategies borrowed from other projects.
  • The team encourages experimentation but values focused iterations based on community and internal developer feedback.

Plugins, Model Integrations & Usage Patterns 23:08

  • Plans are in place for a plugin system, so that community-driven integrations, especially experimental or niche ones, can be handled as optional add-ons.
  • Usage patterns vary; some users process hundreds of millions of tokens monthly, but most use far less, suggesting that token-based pricing could work better for many than flat-rate plans.
  • Telemetry is intentionally limited to facilitate enterprise adoption and ensure privacy.

Open Source Contributions, Community, and Philosophy 25:32

  • The team asks prospective contributors to discuss features before submitting pull requests, to prevent chaotic development and feature bloat.
  • Open source is leveraged mainly for community-driven integrations, long-tail LSP support, and compatibility across LLMs and languages.
  • Core product feature development remains tightly curated to maintain product coherence and quality.

Competitive Landscape & Vision 27:27

  • OpenCode is positioned as the leading open source alternative to Claude Code but does not expect to overtake it while Sonnet 4 remains dominant.
  • If a new open source or non-Anthropic model matches or surpasses Sonnet 4, OpenCode's model-agnostic design could make it the primary choice for many.
  • The product is ready to adapt to developments across the model ecosystem (OpenAI, Gemini, etc.) via its flexible plugin and integration system.

Future Directions, Monetization, and Closing Thoughts 34:02

  • Monetization will focus on enterprise needs—team management, authentication, usage metrics—rather than charging for core open source functionality, to keep the product accessible.
  • The founder sees a market shift where winning products will hinge on "boring," reliable UX enhancements rather than intricate model trickery.
  • Building basic, robust product features is seen as a major opportunity that others have overlooked amidst rapid AI advances.
  • The OpenCode team is committed to scaling through community, iterative improvement, and maintaining a strong on-the-ground focus on developer productivity.

Final Remarks & Call to Action 36:26

  • Users are encouraged to try OpenCode (opencode.ai) and to communicate with the team before contributing code.
  • The project’s success is driven by real developer needs, open collaboration, and a focus on practical, everyday use cases over theoretical benchmarks.