[Full Workshop] Vibe Coding at Scale: Customizing AI Assistants for Enterprise Environments

Introduction to Vibe Coding 00:24

  • Vibe coding emphasizes focusing on the output rather than the code itself, embracing exponential growth in AI-generated code.
  • Building trust and adding guardrails to AI is crucial as agents run longer and generate more code.
  • The journey of vibe coding progresses through stages: Yolo Vibes (creativity, speed, instant gratification), Structured Vibes (balance, sustainability, maintainability, quality control), and Spectrum Vibes (scale, reliability, velocity through emerging best practices).

Yolo Vibe Coding Explained 02:55

  • Yolo vibe coding is an "outcome first" approach, where users interact via natural language, often in a chat panel, and auto-accept changes.
  • It's highly effective for rapid prototyping, proofs of concept, and enabling non-technical individuals (e.g., UX designers) to communicate ideas visually.
  • It serves as a powerful learning tool, allowing users to quickly get code running and understand underlying technologies.
  • Personal projects, like a water tracking app or building something with kids, become feasible over a weekend.

Yolo Vibe Coding Demo & VS Code Setup 05:08

  • The demo begins in an empty VS Code, utilizing Copilot in agent mode.
  • A setting to "scaffold new workspace" in the tools picker can be disabled for simpler HTML projects, but is beneficial for more complex project setups.
  • It's recommended to use popular, consistent front-end stacks such as React with Vite and Material Design, since AI performs better with well-known frameworks.
  • The "auto approve" setting (preferably scoped to a trusted workspace rather than user settings) allows Copilot to run commands and apply changes without constant confirmation.
  • The AI successfully generated a Material Design water hydration tracker, demonstrating its ability to create visually appealing UIs based on high-level design principles (e.g., "Apple design principles").
  • A comparison with a Fluent Design version showed varying aesthetic outcomes, highlighting AI's design capabilities and limitations.
  • Visual editing allows users to select elements in a browser preview to attach their CSS and HTML context to the chat, enabling specific visual modifications (e.g., adding animated particles to a header).
  • The "undo" button provides checkpoints, allowing users to revert AI-generated changes step-by-step.
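
A workspace settings sketch (.vscode/settings.json) matching the flow above; exact setting names vary across VS Code versions and some are experimental, so treat these keys as assumptions to verify in the Settings UI:

```jsonc
{
  // Let the agent run tools and apply edits without per-step confirmation
  // (experimental; scope it to a trusted workspace, not user settings)
  "chat.tools.autoApprove": true,

  // Auto-save so the agent's edits hit disk (and dev servers reload) immediately
  "files.autoSave": "afterDelay"
}
```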

Yolo Vibe Coding Toolbox 26:02

  • The Copilot agent panel offers flexible layout options, including moving it to the editor, a drop-down menu, or a separate window, to optimize workspace.
  • The "new workspace flow" streamlines project creation by optimizing for common "make me an app" scenarios and helping with stack selection.
  • Voice dictation (Cmd+I) provides a fast, local, and private way to interact with Copilot, with accessibility features like text read-back.
  • Visual context attachment allows sending screenshots and element details (CSS/HTML) to the AI for targeted modifications.
  • Auto-accepting changes and auto-saving features contribute to a fluid, "vibe" workflow.

Structured Vibe Coding 31:59

  • Structured vibe coding is a middle stage that balances the speed of yolo with a more organized approach, ideal for enterprise use cases.
  • It involves providing a consistent tech stack, clear LLM instructions, guardrails, and custom tools with expert/internal domain knowledge.
  • This approach yields faster, more consistent results, helps non-technical people contribute, and enables rapid bootstrapping of greenfield projects with internal design systems.
  • It allows for customization to internal stacks, deployment infrastructure, and specific workloads.

Implementing Structured Vibe Coding (Instructions, Prompts, Modes) 39:18

  • .github/copilot-instructions.md files provide foundational grounding knowledge for Copilot, guiding its behavior across the codebase (e.g., specifying frameworks, versions, and preferred tools).
  • Scoped instructions (.github/instructions/NAME.instructions.md) use glob patterns to apply rules to specific file types, although they currently require the target file to be in context.
  • "Prompts" are reusable task files (NAME.prompt.md) — e.g., for writing tests or generating specs — that can be easily injected into the chat, standardizing AI interactions across a team.
  • "Custom Modes" (an insiders-only feature) allow users to define and enforce specific development techniques, such as Test-Driven Development (TDD).
  • A TDD mode example was demonstrated, which guided the AI to understand the problem, write failing tests first, seek user confirmation, and then implement the code while continually running tests.
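
A minimal sketch of a scoped instructions file, assuming the glob front-matter format described above; the file name, glob, and stack guidance are illustrative:

```markdown
---
applyTo: "**/*.tsx"
---

# React component guidelines

- Use function components with hooks; no class components.
- Style with Material Design components rather than ad-hoc CSS.
- Co-locate tests as ComponentName.test.tsx.
```

Repo-wide guidance that should always apply goes in .github/copilot-instructions.md instead, without the applyTo front matter.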

Model Context Protocol (MCP) Servers 57:29

  • MCP servers extend VS Code's capabilities by integrating custom tools and services.
  • They can be set up by editing a JSON file or using an "install server" protocol, adding configurations to user or workspace settings.
  • Examples include Playwright MCP (for browser testing, screenshots, accessibility audits) and GistPad MCP (using GitHub Gists for knowledge base and prompts).
  • MCP servers support secure input types for sensitive data like API tokens, which are encrypted at rest.
  • They can connect to custom HTTP or SSE (Server-Sent Events) servers (though SSE is noted as deprecated for hosting reasons).
  • Tool definitions from MCP servers are cached, so Copilot doesn't have to proactively start every server when it opens.
  • Agent mode is necessary for executing tools provided by MCP servers, as ask mode does not inherently support function calling.
  • "Tool sets" (insiders-only) allow grouping related tools (e.g., a "research tool" set containing Perplexity and fetch) for better organization and constraint.
  • Sampling is an MCP feature (insiders-only) that allows the server to leverage the client's LLM, primarily for summarizing content to reduce token usage.
  • Tool calling remains probabilistic; even when explicitly suggested, the AI might not always use a specific tool.
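
A sketch of an MCP configuration (.vscode/mcp.json) illustrating the points above — a server entry plus a secure input for an API token. The field names follow the VS Code MCP config format at the time of the workshop, and the Perplexity server package name is an assumption; only Playwright's @playwright/mcp is taken from the session:

```jsonc
{
  // Prompt once for the token and store it encrypted,
  // instead of hard-coding it in the config
  "inputs": [
    {
      "type": "promptString",
      "id": "perplexity-key",
      "description": "Perplexity API key",
      "password": true
    }
  ],
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "perplexity": {
      // hypothetical package name — substitute the real server you use
      "command": "npx",
      "args": ["server-perplexity-ask"],
      "env": { "PERPLEXITY_API_KEY": "${input:perplexity-key}" }
    }
  }
}
```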

Key Takeaways and Workflow Tips 75:25

  • Continuously refine AI instructions as mistakes occur.
  • Commit code often to create checkpoints, allowing the AI to be creative while maintaining a working state.
  • Use the "pause" button to review AI actions if it goes off track.
  • Well-structured, self-explaining codebases with updated instructions and examples optimize AI performance.
  • Experiment to find the right balance between broad tasks and detailed specifications for different scenarios.
  • Provide continuous feedback and iterate on AI outputs.
  • Use custom modes, prompts, and instructions to embed best practices within the team's workflow.
  • Leverage AI as a thought partner for critiquing ideas and specs (e.g., asking it to generate critical questions about a design).
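
The "commit often to create checkpoints" tip above can be sketched as a plain git loop; the repository contents and commit messages here are illustrative:

```shell
# Sketch: checkpoint commits around agent runs, in a throwaway repo
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"

echo '<h1>Hydration tracker</h1>' > index.html
git add -A && git commit -qm "checkpoint: working scaffold before agent run"

# ...let the agent iterate, then snapshot the next known-good state...
echo '<p>8 glasses</p>' >> index.html
git add -A && git commit -qm "checkpoint: tracker UI renders"

git rev-list --count HEAD   # each checkpoint is a safe point to return to
```

If the agent goes off the rails, git reset --hard against the last checkpoint restores the working state, which is what lets you stay generous with auto-accept.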