tldraw has incorporated multiple AI-powered features, using its canvas as a base for experimentation.
Make Real: Turning Wireframes into Real Apps 03:36
The "Make Real" feature uses models like GPT-4 with vision to transform user sketches and wireframes into actual prototypes and applications.
Users send canvas screenshots to a vision model along with instructions, and the model returns functional components (see the request sketch below).
Users can annotate defects directly on the canvas and prompt the model to fix bugs, making the iteration process easy and accessible to non-programmers.
Launched at the end of 2023, the feature empowered people without coding skills to create software.
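A minimal sketch of the kind of request involved, assuming the OpenAI Node SDK and a vision-capable model; the model name, prompt, and function are illustrative rather than tldraw's actual Make Real implementation:

```ts
// Illustrative sketch, not tldraw's Make Real code. Assumes the OpenAI Node
// SDK (v4+) and a vision-capable model; OPENAI_API_KEY is read from the env.
import OpenAI from "openai";

const openai = new OpenAI();

// Turn a wireframe screenshot (a PNG data URL exported from the canvas)
// into a single self-contained HTML prototype.
export async function makeRealSketch(screenshotDataUrl: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o", // any vision-capable model would work here
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              "You are an expert web developer. Turn this low-fidelity wireframe " +
              "into a working prototype. Reply with one self-contained HTML file.",
          },
          { type: "image_url", image_url: { url: screenshotDataUrl } },
        ],
      },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```

In the Make Real workflow, the returned HTML is rendered next to the original sketch, so users can annotate problems on the canvas and re-prompt the model to fix them.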
"Draw Fast" leverages latent consistency models for rapid image generation, updating images in near real time based on user drawings.
Allows users to manipulate and interact with AI-generated images directly within the canvas, though performance may vary.
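The near-real-time behavior boils down to a "latest frame wins" loop: keep at most one generation request in flight and always send the most recent drawing. A minimal sketch, where `generateImage` stands in for a hypothetical call to a latent consistency model endpoint (this is not tldraw's Draw Fast code):

```ts
// Illustrative "latest frame wins" throttle for near-real-time generation.
// `generateImage` is a hypothetical call to an image-generation endpoint.
type GenerateImage = (drawingPng: Blob) => Promise<Blob>;

export function createLivePainter(
  generateImage: GenerateImage,
  onImage: (img: Blob) => void
) {
  let pending: Blob | null = null; // most recent drawing waiting to be sent
  let inFlight = false;            // is a request currently running?

  async function pump(): Promise<void> {
    if (inFlight || !pending) return;
    const snapshot = pending;
    pending = null;
    inFlight = true;
    try {
      onImage(await generateImage(snapshot)); // place the result on the canvas
    } finally {
      inFlight = false;
      void pump(); // immediately process whatever arrived while we were busy
    }
  }

  // Call this on every draw event (e.g. from a canvas change listener).
  return (drawingPng: Blob) => {
    pending = drawingPng;
    void pump();
  };
}
```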
tldraw Computer: Multimodal Graph-Based AI Composition 08:07
Demonstrates "tldraw computer", where users build graphs of modular components that accept different inputs (text, drawings, instructions) and produce outputs (images, speech, text).
Components execute scripts, chain outputs between blocks, and generate multimedia content automatically (a graph-execution sketch follows these notes).
Collaboration with Google: Used Gemini 1.5 and Gemini Flash for multimodal, fast, and interactive demos.
Showcases AI composability—combining different blocks for tasks like creating commercials based on input sketches and text.
The system leverages large language models to process nonlinear and multimodal tasks, such as combining numerical and non-numerical inputs intelligently.
Models can make creative inferences (e.g., interpreting "octopus" as "eight" when summing numbers), showing reasoning capabilities beyond strict coding.
Demonstrates feedback loops and automatic, iterative task pipelines, reminiscent of game-like automation (e.g., playing Factorio with loops and cycles).
Users have built multi-stage prompting chains for tasks like sentiment analysis and asynchronous decision-making.
Conceptualizes tldraw computer as a way to design "computers" that align with intuitive, pre-technical expectations (giving instructions, connecting processes visually).
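A minimal sketch of the graph idea: components are nodes that transform inputs into outputs, and wires feed one node's output into the next. The executor below is illustrative, not tldraw computer's implementation, and it omits the cycle handling that feedback loops would need:

```ts
// Illustrative graph executor: each node's `run` could be a model call,
// a script, or a constant (text, an image description, an instruction).
type Component = (inputs: string[]) => Promise<string>;

interface GraphNode {
  id: string;
  inputs: string[]; // ids of upstream nodes whose outputs feed this one
  run: Component;
}

// Evaluate nodes in dependency order, chaining outputs along the wires.
export async function runGraph(nodes: GraphNode[]): Promise<Map<string, string>> {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const results = new Map<string, string>();

  async function evaluate(id: string): Promise<string> {
    if (results.has(id)) return results.get(id)!;
    const node = byId.get(id);
    if (!node) throw new Error(`unknown node: ${id}`);
    const upstream: string[] = [];
    for (const dep of node.inputs) upstream.push(await evaluate(dep));
    const output = await node.run(upstream);
    results.set(id, output);
    return output;
  }

  for (const node of nodes) await evaluate(node.id);
  return results;
}

// Example wiring: an instruction node feeding a (hypothetical) model call.
// await runGraph([
//   { id: "prompt", inputs: [], run: async () => "Write a jingle for an octopus café" },
//   { id: "llm", inputs: ["prompt"], run: async ([p]) => callModel(p) },
// ]);
```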
Integrating AI Agents and Creative Collaboration 15:46
"Teach" demonstrates AI as a virtual collaborator—users can instruct the model (e.g., "draw a cat") and the model returns structured text that maps to editable canvas shapes.
The system can respond to complex or imaginative prompts, integrating closely with the editable canvas for creative tasks.
The demos include fun and playful outputs, illustrating the balance between innovation and lighthearted experimentation.
The open SDK and hackable canvas encourage developers to create diverse and advanced applications, from simulations to collaborative tools.
Notable third-party uses include Grant Kot's liquid simulation and integrations by companies like Observable.
Steve encourages the audience to experiment and build innovative projects using tldraw's technology, affirming that the ecosystem is still in its early days with substantial potential ahead.
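To make the "Teach" idea concrete, here is a minimal sketch of mapping structured model output onto editable canvas shapes. It assumes the tldraw v2 SDK (`editor.createShapes`, geo shapes); the JSON schema and parsing are illustrative rather than tldraw's actual protocol:

```ts
// Illustrative mapping from a model's structured reply to canvas shapes.
// Assumes the tldraw v2 SDK; the ModelShape schema is hypothetical.
import { Editor, createShapeId } from "tldraw";

interface ModelShape {
  kind: "rectangle" | "ellipse";
  x: number;
  y: number;
  w: number;
  h: number;
}

export function applyModelShapes(editor: Editor, modelReply: string) {
  // e.g. the model's structured answer to "draw a cat"
  const shapes: ModelShape[] = JSON.parse(modelReply);
  editor.createShapes(
    shapes.map((s) => ({
      id: createShapeId(),
      type: "geo" as const,
      x: s.x,
      y: s.y,
      props: { geo: s.kind, w: s.w, h: s.h },
    }))
  );
}
```

Because the result is ordinary shapes rather than a flat image, users can keep editing what the model drew.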