Coding As Fast As You Think

How I Currently Use AI To Augment My Development Workflow

In today’s rapidly evolving development landscape, harnessing the power of AI can feel like a superpower—giving you instant feedback, brainstorming partners, and even automated testing. This guide is a deep dive into how I currently integrate AI into my workflow to write, test, and deploy code faster than ever before. If you’re an experienced developer who’s already comfortable slinging code but wants to take advantage of AI’s productivity boosts, you’re in the right place.

👨‍💻 Who This Guide Is For

This isn’t a beginner’s guide to programming. Rather, it’s for seasoned engineers looking to optimize their existing workflow. You’ll see how AI can act as a brainstorming buddy, a design consultant, and a coding partner all rolled into one. The key is figuring out how to best orchestrate these tools so they complement your developer intuition without taking it over.

🛠️ My Toolkit

I use three main models, each with a distinct role:

GPT-o1: My brainstorming collaborator, helping shape project requirements and architecture discussions.

Claude 3.5 Sonnet: My go-to implementation partner for actual coding tasks.

GPT-4o: Handling multimodality, such as converting my whiteboard sketches to Mermaid.js diagrams.

I access these models primarily through two tools:

1. macOS ChatGPT client: Great for voice-enabled Q&A and chatting with models.

2. Cursor: Perfect for feeding in specific instructions, generating code, and handling version control details like pull requests and commit messages.

Other tools I use:

  • Claude Chat: Access to Anthropic models via a chat client.

  • Tabby & Ghostty: Tabby is my daily-driver terminal; I’m slowly starting to use Ghostty.

  • Amazon Q (formerly Fig): Terminal autocomplete. If anyone knows of a more modern open-source alternative, please let me know!

  • DeployMate: Auto-generates my CI/CD pipelines.

I will cover my terminal and Cursor extension setup in another post if there’s interest; let me know!

📚️ What You’ll Learn

Over the course of this guide, I’ll walk you step by step through:

Brainstorming & Requirements Gathering: Forming a clear project scope with GPT-o1.

Design & Architecture: AI-augmented whiteboard sessions to design and refine architectures.

Instructions Documents: Creating detailed Markdown instruction files that help Cursor generate high-quality code.

Cursor Integration: How to give Cursor precise prompts and structure your debugging workflow.

Code Generation & Testing: Getting the most out of AI-augmented coding while maintaining full control and oversight.

By the end, you’ll see how all these pieces come together, creating a seamless pipeline that transforms your ideas into working code—almost as quickly as you can think it up.

Let’s jump right in and explore how AI can supercharge your development process, starting with how I translate raw ideas into actionable requirements.

🧠 Brainstorming & Requirements Gathering

When I’m starting a new project, my first step is brainstorming and gathering high-level requirements. Here’s where GPT-o1 shines as a thoughtful brainstorming buddy—someone (or, well, something) that can push me to ask the right questions about the problem I’m trying to solve and the people I’m solving it for.

🤔 Understanding the Problem & Audience

I often begin with very open-ended questions about the domain, target users, and the core issues they face. For instance:

  • “I want to build a product that helps construction workers create quotes faster. What problems do they typically face in that process?”

  • “Which types of construction workers struggle most with quote generation—large companies or smaller contractors?”

  • “How do these construction workers currently create quotes, and what are some pain points in that workflow?”

GPT-o1 will respond with ideas about inefficiencies, potential user personas, or overlooked challenges—often prompting me to think about features or angles I hadn’t considered.

🎯 Zeroing in on Features & Requirements

Once I have a clearer picture of the problem space, I use GPT-o1 to brainstorm features that could alleviate the identified pain points. This conversation might look like:

  • “Given these challenges, what key features would help streamline quote creation?”

  • “How might we integrate real-time price lookups or cost estimation tools?”

  • “What kind of reports, dashboards, or analytics might these users need?”

GPT-o1’s suggestions here help shape a high-level feature set. For the quote creation example, it might propose:

  1. Automated Cost Calculation based on materials, labor rates, and regional factors.

  2. Template Library for common project types (e.g., roofing, foundation work, renovations).

  3. Collaboration Tools to share or edit quotes with team members.

  4. Version History to track changes and maintain transparency with clients.

📝 From Brainstorm to High-Level Requirements

Once these potential features are on the table, I’ll ask GPT-o1 to consolidate them into a list of initial requirements. These requirements usually include:

  • Feature Name: A concise title (e.g., “Automated Quote Templates”).

  • Short Description: One or two lines about the user need it addresses.

  • Notes/Questions: Outstanding unknowns or assumptions (e.g., “Should we offer different templates by state or region?”).

This final list serves as my starting framework for the entire project. I save it as a reference in a simple text file or an early draft of my instructions.md—the same file I’ll later refine with more detailed specs.
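For example, a single entry in that early draft might look something like this (the feature name comes from the example above; the open questions are illustrative):

```markdown
## Initial Requirements

### Automated Quote Templates
- Short Description: Pre-built templates for common project types so users start from a
  baseline instead of a blank page.
- Notes/Questions: Should templates differ by state or region? Is per-company
  customization needed in v1?
```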

🏁 What’s Next?

With a high-level outline of who we’re building for and what problems we want to solve, I’m ready to start thinking about how to implement these features. That’s where the AI-augmented design process comes in. In the next step, I’ll grab a whiteboard and ChatGPT Voice Mode to begin turning these raw ideas into tangible architecture and workflow diagrams.

📌 Tip: Keep your brainstorming chats organized. They not only document your initial thoughts but can also serve as inspiration or reference later if you need to pivot or expand your feature set.

🎤 Designing with a Whiteboard & ChatGPT Voice Mode

Designing the system architecture is where those raw ideas and high-level requirements start to take shape. Here’s how I use a good old-fashioned whiteboard—plus ChatGPT Voice Mode—to refine and validate my design concepts in real time.

✍️ Whiteboard for Big-Picture Design

I like to begin by sketching out the overall flow of the system on a whiteboard. This is the moment to visualize:

  • Core Components (e.g., front end, back end, database).

  • User Flows (In this example: creating a quote, editing a quote, approving a quote).

  • Key Integrations (e.g., payment gateways, third-party APIs for cost lookups, etc.).

Why a Whiteboard?

  • Tactile Feedback: Drawing arrows, erasing things, and physically standing in front of the design helps me think through problems in a spatial way.

  • Iterative: Quick to adjust or pivot—just erase and redraw.

  • Shared View: When collaborating with others on-site, it’s easy for everyone to gather around and point at specific parts of the diagram.

🎤 Real-Time Conversations with ChatGPT Voice Mode

While I’m sketching, I’ll have ChatGPT Voice Mode running on my Mac. This lets me ask questions out loud as I go:

“Here’s all the data I need to store—user profiles, project details, quote histories. Would a relational schema or a document-based database (like Firebase or MongoDB) be more efficient for quick lookups and real-time edits?”

“If I choose a relational database, how should I structure my tables and relationships so that fetching data is intuitive and fast?”

“If multiple people are editing the same quote at the same time, how should we handle concurrency and version conflicts to avoid overwriting each other’s changes?”

By speaking these questions, I can get quick insights without breaking my flow to type them out. ChatGPT might point out performance considerations, remind me of best practices, or even suggest alternative approaches I hadn’t thought of.

💡 Why Voice Mode?

  1. Speed of Thought: It keeps the brainstorming momentum going.

  2. Less Context Switching: No need to stop drawing just to type; I can keep my hands on the markers.

  3. Immediate Feedback: Helpful for sanity checks like, “Is this layer too coupled?” or “Are there libraries that handle this pattern well?”

🏗️ Evolving the Architecture

As my sketch becomes more concrete, I’ll often revise it in real time:

  1. Identify Bottlenecks: ChatGPT might highlight areas that could become performance hotspots.

  2. Validate Dependencies: If the design includes external APIs, I’ll confirm with ChatGPT that they can handle the expected load or provide the necessary data.

  3. Tackle Unknowns: Questions like, “How should I layer multiple user roles within an organization?” can spawn deeper design discussions.

I’ll keep refining until the major building blocks feel logically sound.

📸 Converting Whiteboard Sketches to Mermaid.js

Once the main design is settled, I snap a photo of the whiteboard and feed it to GPT-4o through ChatGPT to convert the rough diagram into Mermaid.js code. That diagram will later go into an instructions.md file (more on that below), ensuring the design is both visually clear and AI-friendly for subsequent steps.

📌 Tip: Label each component on the whiteboard (e.g., “Database,” “Auth Service,” “Quote Service”) so GPT-4o can accurately translate them into Mermaid.js.
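For a sense of what GPT-4o hands back, the Mermaid.js for a simple quoting flow might look roughly like this (the components mirror the whiteboard labels above and are not a prescribed architecture):

```mermaid
flowchart LR
    User["Contractor"] --> FE["Frontend: Quote Builder"]
    FE --> Auth["Auth Service"]
    FE --> QS["Quote Service"]
    QS --> DB[("Database: users, quotes, projects")]
    QS --> Cost["Third-party Cost Lookup API"]
```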

📝 Creating the Instruction Documents

After sketching out your big ideas on the whiteboard, you’re ready to officially document your project’s purpose, scope, and implementation details in a way that AI tools like Cursor can parse and understand what to build. We do this through multiple instruction documents, each focusing on a different aspect of the project.

🗂 Why Multiple Instruction Documents?

It might be tempting to dump everything into one massive file. But splitting your instructions into separate, focused documents makes it easier for Cursor to pinpoint relevant information based on the current coding task.

Key Benefits

  • Modular Clarity: Each file covers a specific domain (database, backend, frontend, testing), so Cursor isn’t confused by details irrelevant to the task at hand.

  • Focused AI Prompts: Cursor can parse smaller, topic-specific files more thoroughly.

  • Easier Maintenance: Updating one dedicated file is simpler than scanning a giant, all-in-one doc.

📂 The instructions/ Folder

I like to create a folder called instructions/ at the root of my project. Inside, I’ll include multiple Markdown files, each tailored to a specific part of the system:

  1. instructions.md – Grand overview of the entire project.

  2. database_instructions.md – Schema design and database references.

  3. backend_instructions.md – Endpoints, authentication, server architecture.

  4. frontend_instructions.md – UI mock-ups, tech stack decisions, and workflows.

  5. testing_instructions.md – Guidelines for unit, integration, and E2E tests.

Depending on your project, you might add or rename these to fit your needs.

📌 Note: Most AI models may struggle with very new or niche libraries and frameworks unless given ample context. When using advanced or less common technologies, include detailed directions and documentation in your instructions directory. Conversely, if you’re building a web app with well-established technologies (such as React and FastAPI), the AI can usually deliver high-quality output with minimal supplemental information.

✨ instructions.md: The Grand Overview

This file serves as your central hub, laying out the crucial context that Cursor and other readers will need.

For my overview I like using Simon Sinek’s “Start With Why” framework:

  • Why: The core motivation or user problem behind your project. Providing this context helps Cursor make better implementation decisions when details aren’t explicitly spelled out. For instance, if the why is to boost performance for users on slow networks, Cursor might automatically recommend more efficient data-fetching patterns.

  • What: The features and scope you plan to address. This is the roadmap of your project, giving the AI a clear idea of exactly what needs to be built.

  • How: The chosen technologies, coding style, or architectural patterns you want to follow. Specifying your coding philosophies (e.g., modular microservices vs. monolith, React vs. Vue) tells Cursor how to approach each feature so it remains consistent with your standards. This is where you incorporate your Mermaid.js diagrams as well as a draft of your desired file structure for the project.

You might also list major milestones or release phases in the main instructions.md file.
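As a point of reference, a pared-down instructions.md for the quoting example might open something like this (the specific technology choices and milestones below are illustrative, not a recommendation):

```markdown
# Project Overview

## Why
Small contractors lose hours building quotes by hand; this tool aims to cut quote
creation down to minutes.

## What
- Automated cost calculation (materials, labor rates, regional factors)
- Template library for common project types
- Collaboration tools and version history on quotes

## How
- Frontend: React with Tailwind CSS
- Backend: FastAPI with JWT-based auth
- Database: Supabase
- Architecture: Mermaid.js diagram from the whiteboard session goes here
- File structure: planned directory layout for the repo

## Milestones
1. MVP: quote creation and templates
2. Collaboration and version history
```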

🗃️ database_instructions.md

Here’s where you specify:

  • Chosen Database: Whether you’re using Supabase, Firebase, MongoDB, etc.

  • Schema Definitions: Tables/collections, relationships, indexes.

  • Performance Considerations: Any caching strategies or indexing best practices to keep queries fast.

  • Relevant Documentation: References to any SDKs or libraries you want to use for your database interactions.

By detailing this, you give Cursor enough background to generate coherent model definitions, data-access layers, or migration scripts.
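To illustrate, a trimmed-down database_instructions.md for the quoting example might look something like this (the table names and columns are made up for the sketch):

```markdown
# Database Instructions

- Database: Supabase (Postgres)
- Tables:
  - `users`: id, name, email, role
  - `quotes`: id, user_id (FK to users), project_details (jsonb), status, created_at
- Indexes: index `quotes.user_id` to keep per-user lookups fast
- Client library: use the official Supabase Python client for all database access
```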

⚙️ backend_instructions.md

Cover all things server-side:

  • Architecture: Monolith vs. microservices, or perhaps a serverless approach.

  • Authentication: How you plan to handle user logins and permissions (JWT, OAuth, sessions).

  • Endpoints: REST/GraphQL endpoints, including expected inputs/outputs.

  • Security: Guidance on handling secrets, API keys, and data sanitization.

Cursor uses this to scaffold your backend files and ensure each endpoint aligns with your security and performance needs.
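A slimmed-down backend_instructions.md for the same example might read roughly like this (the endpoints listed are illustrative):

```markdown
# Backend Instructions

- Architecture: single FastAPI service (monolith for v1)
- Auth: JWT bearer tokens; every quote endpoint requires a valid token
- Endpoints:
  - `POST /create-quote`: accepts project details and user info, returns the stored quote with a 201
  - `GET /quotes/{quote_id}`: returns a single quote for the authenticated user
- Security: load secrets from environment variables; sanitize all user-supplied fields
```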

🖥️ frontend_instructions.md

Focus on the user interface and client-side:

  • Frameworks: Next.js, React, Vue, etc., plus your CSS framework of choice (e.g., Tailwind CSS).

  • Component Libraries: 21st.dev, Shadcn, DaisyUI, etc.

  • Data Flow: How state is managed and how data is fetched from the backend.

  • Styling & Theming: Brand guidelines, color schemes, typography rules.

Giving Cursor these design details helps it propose front-end structures and even generate boilerplate code consistent with your layout.

🧪 testing_instructions.md

Testing is essential to reliability and maintainability:

  • Test Types: Unit, integration, end-to-end (E2E).

  • Frameworks: Jest, Pytest, Cypress, etc.

  • CI/CD Integration: How tests run in your pipeline (GitHub Actions, GitLab CI).

  • Coverage Goals: How much of your codebase should be tested.

With these guidelines, Cursor can generate test scaffolding that aligns with your chosen tools and coverage requirements.

🖼 UI Mockups

If you’re exporting wireframes or high-fidelity screens from Figma (or any other design tool), you can attach them directly to your prompts in Cursor Compose. This way, the AI has a visual reference for what the UI should look like and can tailor its code suggestions accordingly.

  • File Placement: Store your PNG exports (e.g., login_screen.png, dashboard_mockup.png) in the instructions/ folder.

  • Attaching in Cursor Compose: When writing your prompt, reference your mockup file explicitly so Cursor can factor it in. For example:

“Start building feature 1 carousel on @front_end_instructions.md, use the className="basis-1/3" for each carousel item. Reference UI mock up @dashboard_mockup.png for the placement in the dashboard. Think Step by Step”

  • Keep Images Updated: If you tweak your design in Figma, remember to re-export the PNG and re-attach it. This ensures Cursor sees the most accurate, up-to-date reference for your UI.

By including these PNGs in your Cursor Compose prompts, you help the AI generate code that aligns with your exact visual layout and user experience goals.

Armed with these instruction documents, you’ve given both yourself and Cursor a clear framework to follow. Next up, you’ll see how to feed this information into your coding environment so you can start building your project with AI guidance at every step.

📌 Tip: Whenever you update one file—say, you rename a database field—it may need to be reflected elsewhere (like in your backend or front-end instructions). By keeping your instructions consistent across these files, you’ll make sure Cursor and any other contributors always have the most accurate, up-to-date roadmap.

🤖 Interacting with Cursor

Now that you’ve created your instruction documents, it’s time to put them to work in Cursor. This section covers how to structure prompts and optimize your workflow for both straightforward code generation and deeper debugging sessions.

🧐 What Model Do I Use?

Cursor supports a large variety of models and even allows you to run your own model for some features.

As of January 2025, I prefer Claude 3.5 Sonnet and have found it provides the highest-quality code generation for my use cases.

🏷 Bringing Instructions into Cursor

Cursor offers two main interfaces for AI interaction: Composer and Chat. Each one can consume your prompts differently:

  1. Cursor Compose

    • Ideal for generating or refactoring code.

    • You can attach specific Markdown files (like instructions.md or database_instructions.md) and reference them in your prompt.

    • Perfect for step-by-step feature implementation or larger codebase changes.

  2. Cursor Chat

    • Functions like a conversation: helpful for debugging, clarifying questions, or exploring multiple approaches interactively.

    • You can paste snippets of your instructions here or reference them as needed to give the AI context.

💡 Prompt Strategies for Cursor

When you prompt Cursor, clarity and context are key. Here are some best practices:

🔖 Reference Relevant Instruction Docs

  • Attach specific sections from your instructions documents.

  • Identify the specific part of your instructions to build.

  • Ask for step-by-step thinking through the implementation (adding this to the prompt helps improve the output in the chat client).

  • For example:

“Start building feature 2 on the @backend_instructions.md, where the scraped data is processed and stored into the Supabase database. Refer to @database_instructions.md for the schema we need to process the data into and store it. Think step by step through implementation”

This ensures Cursor understands the exact requirements and how you want those requirements built out.

📃 Summaries and Checkpoints

  • After Cursor suggests code, review and approve each suggestion one by one, commenting on any change that needs improvement rather than blindly accepting everything it proposes.

If Cursor has overlooked important constraints or design choices you outlined, follow up with a prompt that calls out the specific missed requirements.

🔍 When to Use Composer vs. Chat

Cursor Compose is best for implementation or large-scale refactors. You provide:

  • A code file or multiple files.

  • Relevant instruction docs.

  • A structured prompt outlining what you want changed or added.

Cursor Chat works better as your live debugging buddy. If something isn’t compiling or a certain function is misbehaving:

“I keep getting this error in the terminal when running the python script: @terminal output 10-50, help me debug and fix this issue step by step.”

Chat will propose fixes or alternative approaches, and you can keep iterating until it’s resolved.

🔧 Example Flow

  1. Planning a Feature: Attach your instructions.md or relevant sub-file to Cursor Compose, specifying which feature you want to build first.

  2. Generating Code: Let Cursor generate the logic.

  3. Debugging: Switch to Cursor Chat if you hit a roadblock or want interactive Q&A.

  4. Refinement: Return to Compose with updated instructions or code files to refine or extend the solution.

📌 Tip: Maintain Context

Cursor’s output depends heavily on the context it’s given. Make sure to:

  • Attach or paste only the relevant instruction docs or code sections.

  • Update your prompts with the latest insights or changes.

  • Re-run the prompt if you add new info or instructions partway through.

By keeping Cursor “in the loop,” you’ll get outputs that consistently align with your project’s requirements.

With this interaction workflow in place, you can start building out your project feature by feature, using Cursor’s AI to move fast while maintaining quality. In the next section, we’ll dive into practical code and prompt examples, showcasing real-world snippets of how this process looks in action.

🧩 Code & Prompt Examples

Now that you know how to interact with Cursor and leverage your instruction documents, it’s time to see these concepts in action. In this section, we’ll walk through some practical scenarios—from generating new features to troubleshooting bugs—to illustrate how you can seamlessly integrate AI into your coding workflow.

⚙️ Generating a New Feature From Instructions

Let’s say you’re ready to implement a “Create Quote” feature in your backend. You have a solid outline in backend_instructions.md describing the endpoint behavior and authentication flow, plus a schema in database_instructions.md detailing how quotes should be stored.

Example Prompt in Cursor Compose

Let's complete feature 2 in @backend_instructions.md -- Please generate a new Fast API endpoint called /create-quote that:
1. Accepts JSON containing project details and user info.
2. Validates the data against our "quotes" schema in database_instructions.md.
3. Inserts the new quote into the "quotes" table/collection. Reference @database_instructions.md for database configuration.
4. Returns a 201 status with the newly created quote data.

Remember, follow the authentication rules in @backend_instructions.md (use our JWT-based auth).
Think step by step through the implementation

What Happens Next?

1. Context Check: Cursor reads both the backend and database instruction files.

2. Generation: It outputs a FastAPI endpoint based on the requirements (a rough sketch of what that might look like appears below).

3. Review & Refine: If you notice missing requirements, such as forgotten error handling, you can ask Cursor Compose to add them.

4. Testing: Once generated, if something doesn’t work as expected, you should move to Chat for interactive debugging.
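For reference, here’s a rough sketch of the kind of endpoint that comes out of a prompt like the one above, assuming a FastAPI app with a Supabase client (the module paths, auth dependency, and table layout are illustrative assumptions, not actual Cursor output):

```python
# Hypothetical app/routers/quotes.py -- a sketch, not verbatim Cursor output.
from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel

from app.auth import get_current_user  # assumed JWT auth dependency
from app.db import supabase            # assumed Supabase client instance

router = APIRouter()


class QuoteCreate(BaseModel):
    """Payload mirroring the "quotes" schema described in database_instructions.md."""
    project_details: dict
    user_info: dict


@router.post("/create-quote", status_code=status.HTTP_201_CREATED)
def create_quote(payload: QuoteCreate, user: dict = Depends(get_current_user)):
    # Insert the validated quote into the "quotes" table and return the stored row.
    result = (
        supabase.table("quotes")
        .insert(
            {
                "project_details": payload.project_details,
                "user_info": payload.user_info,
                "created_by": user["id"],
            }
        )
        .execute()
    )

    if not result.data:
        raise HTTPException(status_code=500, detail="Failed to create quote")
    return result.data[0]
```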

🐛 Debugging in Real Time

Let’s say the /create-quote endpoint runs into a runtime error. You jump into Cursor Chat:

When sending a POST request to the /create-quote endpoint I keep getting a 500 error, here's the logs @terminal line 20-55

Think step by step through the resolution of this bug, remember it is supposed to respond with the API message outlined in feature 2 of @backend_instructions.md

Make sure to use Command+Enter when running the prompt in Cursor Chat; this will contextualize the question around your whole codebase.

Keep working through the bug with Cursor Chat until you are satisfied it’s resolved and then move back to Composer to implement tests before building the next feature.

🧪 Test Generation

Now that the /create-quote endpoint is running well, you’ll want to create unit tests for it. Let’s go to Cursor Composer and enter:

Let's now test the /create-quote endpoint using a mock as described in @testing_instructions.md -- make sure this test has verbose logging for debugging purposes
Think step by step

In your testing instructions, specify that Cursor should provide a test suite covering both success and failure scenarios—using mocks for any external dependencies. This ensures your logic is thoroughly vetted and also makes debugging easier.
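To make this concrete, here’s a minimal sketch of what such a test might look like, assuming pytest with FastAPI’s TestClient and the Supabase client patched out (app.main, app.routers.quotes, and the auth dependency reuse the illustrative names from the endpoint sketch above):

```python
# Hypothetical tests/test_create_quote.py -- a sketch under the same assumed module layout.
import logging
from unittest.mock import patch

from fastapi.testclient import TestClient

from app.main import app
from app.auth import get_current_user

logger = logging.getLogger(__name__)

# Bypass JWT auth in tests by overriding the dependency with a fake user.
app.dependency_overrides[get_current_user] = lambda: {"id": 1}
client = TestClient(app)


@patch("app.routers.quotes.supabase")  # patch the client where the endpoint imports it
def test_create_quote_success(mock_supabase):
    mock_supabase.table.return_value.insert.return_value.execute.return_value.data = [
        {"id": 1, "project_details": {"type": "roofing"}, "status": "draft"}
    ]
    response = client.post(
        "/create-quote",
        json={"project_details": {"type": "roofing"}, "user_info": {"name": "Test Contractor"}},
    )
    logger.debug("create-quote response: %s", response.json())
    assert response.status_code == 201
    assert response.json()["id"] == 1


@patch("app.routers.quotes.supabase")
def test_create_quote_failure(mock_supabase):
    # Simulate the insert returning no rows so the endpoint responds with a 500.
    mock_supabase.table.return_value.insert.return_value.execute.return_value.data = []
    response = client.post("/create-quote", json={"project_details": {}, "user_info": {}})
    logger.debug("create-quote error response: %s", response.text)
    assert response.status_code == 500
```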

With the tests in place, you can quickly validate each new feature before moving on to the next item on your roadmap.

🚀 Conclusion & Future Outlook

As you’ve seen, integrating AI into your development workflow—from brainstorming with GPT-o1, sketching architecture with ChatGPT Voice Mode, organizing detailed instruction documents, to interacting with Cursor—creates a streamlined, context-aware process that accelerates both productivity and code quality.

Key Takeaways

  • Holistic Context: By documenting the details of your project, you ensure that AI tools like Cursor make informed, context-rich decisions.

  • Iterative Workflow: Whether it’s generating new features, refining existing code, or debugging on the fly, the combination of Compose and Chat keeps your development agile.

  • Testing: Integrating automated tests ensures that your endpoints and components not only work but are also easier to troubleshoot.

  • Visual Guidance: UI mockups attached directly in your Cursor Compose prompts help align generated code with your intended user interface.

🔭 Looking Ahead

AI technology is evolving at breakneck speed—and so is its role in software development. As AI continues to reshape our workflows, I’ll keep you posted on the latest strategies I use to maximize developer productivity.

If there’s interest, I’ll also dive into my project, DeployMate, which auto-generates CI/CD pipelines for me to streamline builds and deployments.

Happy coding, and see you in the next update!

-Justin

Cofounder & Head of Technology @ BetterFutureLabs
