BEST PRACTICES: Key Understandings for AI-Assisted Coding

This article is part two in my AI coding series ahead of this week's anticipated pre-o4 model announcement. In the first article I looked at the overall AI coding landscape and why it's exciting. I also have specific app reviews for Cursor and Repo Prompt with more on the way!

As with any tool, the key is knowing when to use it. Each app in this space excels in specific contexts. I'll touch on those and encourage readers to test them out for themselves. Investing a few hours to experiment with various prompts is well worth it.

This applies not just to the apps, but to the models themselves. Each has strengths, weaknesses, quirks, and places they excel. While the SOTA is improving rapidly and I see a lot of convergence in model capability generally, the more I use the individual models the more I can recognize when a task is well-suited for a specific model. Don't sleep on how crucial it is to recognize when to enable model features like reasoning, web search, and function calling.

For the best results and cost efficiency, use the settings that allow you to plug in your own API keys. If you haven't done this at all, I highly recommend OpenRouter, which gives you access to essentially all public models via a single API key. I have individual keys set up with OpenAI, Anthropic, Gemini, Mistral, and OpenRouter.
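To make this concrete: OpenRouter exposes an OpenAI-compatible HTTP endpoint, so one key really does cover all models. Here's a minimal, stdlib-only sketch of building such a request; the model slug and prompt are illustrative, and you'd supply your own `OPENROUTER_API_KEY`.

```python
import json
import os
import urllib.request


def openrouter_request(prompt: str, model: str) -> urllib.request.Request:
    """Build a chat-completions request for OpenRouter's OpenAI-compatible API."""
    payload = {
        "model": model,  # any OpenRouter model slug, e.g. "openai/gpt-4o"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )


# To actually send it (requires a real key):
# with urllib.request.urlopen(openrouter_request("Hello", "openai/gpt-4o")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Swapping models is then just a matter of changing the slug, which makes it painless to test which model suits a given task.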

Version control is the final key element. I don't know git, so I use GitHub Desktop. It might feel tedious, but it's essential to configure this immediately after creating the project folder. I keep the app open and make commits regularly. Do it more often than you think you should, and write helpful commit messages; this helps both you and future AI. Don't skip this step. You will eventually need to roll back, cherry-pick, or retrace the steps you took while debugging, and having the history in small chunks will save you.
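If you ever want to script that frequent-commit habit (or have the AI do it for you), the loop GitHub Desktop performs boils down to two git calls. A minimal sketch, assuming `git` is on your PATH; the function name is my own:

```python
import subprocess


def commit_checkpoint(message: str) -> None:
    """Stage all changes and commit them with a descriptive message.

    Small, frequent commits are what make roll-backs and
    cherry-picks painless later.
    """
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```

Something like `commit_checkpoint("fix: handle empty API response in parser")` is the kind of message a future you, or a future AI reading the log, can actually act on.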

Document. Document. Document. I rely heavily on Cursor rules, style guides, CLAUDE.md, and README.md. I also have the AI provide inline documentation and logging so that a future human or AI can revisit specific sections as the codebase grows. After a feature is implemented, I always update the key files like CLAUDE.md and README.md to reflect the changes, and I rework the TODOs to give myself a leg up for the next coding session.
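As an illustration, here's the shape such a CLAUDE.md might take; the section names and contents are hypothetical, not a prescribed format:

```markdown
# CLAUDE.md

## Project overview
Web app for tracking reading lists; see README.md for setup.

## Conventions
- Follow the style guide in the repo's style doc
- Every new function gets inline comments and logging

## Current TODOs
- [ ] Reworked after each feature lands, so the next session starts warm
```

The point isn't the exact headings; it's that the file is short, current, and rewritten after every feature so the next session (human or AI) starts with accurate context.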

Now onto the apps. Check out my reviews of Cursor & Windsurf or Repo Prompt. Claude Code & more following shortly!
