Using Cursor as a non-developer
With the growing prevalence of AI-powered productivity tools, it can be difficult to figure out which ones are actually “worth it” and which are just tacking on AI as an extra feature without any real thought or benefit.
But if there is one area where AI can genuinely level up productivity, it’s code-assisted development.
Now, I’d argue that I’m fairly tech-savvy, but I’ve never put any major effort into learning software development beyond some attempts at learning Python and SQL at various points in my career. Yet I have often found myself wanting to “fix” or improve things in the tools and programs I use, from work-related tools to social web pages and apps.
I’ll find an issue with something and get frustrated that it’s not working the way I want it to. Sometimes I drop it, or provide feedback to the developer(s) where possible, but that still doesn’t scratch the itch to try and get it fixed myself.
Until recently I hadn’t been able to do much about it, but the rise of ChatGPT and other AI tools has made it easier than ever to figure things out and, in many cases, produce the fixes or improvements myself.
Learning lessons
A recent project I started involved using Claude’s AI chat to create a script that integrates Zendesk’s API with our work tools to identify and list tickets related to various bits of data. It began as a chat with Claude about how I could show tickets related to specific Zendesk users, and just grew from there. It was definitely not an easy approach initially, and because I’m not well versed in how this kind of script could work, how to work with the existing data, the frontend and so on, it took me a long time to “figure it out”.
Because Claude also has a conversation history limit, and because the AI re-reads the whole conversation at times to understand context, it took me multiple conversations over many days to get it working, but we got there in the end!
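To give a flavour of the kind of starting point this was, here is a hedged sketch of querying Zendesk’s Search API for a specific user’s tickets from the command line. The subdomain, email and token are placeholders (my actual script and its endpoints differed), and the real request is left commented out so nothing is sent without credentials:

```shell
# Hedged sketch: look up tickets for one requester via Zendesk's Search API.
# SUBDOMAIN and REQUESTER below are placeholder values, not real accounts.
SUBDOMAIN="yourcompany"
REQUESTER="user@example.com"

# The Search API takes a free-form query; "type:ticket requester:<email>"
# restricts results to tickets opened by that user.
QUERY="type:ticket requester:${REQUESTER}"
URL="https://${SUBDOMAIN}.zendesk.com/api/v2/search.json"

echo "Would request: ${URL}?query=${QUERY}"

# With real credentials, the call would look like this (API token auth
# appends /token to your login email):
# curl -s -u "you@example.com/token:${API_TOKEN}" -G "$URL" \
#      --data-urlencode "query=${QUERY}"
```

From a response like that, the script could then pull out ticket IDs, subjects and statuses to display alongside our internal data.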


But this particular approach wasn’t ideal for me, as I kept having to re-provide my files and any additional HTML I wanted, which would eat up the conversation history and hit my limits much more quickly.
I came across Cursor a little while ago on my own, but it wasn’t until very recently, when I read a colleague’s post about a project using Cursor to assist his Obsidian workflow, that I was inspired to start using it.
I’ve since used Cursor for a couple of different projects, but the one I’ve learned the most from is a dashboard for stats. It involved a large amount of work, far more than I thought it would, but it has been educational in many ways.
To start with, I very quickly learned the benefits it offers, but also many of its limitations and how to work around them.
If you’re just starting out with it, Cursor works in a fairly straightforward way and looks much like a lot of modern IDEs (Integrated Development Environments): a sidebar for your folders and files, a tab bar for your open files, a code editor, an integrated terminal and so on. Where it goes the extra mile is the incorporation of both an AI Chat and the more recent Composer, which let you talk to the AI about your code and files.

While Cursor has the Chat feature, I’ve never gotten around to using it much, because Composer, unlike Chat, can edit and create multiple files. I also found that Chat forced me to reference my files far more often, whereas Composer can search for things on its own if you ask it to find something.
I’ve also found that many of the great tips about Cursor usage found online, especially in the usual places like Stack Overflow and Reddit, are accurate and helped me get more comfortable with it. Here are a few key things I learned along the way.
Provide a running log of changes
As my project grew and grew in size, so did the volume of questions I was throwing at the chat, not to mention the copious copy/pastes from browser console logs and other things I needed it to know. This meant repeatedly creating new chats, but the AI has no knowledge of previous chats. So every time I started from scratch, I had to teach it everything I had told it before, which quickly becomes not just annoying but a drain on valuable time and effort.
So I created a Markdown file I called debug_log, and for every major change, or whenever I felt we might be getting close to the conversation history limit, I’d ask it to store our recent conversations, findings, failures and remaining next steps in there. That way I could keep referring to the growing debug_log and avoid starting over from fresh. More than that, whenever I felt it was hallucinating or making mistakes (sometimes repeatedly so), I’d ask it to refer to the debug_log and we’d resume work from accurate information.
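As a sketch, a debug_log might look something like this. The headings and entries are my own invention rather than any Cursor convention, so structure yours however suits your project:

```markdown
# debug_log

## Ground rules for the AI
- Be factual and analytical; check this file before making decisions.
- Challenge my assumptions and requests if they conflict with the goals below.

## Project goals
- A dashboard that pulls ticket stats from the Zendesk API and displays them.

## Recent changes
- Switched ticket fetching back to the original endpoint after the
  alternative returned incomplete data.

## Known failures / dead ends
- The alternative search endpoint dropped fields the dashboard relies on.

## Next steps
- Add logging around the API calls so failures are easier to trace.
```

At the start of a fresh chat, pointing the AI at this file brings it up to speed in one step instead of many.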
You are not always right, so be ready to be challenged
One of the things that became apparent quite quickly is that the AI will take your word as absolute truth and run with whatever you ask it to do, even if it’s detrimental to what you are trying to achieve.
For example, I set out to use a specific Zendesk API endpoint to gather ticket data, and it was largely working. But I wanted some more specific data, and without thinking through the current implementation, I asked it to use a newly suggested approach involving a different endpoint. All of a sudden my data wasn’t correct and I wasn’t seeing the information I expected to see. When I asked the AI to check back over recent changes and the conversation history (only within the current chat), it was quick to identify the mistake and reverted the change.
But this proved that it will at times make changes willy-nilly that might not be ideal, and for someone who barely knows what’s going on half the time, that is foolhardy at best. So in my debug_log, I made sure to note at the top that I wanted the AI to be factual, analytical, and to rely on historical information, past changes and the debug_log before making decisions. I asked it very specifically to challenge my assumptions and requests if they don’t align with the goals we’ve already set out in the debug_log. Below you’ll find an example of it pushing back on my assumptions and seeking confirmation, which is a major improvement over it simply going ahead and making changes because you asked it to.

Rejections and checkpoints
This goes hand in hand with the above: when starting out, I feel that having written documentation of how you envision the project is a great starting point you can ask the AI to reference. It is quite different from your initial prompts, serving as initial inspiration and a guiding light, while something like my debug_log serves as the more canonical record of goings-on in the project.
I also learned the value of using short prompts, which avoids not just confusing the AI but also having parts of your request go unaddressed in its responses. This, combined with reference files, I’ve found to be the best way to work with the AI models.
But in some instances the initial things it suggests are not exactly what I wanted, and this can (and, given my inexperience, does) stem from how you phrase your prompts. So asking the chat whether its suggestion fits your laid-out plan or initial prompt is a great way to confirm it actually does what it’s meant to do.
More than that, you can also interrupt or reject any changes the AI puts forward. For each code change it makes, at the top of the file (as in the screenshot below) you can click the checkmark to accept the change or the X to reject it. If you’re uncertain about something, you can reject it and continue chatting.

At the bottom of the screenshot you’ll also note that for changes involving multiple files, you can accept or reject files individually, or accept/reject all changes at once. Since you can continue chatting with the AI at any time, you should be comfortable rejecting its changes and carrying on the conversation until you are satisfied. You can then refer it back to the changes it previously suggested and ask it to implement them; because the entire conversation remains available to you, referring back to things you discussed earlier works just fine.
Any change you make, whether to a single file or to multiple files, also creates a Checkpoint you can always return to if things didn’t work out or you want to revert specific changes. I’ve used this a number of times now, and it’s definitely nothing to be afraid of. Remember that restoring a checkpoint does not mean you’ve lost the history of changes you’ve made since; you can continue chatting with the AI to get things working.
Learn git/version control
During my own work I ended up trying a ton of different approaches and even pivoted between different ways of building my dashboard, and I learned very quickly the value of using Git and committing my changes. In my case I ended up committing to my “live” app on Heroku, but you don’t need to do the same; simply using GitHub works just fine.
Thankfully, there are many wonderful guides online for learning Git, avoiding common issues and making it work for you. While this is not absolutely necessary, having somewhere to store your changes as a sort of “save file” is critical. I’ve had to revert to previous Git commits on occasion simply because I made a few too many mistakes.
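As a minimal sketch of that “save file” workflow (assuming Git is installed; the directory, file and messages here are just examples):

```shell
# Create a throwaway project and make a first commit.
mkdir -p demo-project && cd demo-project
git init -q
git config user.email "you@example.com"   # commit identity, required by Git
git config user.name "Your Name"

echo "v1 of the dashboard" > app.txt
git add app.txt
git commit -q -m "First working version"

# Experiment and break something...
echo "an experiment that broke things" > app.txt

# ...then throw away the uncommitted change and restore the last commit.
git checkout -- app.txt
cat app.txt   # prints: v1 of the dashboard
```

Committing after each working state means every experiment has a known-good point to fall back to, and `git log` gives you (and the AI) a readable history of what changed when.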
One fun discovery I made, which later became a huge factor in fixing something, is that the AI can also check out any previous Git commits you’ve made. This extends to branches as well, which means you don’t have to do the (in my opinion) tedious work of running the commands and checking things out yourself.
Make it work for you
A number of times the AI would make changes and something would break or not work as I expected. I quickly learned the value of asking the AI to add logging, both to verify that things work and for testing purposes, so sometimes, to find out why things break, I have to check the logs. Initially this involved me manually checking Heroku logs and browser console logs, but over time I found that simply asking the AI to check the logs itself is the better solution!
The AI may also sometimes ask you for specific files or functions, but it has the capacity to search for files and run terminal commands on its own. If it gets lazy and asks you to check things out, simply remind it that it has those capabilities and ask it to do so.
Asking the AI to check logs often resulted in it identifying and trying to fix things on its own, frequently cycling through checking logs > fixing things > checking logs again > fixing things, with the only input from me being to confirm the terminal commands it needed to run.
I could go on much longer about the projects I’ve worked on using AI-assisted code, and I’ll likely post about them in the near future. But I think it’s safe to say that while “vibe-coding” or AI-assisted coding (whatever you want to call it) is challenging and has its quirks and pitfalls, the strength of these tools lies in the ability to get from an idea to a completed project with very little coding knowledge necessary.
It won’t go away any time soon, so getting to grips with it early is, in my opinion, an important skill to learn. With more employers and companies pushing for AI-assisted work and productivity goals, at least knowing how these things work gives you an edge, even if you don’t end up using them all the time, much like so many other skills.