<aside> 🔥

Prompting is asking the right question

</aside>

What will this blog cover?

  1. Why should you use GPT to code?
  2. How do I use it to generate most of the starter code for new web-dev projects?

I also wanted to cover ML training, but this is getting too long, so maybe later.

Starting with the why

A team of 10x engineers is better than a single 10x engineer, because the work can be distributed (in other words, the lead can delegate some tasks to others and focus on a single task themselves).

Delegation is the key. And delegating to someone smart is the only way to make it work.

Before GPTs, computers helped in coding via autocomplete and Language Server Protocol (LSP). Now, with GPT, we can generate way more code, but it’s dumb. Plain GPT is like an average high school programmer. Getting it to a Jr. Dev or peer level is where “prompt engineering” becomes essential.

Think about your best pair-programming partner. You speak the same language; it’s like they complete your line of code. You can start it, and they will finish it. That’s because they know a lot about you: how you think, how you code, what your go-to approaches to solving problems are, how you like your code, what upsets you, what checks are in place, etc.

Now, imagine you are curating that information yourself. It’s a bit of an upfront cost in terms of effort (though I’m pretty sure Cursor will soon add a memory feature that learns how you code and provides personalized outputs from the get-go). This is where your prompts start getting long. Like, really long.
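As a sketch of what that curated context might look like, here is a minimal example of collecting your coding preferences into one reusable system-prompt prefix. The specific preference entries are invented for illustration; yours would come from actually writing down how you like to work:

```python
# Sketch: curating personal coding preferences into a reusable
# system-prompt prefix. The preference entries below are
# illustrative placeholders, not recommendations.

PREFERENCES = [
    "Use TypeScript with strict mode for all web-dev code.",
    "Prefer small, pure functions; avoid classes unless state is shared.",
    "Always include error handling for network calls.",
    "Follow the project's existing naming conventions before inventing new ones.",
]

def build_system_prompt(preferences: list[str]) -> str:
    """Concatenate curated preferences into one long prompt prefix."""
    header = "You are my pair programmer. Follow these conventions:\n"
    rules = "\n".join(f"- {p}" for p in preferences)
    return header + rules

prompt = build_system_prompt(PREFERENCES)
print(prompt)
```

In practice this list keeps growing as you notice new things your ideal pair programmer should know, which is exactly why the prompts get so long.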

While doing this, keep the “Principle of Asymmetry” in mind. I learned it in my Data Visualization class: the whole point of software is to create asymmetry between the effort a user puts in and the results they get out.

Okay, so up till now, we have covered the following things:

<aside> 📌

Humans are great surgical coders: they can change ten lines and introduce a whole new feature, increase throughput by 10x, or fix a significant bug. LLMs are not that. Or I haven’t yet figured out how to make them do it. The closest thing I have found is the Cursor TabTabTab thingy. The idea is somewhat similar: if you change something in one location, and the next thing that needs to be updated doesn’t require any new information (i.e., zero entropy change), the LLM should complete it. If it does require new information, then the tooling needs to change.

</aside>

<aside> 📌

My tooling is still not fully developed. I love the Cursor Tab thingy and want to build something like it, but for now, I don’t know how. Also, since I only started using LLMs heavily three months ago, there’s still a lot to learn. One thing I am sure of, though, is that LLMs are currently handicapped by tooling. They are smart; we just don’t know how to elicit it.

</aside>

<aside> 💡

WebDev