Remember when Siri could barely set a timer without mishearing you?
Now, you can ask ChatGPT to write a novel, debug your code, and plan your vacation—all in one conversation.
Behind this leap is a new kind of intelligence: large language models (LLMs). But measuring how “smart” they are isn’t so simple. The bar keeps moving—and so must we.
In the past, chatbots mostly answered by looking up facts.
For example, if you asked, “What is the capital of France?” it would simply reply “Paris” because it remembered that from its training.
Now, if you ask, “If a pizza is cut into 8 slices and 3 are eaten, how many are left?”, the model doesn’t just recall a fact—it calculates the answer.
This is procedural knowledge: internalizing how to solve problems, not just what the answers are.
But now, new training methods help these models get even better. Not just because they’ve seen more data, but because they’re learning to think in code.
Training AI models on programming languages like Python supercharges their logical reasoning.
When AI studies code, it learns two things more effectively:
Logic: “If the discount is 20%, multiply the price by 0.8.” Easy.
Patterns: understanding how variables work, like price, discount, or expenses.
And that coding logic also helps it in non-coding tasks, like in storytelling (sequencing events with clarity) and budgeting (crunching numbers with precision).
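The discount example above is the kind of explicit, step-by-step logic a model absorbs from code. A minimal sketch (the function name and prices are just illustrations):

```python
def apply_discount(price: float, discount_pct: float) -> float:
    """Return the price after a percentage discount.

    A 20% discount means multiplying by 0.8: the kind of
    unambiguous rule that code spells out, and prose often doesn't.
    """
    return price * (1 - discount_pct / 100)

print(apply_discount(50.0, 20))  # 40.0
```

Nothing fancy, but notice how the code forces you to name every variable and make every step explicit. That is exactly the structure the model picks up on.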
Still, using AI isn’t always smooth sailing.
Sometimes you ask a simple question and get a weirdly off-base answer. So you rephrase. Then again. And again. And… welcome to the infinite loop of AI miscommunication.
That’s why “Prompt Engineering” is becoming a thing.
It’s the art of asking questions in a way the AI actually understands. Otherwise, you’re stuck in an endless game of AI telephone.
Clear inputs = Better outputs.
But prompt engineering sounds fancy. And for some, that’s the problem!
When people hear they need to master a bunch of advanced techniques just to get useful answers from AI, it can feel… discouraging. Especially for beginners. Nobody wants homework before the fun starts.
So, what to do? Well, keep reading…
You see, there are no secret cheat codes to talk to AI. No need to memorize a list of 20 “expert prompts.”
Sure, deep-diving into how LLMs work is great if you’re curious. But most people don’t need to go that far.
Just start using AI for stuff you care about. Ask it real questions. Give it real tasks. That’s how you get good. By tinkering.
It’s like learning to ride a bike. Reading a manual might help a bit, but you won’t really get it until you start pedaling.
The more you experiment, the more you’ll pick up on what works and what flops.
Eventually, you’ll get a feel for when the model nails it… and when it confidently spits out something totally wrong. (Which happens; it’s called hallucination, and it’s normal.)
With time, you’ll spot its sweet spots. Like:
Feed it a big chunk of code plus project context, and some LLMs can generate an entire file—or multiple—without breaking a sweat.
For medical Q&A, some LLMs land close to the right diagnosis about 80% of the time. (Not a doctor, but not useless either.)
Some LLMs are beasts at explaining complex engineering topics with clean, digestible examples.
So don’t overthink it. Play around. That’s the best way to unlock the real magic.
When it comes to giving background info to AI, the rule is simple: if you think you’ve given enough, give 10x more.
Most people start with a short question and a bit of context. But ChatGPT isn’t a mind reader—it won’t magically infer what you meant unless you spell it out.
Think of it like onboarding a new teammate. A really smart one… who also overthinks everything and occasionally gets lost in the weeds. You want to load it up with details so it doesn’t veer off track.
Give it the who, what, why, and how. What’s the goal? What’s already been tried? What are the constraints?
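One way to make that front-loading a habit is to fill in the who/what/why/how before you ever hit send. A rough Python sketch (every project detail below is a made-up placeholder, not a recommended template):

```python
# Front-load context before asking. All values here are
# hypothetical placeholders; swap in your real situation.
context = {
    "who": "a two-person indie game studio",
    "what": "a launch announcement for our puzzle game",
    "why": "drive wishlist signups before release week",
    "how": "friendly tone, under 150 words, no jargon",
    "tried": "a generic press release that got no traction",
}

prompt = (
    f"You are helping {context['who']}.\n"
    f"Task: write {context['what']}.\n"
    f"Goal: {context['why']}.\n"
    f"Constraints: {context['how']}.\n"
    f"Already tried: {context['tried']}."
)
print(prompt)
```

If any field feels hard to fill in, that is usually a sign you haven’t decided it yourself yet, which is exactly the clarity the model needs from you.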
As you get more comfortable, you’ll notice a pattern: at the start, prompts are long and specific. But as the conversation builds, you can go shorter and still get great responses—because the model is already up to speed from the earlier context.
That back-and-forth is where the real magic happens. So front-load the details, and let the dialogue evolve from there.
Specificity wins. Always.
Instead of saying, “Give me a unique marketing plan,” say, “Create a plan for a small downtown coffee shop targeting morning commuters—with fresh pastries and peak-hour discounts.”
See the difference? The more vivid the brief, the better the output.
Want even sharper results?
Drop in examples of what “good” looks like.
Outline the steps you want it to follow.
Correct it mid-way if it drifts.
And don’t be shy—this thing doesn’t take anything personally. You can ask for infinite revisions without the awkward “feedback sandwich.”
Try:
“Give me 3 tone variations.”
“List 30 ideas instead of 5.”
“Combine idea #8 with #3, but make it more Gen Z.”
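Those follow-up nudges work because the conversation carries its own context forward. A toy sketch of that loop, where `ask` is a stand-in placeholder, not a real API call; you would swap in whatever chat client you actually use:

```python
def ask(history, message):
    """Placeholder for a chat-model call: records the user message,
    returns a canned reply. Replace the body with a real API client."""
    history.append({"role": "user", "content": message})
    reply = f"[model reply to: {message}]"
    history.append({"role": "assistant", "content": reply})
    return reply

# Early prompts carry the detail; later ones can be terse because
# the accumulated history keeps the model up to speed.
history = []
ask(history, "List 5 campaign ideas for a small downtown coffee shop.")
ask(history, "List 30 ideas instead of 5.")
ask(history, "Combine idea #8 with #3, but make it more Gen Z.")
print(len(history))  # 3 user turns + 3 assistant turns
```

The design point is the `history` list: because each turn is appended, the third prompt can be one line and still make sense, which is why short follow-ups work so well mid-conversation.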
You’re the editor. Steer the ship. Push it in new directions, remix what works, and let it surprise you. That back-and-forth is where the weirdly brilliant stuff starts to emerge.
A lot of people struggle with prompting because they treat AI like a fast-food counter—place order, get result, move on.
But high-quality AI prompts aren’t one-liners. They’re blueprints of your own thought process.
You see, AI tends to give bland, generic answers. Your brain, on the other hand, is chaotic, specific, and weird in all the best ways.
The best prompters don’t just ask better questions. They know exactly how they think before they ask anything at all.
One of my friends used to pack for trips by throwing random stuff into a suitcase and hoping for the best. Total squirrel energy. But pro travelers mentally walk through the whole trip. What happens when they land? What’s the weather? What will bedtime look like? That little internal simulation ensures they don’t forget essentials—like a charger or a lucky pair of socks.
Prompting is the same. Great prompts are mental rehearsals turned into text.
Start by asking yourself:
How would I solve this?
What steps would I take, in what order?
What nuance might the AI miss?
What am I assuming without saying?
What makes my approach unique—not just “expert,” but specifically me?
This forces clarity in your own thinking, which makes the AI way more useful.
If you treat AI like a vending machine, you’ll get vending machine results. Instead, treat it like a collaborator. Test it. Push it. Refine your thinking through the conversation.
When you prompt with intent, the combo of human insight and machine speed becomes a creativity cheat code.
The future isn’t man vs. machine.
It’s man + machine, co-creating the next big thing.
Cheers,
Teng Yan & Ravi