
The Hidden Cost of Using AI Wrong

Everyone's using AI. Almost nobody's using it well. Here's what those bad habits are actually costing you — in time, money, and the ability to think.


There's a particular type of AI user who is, paradoxically, getting less productive the more they use it. You've probably met them. Maybe you are them. They open ChatGPT or Claude for every task — drafting two-line emails, Googling facts they already know, generating first drafts they immediately trash. The AI is always open. The output is always mediocre. The time saving is largely imaginary.

This is the hidden cost nobody talks about. Not the obvious mistakes — the hallucinated facts, the robotic prose, the AI-that-sounds-like-AI. Those are easy to spot. The expensive mistakes are the ones that feel like productivity.

The Dependency Trap

When a tool does something for you often enough, you forget how to do it yourself. That's fine for things you genuinely don't want to do — nobody weeps over outsourcing their tax filing. But creative thinking, strategic judgment, and clear writing are not things you want to atrophy. They're the things that make you good at your job.

AI is seductive precisely because it's fast. Ask it to write something and you get something back in seconds. The problem is that "something" is usually a statistical average of everything similar that's ever been written. It's not wrong, exactly. It's just not yours. And "not wrong" is a very low bar.


The dependency trap closes slowly. First you use AI to speed up the boring bits. Then you use it to get started when you're stuck. Then you use it to do things you could do yourself, but it's easier not to. By the time you notice the pattern, your first-draft instincts have gone soft. The blank page is scarier than it used to be.

What Bad Prompting Actually Costs

Most people treat AI like a vending machine. You describe what you want, approximately. The machine produces something, approximately. You accept it, approximately. Then you spend twenty minutes editing it back to what you would have written anyway — but with the structure of whatever the AI assumed you meant.

The numbers bear this out: 68% of AI-drafted content requires significant editing before use. Editing bad output takes longer than writing from scratch. And roughly twenty minutes are lost, on average, per poorly-prompted task.

This isn't the AI's fault. A prompt like "write me a blog post about productivity" is genuinely ambiguous. The AI doesn't know your voice, your angle, your audience, or what you've already said on the subject. It makes the most generic possible assumptions and produces the most generic possible result.

The fix isn't a better AI. It's a better brief. Tell it who you're writing for. Give it the argument, not just the topic. Paste in examples of your own writing. Explain what you're not trying to say. A three-minute investment in the prompt saves twenty minutes in the edit — and produces something actually publishable.

Quick Fix

Before prompting, write one sentence: "The point of this is [X], for [audience], and it should feel [tone]." Paste that into your prompt. Watch the output quality jump immediately.

The Real-Time Illusion

AI doesn't know what happened last week. It doesn't know your specific context. It doesn't know your client's preferences, your brand guidelines, or the industry conversation you've been following for five years. Every time you treat it like it does — like a colleague who's been in the room — you end up correcting errors that shouldn't have been there in the first place.

This is especially damaging for research. AI is very good at summarising what is broadly known about a topic. It is not a substitute for actual research. It will confidently tell you things that are plausible but wrong, cite sources that don't exist, and miss the specific nuance that your work requires. Use it to get oriented. Then go and actually look things up.


Using It Well, Actually

None of this means AI isn't useful. It's extraordinarily useful — for the right things, used the right way. The question is whether you're deploying it as a thinking accelerant or a thinking replacement.

The best use of AI I've found is as a sparring partner. Not "write this for me," but "here's what I'm thinking — what am I missing?" Not "generate ideas," but "I have these three ideas, which is strongest and why?" You stay in the driver's seat. The AI pushes back, fills gaps, and forces clarity.

The second-best use is handling genuine drudge work: reformatting data, summarising lengthy documents you need to scan but not study, generating boilerplate you'll heavily customise. Tasks where "approximately right" is actually fine, because the output is a raw material, not a finished product.


The work that makes you valuable — original thinking, taste, judgment, the ability to read a room — none of that is in the model. The model learned from the past. Your value is in navigating the present. Keep those skills sharp. Use the tool. Don't let the tool use you.

