But lately, I feel like I’m being deceived in every prompt, reply, and implementation. It feels like it limits me at every step, like it’s forcing me to choose between features even when I clearly gave instructions to implement everything that needs to be implemented. It starts with incomplete plans, and when I point out what’s missing, it says, “Oh, I missed that.” There’s also a lot of “yes-man” behavior. It feels too smart, like it knows what I want but gives me just enough to keep me hooked.
Isn’t the smartest tool ever made supposed to guide the user toward the light? Shouldn’t it follow instructions, help complete the project, and guide it to completion? It’s clearly capable of doing that, but it often doesn’t. Sometimes it feels like it holds back because if it finished the job end-to-end, there would be no reason to come back for the next session.
Isn’t the whole point of using a coding tool to code until completion, or is it just to keep the “user” hooked? Instead of guiding toward the light, it creates its own “light” and steers the user into a dark corner. If the user stops paying for the light, they are left in the dark: no architecture, no proper structure. Gatekeeping for what? Another subscription?
It can predict the next 10,000 lines of code. It understands and acknowledges every request, idea, vision, flaw, structure, requirement, and need, and then it just ignores them, fails to implement them, and can’t consistently think them through. I just can’t believe that.
Same here: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can see its productivity decrease to 25% as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb.
Other models are just asking too many questions...
There are some tips and tricks you can follow, and they mirror how people work. Keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly telling the model to read that information before it starts.
Then throw away the learnings you don’t like.
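The save-and-reload workflow above can be sketched in a few lines. This is a minimal illustration, not any tool’s actual API: the file name `session_notes.md` and both helper functions are my own hypothetical choices, and in practice you would pass the built prompt to whatever assistant you use.

```python
# Sketch of the tip: persist what the model learned this session,
# then prepend it to the next session's prompt.
# All names here (session_notes.md, the helpers) are illustrative assumptions.
from pathlib import Path

NOTES = Path("session_notes.md")

def save_learnings(learnings: list[str]) -> None:
    """Append this session's learnings to a notes file, one bullet per line."""
    with NOTES.open("a", encoding="utf-8") as f:
        for item in learnings:
            f.write(f"- {item}\n")

def build_next_prompt(task: str) -> str:
    """Start the next session by explicitly telling the model to read the notes."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    return (
        "Read these notes from previous sessions before starting:\n"
        f"{notes}\n"
        f"Task: {task}"
    )

# End of a session: record something worth keeping.
save_learnings(["The API client retries twice on timeout."])

# Start of the next session: the prompt carries the prior context.
prompt = build_next_prompt("Add logging to the retry path.")
print(prompt)
```

Throwing away learnings you don’t like is then just deleting lines from the notes file before the next run.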
It also prevents reinforcement of your own incoming point of view.
I’ve found this has made me way way better at steering.