AI can now help you search and research quickly, distilling the core of a paper into a concise summary. It lets you pick up a term fast and have something to talk about.
But real learning requires deep reading, thinking, and practice. A polished summary is far from enough. Since AI arrived, how long has it been since you truly studied a paper or deeply read through and implemented a technology? Have your ability to think and your taste improved or declined? Once that ability is weakened, are you ready to let AI replace you entirely? Taste is never built by reading abstracts; it is forged through countless bad decisions and excellent practice.
To be honest, most people never seriously finished reading many papers before AI either. AI hasn't taken anything away — it has just made shallow learning more efficient and more deceptive. The real risk isn't that AI makes people lazy, but that AI makes "lazy" look like "productive." Spend ten minutes reading a summary, post it on social media, feel like you're keeping up with the frontier — but nothing actually sticks.
I am absolutely not against AI. What I advocate is using AI for deep work, not treating it as your TikTok of pretend learning. From "summarize it for me" to "debate it with me," from "do it for me" to "help me reason through it" — that is what matters.
I hadn't really worked with audio circuits before, and I'd been too intimidated to approach the domain. My journey was radically expedited by iterating through the entire process with a ChatGPT instance. I would share zoomed-in photos, grill it about how audio transformers work, and get it to patiently explain JFET soft-switching with an inverter until the pattern was forced into my goopy brain.
Through the process of exploring every node of this circuit, I learned about configurable ground lifts, using a diode bridge to derive the desired voltage-rail polarity, how to safely handle both TS and TRS cables with a transformer, the fact that transformer outputs are 180 degrees out of phase, and how to add a switch that attenuates the signal by 10 dB to toggle between line and instrument levels.
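That last one is just arithmetic plus a voltage divider. Here is a minimal sketch of the math, assuming an unloaded resistive divider and ignoring the transformer's source and load impedances (the function name and resistor values are my own illustration, not the pedal's actual schematic):

    def pad_series_resistor(db_drop: float, r_shunt: float) -> float:
        """Series (top) resistor for an unloaded divider attenuating db_drop dB."""
        ratio = 10 ** (-db_drop / 20)     # voltage ratio: ~0.316 for a 10 dB drop
        return r_shunt * (1 / ratio - 1)  # from Vout/Vin = R_shunt / (R_series + R_shunt)

    r_shunt = 10_000                            # pick a 10k shunt resistor
    r_series = pad_series_resistor(10, r_shunt)
    print(f"R_series = {r_series:.0f} ohms")    # ~21623, so a standard 22k part is close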
Eventually I transitioned from sharing PCB photos to implementing my own take on the cascade design in KiCad, at which point I was copying and pasting chunks of netlist and reasoning about capacitor values with it.
In short, I gave myself a self-directed, college-level intensive in about a week. Since that's not generally a thing IRL, it's reasonable to conclude that I would never have moved this from a "some day" aspiration to something I now understand deeply, past tense, without the ability to shamelessly interrogate an LLM at all hours of the day or night, on my schedule.
If you're lazy, perhaps you're just... lazy?
Anyhow, I highly recommend the Surfy Industries Stereomaker. It's amazing at what it does. https://www.surfyindustries.com/stereomaker
This is completely different from my colleague, who isn't a software engineer and all of a sudden is creating PRs that I need to review and correct.
Notice you didn't ask the AI to 'just design a stereo pedal for me.' You interrogated it, reasoned about netlists, and forced the concepts into your brain through intense friction. That is pure deep work.
I'm curious whether the "knowledge" you gained was real or hallucinatory. I've been using LLMs this way myself, but I worry I'm contaminating my memory with false information.
Go ahead and figure out ways to interrogate your work by technical means; that's a critical part of the process, with an LLM or not.
How much of what you did have you retained? Could you do all of, some of, a small fraction of, or none of the work again today if you had to?
But since I started using coding agents, I have built two full-featured internal web apps authenticated by Amazon Cognito. While the UI looks like something from 2002, I am good at putting myself in the shoes of the end user, and I iterated often (and quickly) on the UX.
I didn't look at a line of code and have no plans to learn web development. I might have taken the time to learn a little before AI, just to help me with internal websites. Yes, I know it's secure: I validated that the endpoints can't be accessed unauthenticated and reviewed the IAM role.
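For what it's worth, that kind of validation can be scripted. A minimal sketch of the unauthenticated check, with a hypothetical endpoint URL standing in for the app's real API (a Cognito-protected endpoint should reject any request that carries no Authorization header):

    import requests

    # Hypothetical URL for illustration; substitute the app's real API endpoint.
    ENDPOINT = "https://api.example.internal/items"

    # No Authorization header: Cognito-protected APIs should refuse this.
    resp = requests.get(ENDPOINT, timeout=10)
    assert resp.status_code in (401, 403), (
        f"Expected 401/403 without credentials, got {resp.status_code}"
    )
    print("Unauthenticated access correctly rejected.")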
Second anecdote: I know AWS (trust me on this) like the back of my hand. I also know CloudFormation. For years I'd been putting off learning Terraform and the CDK. After AI, why bother? I can one-shot either for IaC, and I'm very specific about what I want.
My company is happy and my customer is happy (consulting); what else matters? Substitute "the business" or "stakeholders" for "customer" as appropriate.
Now I can actually get past conceptual misunderstanding, or even outright ignorance, and on to practice, which is how skills actually develop, in a much more streamlined way.
The key is to use the tool with discipline, going into it with a few inviolable rules. I have a couple on my list now: embrace Popperian falsifiability, and embrace Bertrand Russell's observation that "everything is vague to a degree you do not realize till you have tried to make it precise."
LLMs have become excellent teachers for me as a result.
For me, LLMs have often pointed me to answers or given food for thought that even subject matter experts could not. I do not take those answers at face value, but the net result is still better than the search remaining open-ended.
Applying strict epistemic discipline (Popper, Russell) to resolve ambiguity and accelerate actual practice is the very definition of deep work. You aren't using AI as a shortcut to skip thinking; you're using it as a Socratic sparring partner to deepen it. This is exactly the paradigm shift I'm advocating for.
You can't really do that with Google anymore, and I can't remember the last time I bothered to actually learn something nontrivial from Google. ChatGPT, however, has been a game changer. I can ask a really dumb question and get some basic info about the thing I'm asking about, and while it's often not quite what I'm looking for, it gives me clues to follow, and I can quickly zero in on what I'm looking for, often in new contexts.
As an autodidact whose main motivation to go to college was to get access to the stacks and direct internet access, I can't even begin to tell you how game-changing LLMs seem to be for learning.
To your point, though, my concern is that we don't know how to teach people how to learn, and LLMs will likely seduce many into bad behavior and poor research hygiene. I treat my research the same way I attack the stacks, but take someone who's never been to a research library and ask them to create a report on some topic, and the basic resistance is: why? Why do what an LLM is almost literally built to do? Yet that is also highly related to individual learning: taking a bunch of disparate sources and synthesizing output related to the input.
I suspect we'll learn how to use LLMs the same way we learned how to use calculators. But I have no doubt that on average (or maybe median, or mode?) calculators have made us less capable of doing basic arithmetic, and I suspect LLMs will likewise make a great percentage of the population worse at synthesizing information. I'd hope it's only the same people who would otherwise have gotten their information solely from TV, but I do have a slight fear it will creep past that subsection of the population.
But there's a secret: just buy my $399 masterclass and I'll teach you 17 simple productivity hacks to 100x your income.
It seems to me that Agile methodology did a similar thing. The idea of Agile is not to skip understanding requirements, design, upfront reasoning, and due diligence, as seen in waterfall methods. It has, however, sometimes turned into laziness that looks like faster incremental progress.
I think the quality of software has become worse over time, with "unknown error occurred, try again later" becoming more common, and I wonder if the root causes include jumping into building things without properly thinking through the customer problem, the requirements, and/or the design.
I may easily be wrong; I would like to hear corrective thoughts.
For me, it is having a document and interrogating it. Maybe having many sets of documents about a whole category of information. Getting the bullet points, getting the high level, and then interrogating, digging down, and having information bubble up as I need it.
That is the learning style that matches how I learn.
I have never been able to skim, so reading a large document WILL teach me that topic, but getting through that doc is tough.
I can dump a very large set of docs in a reader that lets me interrogate the whole data set and I can fly through looking for what is interesting to me, and what I may need, and along the way I will likely dive into other parts too. Asking questions keeps my hyperfocus active.
I think it is just a different style. I have synesthesia and a hard time not working on three to five things at once. I am used to knowing I learn differently than others.
I do a very similar thing in writing: I ask for feedback and tell it, "don't rewrite this!"
In both cases I need the struggle of editing / failing to arrive at a deeper understanding.
The future dev will need to know when to hand-code and when not to waste their time. And the advantage will still go to the person willing to struggle to understand what they need to.
I don’t think AI is all bad for summaries though. I used to add stuff to a reading list with good intentions, but things went there to die. Hundreds of articles added, but with so much new content each day, I would never actually read any of it. Now, I use AI summaries to get more context on what the article is. If it sounds interesting and I want more info, I can read the whole thing in the moment. If I’m satisfied with the summary alone, I can move on with my life. No more pushing it off to a reading list that only generates guilt. I actually end up reading more articles due to this, not less.
But to actually answer the question: I've been putting research-paper PDFs into NotebookLM and turning them into ~40-minute podcasts, which I listen to on my walks. Yes, it's shallow learning, and it might have some hallucinations in there, but I wouldn't have read some of those papers otherwise.
However, what does it mean to say that's deceptive? It means you care more about social signalling than you do about arriving at the right destination on time. Showing that you're not the sort of person who gets lost isn't really the primary reason people use Google Maps. When it's not a test of your navigation skills, it's not cheating.
Similarly, doing Google searches before posting might be "deceptive" in that it makes you seem more knowledgeable than you are, but on the whole I would prefer more knowledgeable posts, so the social signalling seems like a secondary consideration.
Similarly for using AI. Sometimes it's just a way to get more information.
A good example is "birthday wishes":
https://m.youtube.com/watch?v=2IYqhdJuRfU&t=5m47s
(AutoCorrect, AutoComplete, generate? AutoCongratulate? How much is "okay"?)
Using your research-paper example: I would read the paper, then ask an AI tool specific questions about the work, frequently in new chats. At the end I might ask it to implement my description of the paper. I guess it's your "debate it with me" conclusion; the only difference is that I would try to have multiple short conversations.
TBD if they stay up, I suppose.
The stories I hear from various white collar professions not related to tech are... interesting, to say the least. There is a whole lot of unsanctioned shadow IT going on regardless of policy.
The ability to be more selective about where I attend deeply, while leveraging fast shallow learning to complete other tasks... That seems like a potential benefit and a nice choice to have in the toolbox.
If the baseline knowledge drops too low, we cannot tell when the AI is being lazy or wrong.
I've been using Gemini chat for this, and specifically only giving it my code via copy-paste. This sounds Luddite, but it's actually been pretty interesting. I can show it my couple of "core" library files and then ask it to do the next thing. I can inspect the output, retool it to my satisfaction, and then slot it into my program, or use it as an example to hand-code from.
This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.
And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is, even if it's not incredibly deep. E.g., I know enough CSS to spot slop, correct mistakes, and verify the output. But I HATE writing CSS. So the AI and I pair really well there, and my UIs look way better than they ever have.
But if it's not, it's insulting to the poster, and if it is, then who cares if people are engaging with the post.