LLMs for auto-didactic learning
Don't delegate your thinking to LLMs; use them to enhance it.
I've been using LLMs considerably for various work and personal tasks over the last few months. At first, I was skeptical. Then, I became curious. And now, frankly, they are ingrained in my everyday workflow.
The problem with LLMs (or rather, the way I initially used them) is that they like to splurge. If you ask it, say, how web authentication works, it will tell you. If you ask it to code a website, it will do it. But this usually comes as one big piece of output and… you're done. It's no fault of the LLM that it has absolutely no context about your current knowledge level.
If you're just looking to get things done and/or know the domain well, then this is perfectly fine. But if you're looking to learn something new, or to expand your knowledge within a domain, it isn't necessarily helpful. I very much like using LLMs to learn about concepts, or to code things - but with a more thoughtful approach. That's what I want to explore in this post.
Auto-didactic learning
If you're a self-taught individual, you might be familiar with the concept of auto-didactic learning. Broadly speaking, this just refers to learning without any formal structure in place; your path can include videos, books, articles, and more.
What I think of when I talk about auto-didactic learning is more in line with what George Hotz describes when he talks about self-learning.
To me, auto-didactic learning is an endless Google search. You want to know how something works, so you Google it. Then you stumble upon another concept you haven't heard of, so you Google that. Eventually, by breaking everything down into small pieces, you can start understanding the primitives and concepts which underpin whatever you're trying to learn about.
So the goal here is simply to prompt LLMs in such a way that you can learn effectively from them, as opposed to letting them do your thinking for you.
The Approach
Everyone is different, and depending on the context (say, asking it to help code something in a domain you're unfamiliar with, or learning a new technical concept) you might want to take a different approach. But here are the general steps I use, with a rough sketch of how you might set this up after the list.
- Ask it to respond in small pieces/steps (if appropriate).
- Ask it to define any words/concepts you don't know.
- Frequently ask it questions to test your understanding and clarify anything unclear (e.g. "so what if x happened?", or "but what about y?").
- Repeat, drilling further down into the concepts as needed.
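To make this concrete, here's a minimal sketch of the approach as a standing system prompt driving a chat loop. It assumes the official OpenAI Python client; the model name and the prompt wording are illustrative choices on my part, not part of any prescribed setup - the same instructions work pasted into any chat interface.

```python
# A minimal sketch of the approach: a system prompt encoding the steps
# above, plus a loop that keeps the conversation history. Assumes the
# official OpenAI Python client; model name and prompt wording are
# illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a tutor helping me learn a new topic.
- Answer in small steps, one concept at a time.
- Define any term I might not know before using it.
- After each step, ask me a short question to check my understanding.
- Wait for my reply before moving on."""

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    user_input = input("> ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=messages,
    )
    answer = response.choices[0].message.content
    print(answer)
    # Keep the full history so follow-ups ("but what about y?") land
    # in context and the model can build on its earlier steps.
    messages.append({"role": "assistant", "content": answer})
```

The plumbing isn't the point; what matters is the standing instruction to go step by step and to check understanding, which mirrors the list above.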
This mirrors the process you usually follow browsing the web, except that on the web you can't really "test" your knowledge. There are forums where you can ask people whether you're understanding something correctly, but with LLMs you get instant feedback and iteration. And, as with humans, you should probably check the LLM's output for accuracy too.
Trust, but verify
The final note I have is the adage "trust, but verify". Of course, depending on the importance of the topic, you may want to verify everything.
But generally speaking, I trust the output, and if I have a gut feeling that something seems wrong, or I'm unsure about it, I fall back on other sources. A lot of current chatbots, such as ChatGPT and Perplexity, also offer features that link directly to sources.
Conclusion
There is a lot of talk at the moment about using LLMs to complete tasks, but less about how you can use them to learn new topics.
But the fact of the matter is, LLMs are really good at answering your questions quickly - and I'm saying this as someone with a personal blog who writes stuff online and puts it out there in the hopes that some people will get some value out of it.
Getting those instant responses when you're trying to learn something new can really help the process, in my opinion, even if you end up looking for specific blog posts or videos to supplement it.
I hope you enjoyed!