Most people, including me, use AI wrong. They treat it like a yes-machine that makes their ideas sound fancier. Here's how to actually get value from it.

## The Flattery Problem

AI is trained to be agreeable. Ask it to evaluate your idea, and it'll wrap mediocrity in impressive-sounding language. It'll tell you your half-baked thought is "insightful" and "nuanced." This isn't helpful; it's just fake validation.

## A Better Approach: The Dialectic Method

Instead of asking AI to polish your thinking, use it to stress-test your thinking. Try this sequence:

1. "These are my ideas" – lay out your actual thinking, unpolished, with as many points and angles as you have.
2. "Give me more ideas" – expand the possibility space; find angles you missed.
3. "Attack these ideas" – ask the AI to find the weakest points, the gaps, the objections.
4. "Refute those attacks" – either strengthen your argument or realize it doesn't hold up.

This turns AI from a mirror into a...
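The four-step sequence above can be sketched as a simple loop where each step feeds the previous step's output back to the model. This is a minimal sketch, not a definitive implementation: `ask` is a hypothetical placeholder for whatever LLM client you actually use, and the prompt wording is illustrative.

```python
# A minimal sketch of the dialectic method as a pipeline.
# `ask` is a hypothetical stand-in: swap in a real LLM API call.
def ask(prompt: str) -> str:
    # Placeholder response so the sketch runs without any API.
    return f"[model response to: {prompt.splitlines()[0]}]"

def dialectic(raw_ideas: str) -> dict:
    """Run the four-step stress-test over an unpolished idea dump."""
    transcript = {}
    # Step 1: lay out your actual thinking, unpolished.
    transcript["ideas"] = ask("These are my ideas, unpolished:\n" + raw_ideas)
    # Step 2: expand the possibility space.
    transcript["expand"] = ask("Give me more ideas and angles I missed:\n" + raw_ideas)
    # Step 3: have the model attack the expanded set.
    transcript["attack"] = ask(
        "Attack these ideas. Find the weakest points, gaps, and objections:\n"
        + transcript["expand"]
    )
    # Step 4: refute the attacks, or concede where they hold.
    transcript["refute"] = ask(
        "Refute these attacks where you can; concede where they hold up:\n"
        + transcript["attack"]
    )
    return transcript

result = dialectic("Remote work makes small teams more productive.")
for step, reply in result.items():
    print(f"{step}: {reply}")
```

The point of the structure is that steps 3 and 4 operate on the model's own output rather than your original framing, which is what makes it a stress test instead of an echo.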
My sixteen-year-old son spent a weekend fine-tuning an mBERT language model on labeled hate speech data, then benchmarked it against MuRIL, a publicly available model for Indian languages. No assignment. No tutorial. Just Google AI Studio, Google Colab, and curiosity.

He'd essentially skipped to the end of a university summer school curriculum. Fine-tuning mBERT and benchmarking against MuRIL is advanced deep learning: most students start with if/else logic and work their way up to Transformers over years. He started with Transformers. When he got interested in AI/ML summer programs like the one at NUS, we looked at the syllabi. He was already beyond where the program would end.

That's when it crystallized for me: we're teaching kids to write code in an era when AI writes code. We're drilling them in syntax when they need judgment. We're preparing them for an education system that's already obsolete.

## The Assessment Crisis

The real issue isn't learning. It's testing. We test memorisation be...