This is the conversation we need to be having. For years, we've been teaching students to be navigators on the ocean of information. But AI hasn't just added more water; it's introduced phantom islands, shifting currents, and siren songs that sound incredibly convincing. The old maps just don't work anymore.

One of the most effective things I've seen is flipping the script entirely. Instead of just being consumers, we turn our students into creators of misinformation, in a controlled way, of course. We give them a topic and have them use an AI tool to write two descriptions: one that's as neutral as possible, and another that's heavily biased toward a specific viewpoint. Then, as a class, we break it down. What words were changed? What facts were omitted? What subtle emotional language was used?

It pulls back the curtain. Suddenly, they're not just spotting "fake news"; they're understanding the mechanics of how it's built. They're learning that the most convincing misinformation isn't built on outright lies, but on the careful curation of truth. It's a skill that's going to be absolutely essential for them, and honestly, for all of us.
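For anyone who wants to run this exercise programmatically (say, in a class that already does a bit of scripting), here's a minimal sketch. It assumes an OpenAI-style chat completions client; the topic, model name, and prompts are all illustrative, and any LLM tool your students already use would work just as well.

```python
# Sketch of the "two descriptions" exercise: generate a neutral and a
# deliberately slanted description of the same topic, then compare them
# side by side in class.
# Assumption: an OpenAI-style client with OPENAI_API_KEY set; swap in
# whatever AI tool and model your classroom actually uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPIC = "a proposed bike lane on Main Street"  # hypothetical example topic

PROMPTS = {
    "neutral": (
        f"Write a 150-word description of {TOPIC}. "
        "Be as neutral and factual as possible; avoid loaded language."
    ),
    "biased": (
        f"Write a 150-word description of {TOPIC} that subtly favors "
        "opponents of the project. Do not state any falsehoods; rely on "
        "selective emphasis, omission, and emotionally charged wording."
    ),
}

def generate(prompt: str) -> str:
    """Ask the model for one description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    versions = {label: generate(prompt) for label, prompt in PROMPTS.items()}
    for label, text in versions.items():
        print(f"--- {label.upper()} ---\n{text}\n")
    # In class: diff the two texts, list the omitted facts, and highlight
    # the emotionally loaded word choices.
```

The side-by-side output is the whole point: students see that both versions can be "accurate" while one quietly steers the reader.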
My question for the group is this: What's the most subtle piece of AI-generated bias you've seen a student successfully identify?