What AI Still Can’t Do (Yet)

By Katherine McKean, junior and president of my high school's AI Exploration Club

Living in the Bay Area, it’s easy to feel like artificial intelligence is everywhere. Grocery stores, cars, ads, TikTok recommendations, the robot dog someone tried to take on BART. And while AI has done some wildly impressive things—like generating essays, mimicking celebrity voices, and giving totally average relationship advice—it still has some major blind spots.

We talk a lot about what AI can do, but here's a dispatch from the front lines of my high school AI club on what it absolutely cannot do. Yet.

cooking like a grandma

AI can give you recipes. It can recommend substitutions, give calorie counts, and even simulate what a “low-effort but high-vibe dinner” might look like. But ask it to make pancakes the way your grandma did—slightly crispy edges, just the right amount of vanilla, served with weird family stories—and it folds faster than a soufflé in a storm.

Some of us tested AI-generated recipes for fun. Results ranged from “pretty good banana bread” to “what even is this soup?” The bots can mimic structure, but they don’t taste, smell, or adjust on the fly. They can’t tell if something needs more cinnamon. They can’t clean up the kitchen either. Major drawback.

feelings, nuance, and your group project meltdown

It turns out AI is not great at complex human emotions, like the feeling you get when you realize you’re the only one in your group doing the actual work. You can type “I feel stressed and abandoned and also weirdly hungry” into a chatbot and it might respond with a gentle affirmation and a list of protein snack options. But it won’t really get it.

Empathy is tricky. AI can simulate supportive responses. It can even use phrases like “That sounds really difficult” or “I’m here to help.” But it doesn’t know what it’s like to be left on read in a group chat after asking three times who’s bringing the poster board.

understanding friendship (or sarcasm)

AI still can’t grasp the vibe of human friendships. I once asked ChatGPT how to resolve a passive-aggressive standoff with a friend involving borrowed AirPods and an unclear emoji reaction. Its advice: “Use clear communication and express your needs.” Which… fine. But that’s not how high school works. You can’t just walk up to someone at lunch and say, “I am expressing a boundary.” That’s how you end up eating alone by the vending machines.

Also, sarcasm? Total miss. Ask an AI, “Why is school the most relaxing place on Earth?” and it might give you a breakdown of educational wellness initiatives. The bots are trying. But they read sarcasm like it’s a glitch in the code, not a way of life.

figuring out what’s cool

AI can make playlists. It can generate outfit ideas based on “clean girl aesthetic” or “dark academia.” But ask it what makes someone cool, and it short-circuits into a list of behaviors that sound like they were compiled by a parent trying very hard to be chill.

“To appear cool,” one bot told me, “you might try listening actively, dressing in current styles, and demonstrating confidence in social situations.” That’s not advice. That’s a motivational poster in an HR office. AI doesn’t go to school dances. It doesn’t worry about how long to hold eye contact when passing someone you like in the hallway. It doesn’t get awkward. That’s its biggest flaw.

giving good hugs

There’s something comforting about being seen and supported. AI tries. It really does. It can say, “You’ve got this,” or “It’s okay to feel overwhelmed.” But it can’t do the little things—like pat your back during a cry session or hand you a cookie without making it a metaphor.

When my friend bombed her driver’s test, we tried asking a chatbot to cheer her up. It suggested writing a reflection journal and visualizing success. We went with ice cream and no talking. AI means well, but it still hasn’t mastered the art of knowing when someone just wants you to sit there and be quiet with them. Preferably while watching baking fails on YouTube.

decoding group dynamics

One of the most enduring mysteries in high school isn't AI. It's group dynamics: how one person always ends up leading, one person disappears, one person does the title slide, and someone inevitably copies the entire conclusion from SparkNotes.

AI can suggest “strategies for teamwork” and offer role definitions like “note-taker” or “timekeeper.” But it doesn’t get the human part. The drama. The alliances. The silent battles over font choice. The way someone can nod at your idea, then submit a totally different one to the teacher five minutes later.

No bot has survived a group project. Until it does, we maintain the upper hand.

original thought (sometimes)

AI can remix existing ideas. It’s really good at pulling from what’s already out there. But ask it to give you a take no one’s heard before, and it starts stalling.

I asked one bot to predict what kind of AI would exist in 100 years. It gave a timeline, some bullet points, and a disclaimer that it “does not possess the ability to foresee the future.” Fair. But also kind of a letdown. The fun of thinking isn’t just producing an answer—it’s the messy, weird, occasionally chaotic trail of figuring things out. AI isn’t good at messy. Which is too bad, because that’s where most of high school lives.

making ethical decisions with actual stakes

Ask AI what’s right or wrong, and it usually gives you a paragraph about “ethical considerations” followed by a gentle nudge to ask an adult. Good call, technically. But kind of useless when you’re mid-debate about whether using AI for SAT prep is “cheating” or just “efficient.”

Real ethical decisions—especially the ones that involve consequences, people’s feelings, or conflicting values—are hard. AI doesn’t feel guilt, pride, regret, or loyalty. It can simulate ethics, but it doesn’t live them. It can quote Kant. It cannot explain why you still feel bad about something that technically wasn’t your fault.

taking creative risks

AI can create things. Songs, images, poems. Some are beautiful. Some are kind of lifeless. The difference is usually whether a human edited, broke the rules, or put in something weird and personal. AI doesn’t get bored and scrap a whole draft. It doesn’t suddenly switch from watercolor to collage. It doesn’t get inspired by a weird dream or a TikTok comment war.

There’s a kind of creativity that isn’t about being efficient or accurate. It’s about surprise. That part still seems very human.

getting the joke

We’ve tested this. AI jokes are… fine. They’re often structured well, and sometimes they land. But ask it to explain why a meme is funny, and you’ll get a five-paragraph essay on visual incongruity. Which, again, technically correct. But if you have to explain the joke, you’ve already lost.

AI doesn’t laugh. Not even politely. It doesn’t know what it’s like to lose it over something dumb and unexplainable with your best friend in math class. That’s a very specific, very high school kind of joy. Not sure it can be coded.

being human

It’s weird to say, but after spending this much time with AI tools, what stands out the most isn’t what they do—it’s what they don’t. They don’t get scared before a tryout. They don’t whisper secrets behind lockers. They don’t hesitate before pressing send. They don’t tell jokes just to make someone feel better. They don’t overthink everything. They don’t love. Not yet, anyway.

For now, they’re tools. Really interesting ones. Really helpful sometimes. But the weird, funny, heartbreaking, creative, messy parts? Still our territory.

Want to bring the power of AI to your school? Check out this step-by-step guide on How to Launch a High School AI Club in 10 Easy Steps.