🤯

WTF is AI?

You know it's a big deal. You see the headlines. But what does it all mean? Let's break it down with pretty graphics and minimal BS.

For humans who aren't robots · Also funny if you're a dev · 3D graphics included

This is What's Happening Inside

Neural networks process data through layers of mathematical functions. Input goes in, gets transformed through multiple layers (the "neurons"), and predictions come out. It's like a very sophisticated pattern-matching machine built from millions of tiny math operations.
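That "layers of math" idea fits in a few lines of Python. Here's a toy sketch (numpy assumed) with random, untrained weights — the shape of the machinery, not a real model:

```python
import numpy as np

def relu(x):
    # Each "neuron" asks: is this input worth passing along?
    return np.maximum(0, x)

# Tiny made-up network: 3 inputs -> 4 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

x = np.array([1.0, 0.5, -0.2])   # input goes in...
hidden = relu(x @ W1 + b1)       # ...gets transformed by a hidden layer...
output = hidden @ W2 + b2        # ...and a prediction comes out
print(output.shape)              # (2,)
```

Real models do exactly this, just with billions of weights and many more layers.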

The Concepts, Decoded

Click any card to expand. Prepare for enlightenment (or at least mild understanding).

WTF is AI Anyway?

It's basically spicy autocorrect

AI (specifically, LLMs) boils down to massive pattern-matching machines. They learned from billions of text examples and now predict which word comes next. That's it. That's the magic. It's autocomplete on steroids with a PhD in everything.

Fun fact: GPT-4 reportedly has 1.76 TRILLION parameters. That's roughly the number of stars in 10 Milky Way galaxies. All to tell you how to make a sandwich.

Neural Networks

Lasagna of math

Imagine a lasagna where each layer transforms information. Input layer receives data, hidden layers do the magic math (matrix multiplication, baby!), output layer gives you an answer. Each 'neuron' is just a fancy math function that says 'how much do I care about this input?'

The 'learning' part? Adjusting billions of tiny weights until the model stops being confidently wrong. It's like training a very enthusiastic but confused puppy.
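That "adjusting tiny weights" loop is just repeated nudging. A toy sketch with one made-up weight, trying to learn that the answer is 3 × the input (the real thing does this for billions of weights at once):

```python
# Toy "learning": nudge one knob until the guess stops being wrong.
weight = 0.0
lr = 0.1  # learning rate: how big each nudge is

for step in range(100):
    x, target = 2.0, 6.0      # training example: input 2, correct answer 6
    guess = weight * x        # the model's (initially terrible) prediction
    error = guess - target    # confidently wrong by this much
    weight -= lr * error * x  # nudge the knob to shrink the error

print(round(weight, 3))       # lands on ~3.0 — the puppy finally sat
```

Swap "one weight" for "1.76 trillion weights" and "one example" for "the internet," and that's training.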

WTF are Tokens?

Words, but smaller and weird

Tokens are how AI 'sees' text. A token can be a word, part of a word, or even punctuation. 'Hello' = 1 token. 'Hello, world!' = 4 tokens. Why? Because AI doesn't read like humans - it chunks text into bite-sized pieces. Context window = how many tokens the AI can remember at once.

Claude can handle 200K tokens. That's like reading an entire novel and still remembering the first sentence. Try that after coffee.
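You can fake the chunking with a toy tokenizer. Real ones (BPE) learn their chunks from data and split words in weirder places, but the idea is the same:

```python
import re

def toy_tokenize(text):
    # Toy tokenizer: grab runs of word characters, or single punctuation marks.
    # Real tokenizers learn subword pieces, but the chunking idea is identical.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Hello"))          # ['Hello'] — 1 token
print(toy_tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!'] — 4 tokens
```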

WTF do the Benchmarks Mean?

Standardized tests for robots

MMLU = Massive Multitask Language Understanding (57 subjects from math to philosophy). HumanEval = Can it code? GPQA = Graduate-level science questions. GSM8K = Grade school math (surprisingly hard for AI). These measure if the AI is actually smart or just good at sounding smart.

DeepSeek R1 lands in the 96.3rd percentile on Codeforces — it out-codes roughly 96% of competitive programmers. But ask it to draw ASCII art and it has an existential crisis.
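Coding benchmarks like HumanEval work roughly like this: the model writes a function, and a harness runs it against hidden tests. A miniature sketch (the "model answers" below are hypothetical hard-coded strings, not actual AI output):

```python
# Hypothetical model-generated answers for two tiny problems:
model_answers = {
    "add": "def add(a, b):\n    return a + b",       # correct
    "is_even": "def is_even(n):\n    return n % 2",  # confidently wrong
}
# Hidden tests the model never saw:
tests = {
    "add": lambda ns: ns["add"](2, 3) == 5,
    "is_even": lambda ns: ns["is_even"](4) is True,
}

passed = 0
for name, code in model_answers.items():
    namespace = {}
    exec(code, namespace)       # run the model's code...
    if tests[name](namespace):  # ...and check it against the tests
        passed += 1

print(f"score: {passed / len(tests):.0%}")  # score: 50%
```

The headline number is just "problems passed ÷ problems attempted," which is why sounding smart doesn't help here.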

Context Windows

AI's working memory

Context window = how much text the AI can 'see' at once. GPT-4 Turbo: 128K tokens. Claude: 200K. Gemini: 1M. Bigger = better memory but slower & pricier. It's like RAM for your brain. More RAM = more browser tabs before your computer starts crying.

Fun fact: Gemini's 1M token context is like reading 'War and Peace' cover to cover (with room to spare) and remembering every detail. Meanwhile, I forget why I opened the fridge.
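What happens when a chat outgrows the window? The usual trick is dropping the oldest messages first. A toy sketch (word count stands in for real token counting):

```python
def fit_in_context(messages, max_tokens, count=lambda m: len(m.split())):
    # Keep the newest messages that fit; the oldest get forgotten first.
    # (Toy token count = word count; real systems count actual tokens.)
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = ["why is the sky blue", "because physics",
        "ok but why though", "scattering, mostly"]
print(fit_in_context(chat, max_tokens=7))
# ['ok but why though', 'scattering, mostly'] — the start of the chat fell out
```

Bigger windows just push back the moment the AI forgets why it opened the fridge.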

WTF is Cost Per Million Tokens?

The AI tax

Every time you use an AI, you're buying tokens. Input tokens (what you send) + output tokens (what it sends back) = your bill. GPT-4: $10-30/M tokens. Claude: $3-15/M. DeepSeek: $0.55/M (it's basically free wtf). The math: 1M tokens ≈ 750K words ≈ a novel. So you're paying ~$3-30 per novel's worth of conversation.

DeepSeek R1 is so cheap it costs less than a coffee to have a 100-page conversation. The future is weird.
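The billing math is simple enough to check yourself. A sketch using illustrative prices in the same ballpark as the card above (per 1M tokens; real prices change constantly):

```python
def chat_cost(input_tokens, output_tokens, in_price, out_price):
    # Prices are quoted per MILLION tokens, so divide accordingly.
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A long conversation: assume ~50K tokens each way (rough guess, not a measurement).
print(f"cheap model:  ${chat_cost(50_000, 50_000, 0.55, 2.19):.2f}")   # ~$0.14
print(f"pricey model: ${chat_cost(50_000, 50_000, 10.00, 30.00):.2f}")  # ~$2.00
```

Either way it's less than you'd expect for a novel's worth of back-and-forth — which is why the coffee comparison holds up.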

Parameters

Size matters (kind of)

Parameters = the 'knobs' the AI adjusts during training. More parameters ≠ always better (see: DeepSeek R1 with 'only' 671B params beating models with trillions). It's like saying a 10GB app is better than a 1GB app. Sometimes the smaller one just works smarter. Quality > quantity.

GPT-4 allegedly has 1.76T parameters. That's more 'settings' than there are people on Earth. And yet it still can't count how many 'r's are in 'strawberry'.
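Where do those giant numbers come from? Every weight and bias between layers is one parameter, so you can count them from the layer sizes alone. A sketch:

```python
def count_params(layer_sizes):
    # Each layer contributes (inputs x outputs) weights plus one bias per output.
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

print(count_params([3, 4, 2]))         # a tiny 3->4->2 network: 26 knobs
print(count_params([768, 3072, 768]))  # one GPT-style feed-forward block: ~4.7M
```

Stack hundreds of those blocks (plus attention, which has its own knobs) and the trillions stop sounding made up.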
🎓

Still Confused? That's Normal.

AI is complicated. These visualizations are simplified. But now you know enough to sound smart at parties. Check out the leaderboards to see which models are actually good at this stuff.

Check the Leaderboards →