# The Rise of Artificial Intelligence: What You Actually Need to Know
Artificial Intelligence isn't coming — it's already here, quietly embedded in everything from your email filters to the chip factories designing the next generation of hardware. But what actually is AI, and why should you care?
## What is AI, Really?
At its core, AI is just pattern recognition at scale. A model is trained on enormous amounts of data, and it learns to predict what comes next — whether that's the next word in a sentence, the next pixel in an image, or the next move in a chess game.
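A toy sketch of that idea: a bigram model that counts which word follows which in a tiny corpus, then predicts the most common successor. Real models replace these raw counts with a neural network trained on billions of examples, but the objective is the same: predict what comes next.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed successor of `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" most often here)
```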
There are a few major branches worth knowing:
| Type | What it does | Example |
|---|---|---|
| Narrow AI | One specific task | Chess engines, spam filters |
| Generative AI | Creates new content | ChatGPT, Midjourney |
| Reinforcement Learning | Learns via trial and error | AlphaGo, robotics |
| Computer Vision | Understands images | Face unlock, medical scans |
## How Large Language Models Work
Modern LLMs like GPT-4 or Claude are built on the Transformer architecture[^1]. The core idea is attention — the model learns which parts of the input to focus on when producing each part of the output.
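A minimal sketch of that mechanism — scaled dot-product attention from the Transformer paper — using NumPy, with random vectors standing in for the learned queries, keys, and values:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each output position is a weighted average of the value vectors,
    # with weights given by query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between positions
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 positions, 8-dimensional vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

out = attention(Q, K, V)
print(out.shape)  # → (4, 8): one output vector per input position
```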
Training happens in two phases:
- Pre-training — the model reads billions of tokens of text from the internet and learns to predict the next token
- Fine-tuning (RLHF) — human raters score outputs, and the model is nudged toward responses humans prefer
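The pre-training objective above boils down to cross-entropy: maximise the probability the model assigns to each actual next token. A tiny worked example, with made-up probabilities standing in for a model's predictions over a 5-token vocabulary:

```python
import numpy as np

# Hypothetical predicted distributions at 3 positions (5-token vocabulary).
probs = np.array([
    [0.7, 0.1, 0.1, 0.05, 0.05],
    [0.2, 0.6, 0.1, 0.05, 0.05],
    [0.1, 0.1, 0.7, 0.05, 0.05],
])
targets = np.array([0, 1, 2])  # the correct next token at each position

# Average negative log-probability assigned to the true next tokens.
loss = -np.mean(np.log(probs[np.arange(len(targets)), targets]))
print(round(loss, 3))  # → 0.408
```

Training nudges the weights so this number shrinks, i.e. so the true next token gets more probability mass.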
Why "tokens" and not "words"?
Tokens are chunks of text — sometimes a full word, sometimes just a few characters. The word "unbelievable" might be split into `un`, `believ`, `able`. This lets the model handle rare words it's never seen before.
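A toy greedy longest-match tokenizer illustrates the split. The vocabulary here is invented for the example; real tokenizers (e.g. byte-pair encoding) learn theirs from data:

```python
# Invented vocabulary for illustration; single characters act as a fallback.
vocab = {"un", "believ", "able", "a", "b", "e", "i", "l", "n", "u", "v"}

def tokenize(word):
    tokens = []
    i = 0
    while i < len(word):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: emit it as-is
            i += 1
    return tokens

print(tokenize("unbelievable"))  # → ['un', 'believ', 'able']
```

Because any string can fall back to characters, the tokenizer never hits an out-of-vocabulary error — which is exactly why models tokenize rather than work with whole words.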
## Writing Your First AI-Powered Script
Here's a minimal Python example using the Anthropic API:
```python
import anthropic

client = anthropic.Anthropic(api_key="your-api-key")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain transformers in 3 sentences."}
    ],
)

print(message.content[0].text)
```
Running this gives you a live response from the model. No PhD required.
## The Elephant in the Room: Should We Be Worried?
Honestly? It depends on who you ask.
### The Alignment Problem
As AI systems become more capable, ensuring they pursue goals that are actually good for humans becomes harder. This isn't sci-fi — it's an active research area at places like Anthropic, DeepMind, and OpenAI.
### What you can do today
Learn to use these tools effectively. Prompt engineering, understanding model limitations, and knowing when not to use AI are genuinely valuable skills right now.
## What's Actually Impressive (and What's Hype)
**Genuinely impressive:**

- Code generation and debugging
- Summarising long documents instantly
- Multi-language translation at near-human quality
- Protein structure prediction (AlphaFold changed biology forever)

**Still overhyped:**

- "AI will replace all jobs in 5 years" — some jobs, some tasks, not all
- Autonomous agents doing complex real-world tasks reliably
- AI "understanding" anything — it's sophisticated pattern matching, not cognition
## Closing Thoughts
AI is the most interesting technology of our generation — not because it's magic, but because it feels like magic while being entirely mechanical. The more you understand how it works, the better equipped you are to use it, critique it, and build with it.
We're still early. The best time to pay attention is now.
[^1]: Vaswani et al., "Attention Is All You Need" (2017) — the paper that started the modern AI boom.
Abhinav Asthana
Software Architect and writer. Exploring the intersection of minimal aesthetics and functional systems.