You've done a 20-minute language lesson. You feel good — you got the flashcards right, the quiz went well. But what happens over the next 48 hours is almost invisible until it's too late.

Without reinforcement at precisely the right moments, the words drain out of memory faster than they went in. This isn't a willpower problem. It's biology.

Hermann Ebbinghaus and the Forgetting Curve

In the 1880s, German psychologist Hermann Ebbinghaus spent years memorising lists of nonsense syllables and then testing his own recall at intervals. What he found became one of the most replicated findings in cognitive psychology: memory decay follows a predictable exponential curve.

[Figure: memory retention over time without review. Approximate retention % from Ebbinghaus (1885), plotted from now through 1 hour, 8 hours, 1 day, 2 days, 6 days, and 31 days.]

Within an hour, you've already forgotten roughly half of what you just learned: Ebbinghaus retained only about 44% after one hour. After a day, around two thirds is gone. A month later, with no review, retention sits near 20%.
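Ebbinghaus actually fitted his savings data to a simple formula, b = 100k / ((log10 t)^c + k), with k ≈ 1.84, c ≈ 1.25, and t in minutes. A small sketch of that fit (the constants are Ebbinghaus's own; treat the outputs as rough approximations of his aggregate data, not predictions for any individual learner):

```python
import math

def ebbinghaus_retention(minutes: float, k: float = 1.84, c: float = 1.25) -> float:
    """Approximate retention (%) after `minutes`, using Ebbinghaus's
    own fit to his savings data: b = 100k / ((log10 t)^c + k)."""
    return 100 * k / (math.log10(minutes) ** c + k)

# Roughly: ~47% after an hour, ~30% after a day, ~21% after a month.
for label, mins in [("1 hour", 60), ("1 day", 1440), ("31 days", 31 * 24 * 60)]:
    print(f"{label}: {ebbinghaus_retention(mins):.0f}%")
```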

Why Most Apps Don't Solve This

The majority of language learning apps are optimised for engagement, not retention. That means daily streaks, gamification, and short lessons that feel productive but rarely push you to retrieve information under pressure.

There's a critical difference between recognition (seeing a word and thinking "oh, I know that") and recall (actually producing or understanding the word in context). Most apps train recognition almost exclusively.

Recognition is easy. Recall is the actual skill you need for conversation. The two are shockingly different in how they're stored and retrieved from memory.

Beyond that, most apps introduce new material at a constant pace regardless of whether you've actually retained previous vocabulary. The result is a perpetual beginner experience — you always feel like you're learning, but your real-world vocabulary barely grows.

The SM-2 Algorithm: Spacing When It Matters

Spaced repetition is the antidote. Instead of reviewing everything every day or reviewing on a fixed schedule, spaced repetition systems track your performance on each individual item and schedule the next review at the optimal moment — just before you would have forgotten it.

HablaDay uses SM-2, the algorithm Piotr Wozniak developed in 1987 for the first version of SuperMemo. Despite being over 30 years old, it remains one of the most widely used scheduling algorithms for vocabulary learning (Anki's default scheduler descends from it), and the spacing effect it exploits is among the most robust findings in memory research.

The core insight of SM-2 is the ease factor: each word carries an individual multiplier that controls how quickly its review interval grows. A word you find easy gets reviewed less and less frequently. A word that trips you up gets reviewed more often, until you've solidified it.
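The published SM-2 update can be sketched as follows. This follows Wozniak's description (responses graded 0 to 5, ease factor floored at 1.3, first two intervals fixed at 1 and 6 days); it is an illustrative implementation, not HablaDay's actual code:

```python
def sm2_update(ef: float, interval: int, reps: int, quality: int):
    """One SM-2 review step. quality: 0 (blackout) to 5 (perfect).

    Returns the updated (ease_factor, interval_days, repetitions)."""
    if quality < 3:
        # Failed recall: restart the repetition sequence at a 1-day
        # interval. Per Wozniak, the ease factor is left unchanged.
        return ef, 1, 0
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ef)
    # Ease-factor update from the published SM-2 formula.
    ef = ef + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    ef = max(ef, 1.3)  # EF never drops below 1.3
    return ef, interval, reps + 1
```

Note how the ease factor compounds: every successful review multiplies the interval, so easy words quickly reach intervals of weeks, while a single failure drops a word back to daily review.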

How HablaDay Implements This

After every card, you rate your response on four levels: No idea / Almost / Got it / Easy. That rating feeds directly into the SM-2 algorithm to update both the ease factor and the next interval for that specific word.
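SM-2 grades responses on a 0 to 5 quality scale, so the four buttons have to be mapped onto it somehow. One plausible mapping (the exact values HablaDay uses are an assumption here, not documented):

```python
# Hypothetical mapping from the four ratings to SM-2 quality scores.
# Anything below 3 counts as a failed recall in SM-2, so "No idea"
# and "Almost" both reset the word's interval.
RATING_TO_QUALITY = {
    "No idea": 0,  # complete blackout
    "Almost": 2,   # failed, but the answer felt within reach
    "Got it": 4,   # correct after some hesitation
    "Easy": 5,     # instant, effortless recall
}
```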

Critically, cards that score "No idea" are immediately re-queued within the same session. Re-exposure while the failure is still fresh is far cheaper than waiting: Ebbinghaus's savings measurements showed that relearning shortly after forgetting takes a fraction of the original effort.
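The same-session re-queue can be sketched with a simple queue (an illustrative sketch, not HablaDay's actual implementation; `grade` stands in for the user's button press):

```python
from collections import deque

def run_session(cards, grade):
    """Run one review session. `grade(card)` returns one of the four
    rating strings; cards graded "No idea" are pushed back onto the
    queue so they come around again before the session ends."""
    queue = deque(cards)
    log = []
    while queue:
        card = queue.popleft()
        rating = grade(card)
        log.append((card, rating))
        if rating == "No idea":
            # Immediate re-exposure within the same session.
            queue.append(card)
    return log
```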

After consistent daily use for 7 days, the spaced repetition schedule naturally adapts so that your weakest words appear daily and your strongest words might not appear for weeks — because you don't need to see them as often.

The goal is efficient learning: spending the most time on the words that need the most work, and trusting the algorithm to bring back the strong words at just the right moment to keep them in long-term memory.

The Streak Is Not the Point

One more thing worth naming directly: a 50-day streak that produces no vocabulary retention is worthless. A 7-day streak with genuine recall practice — working through the forgetting curve on 300 carefully selected words — is worth years of passive exposure.

HablaDay has a streak system, and yes, maintaining it is satisfying. But the streak is downstream of the thing that actually matters: did you do enough spaced repetition today to shift vocabulary from short-term to long-term memory? The algorithm knows. That's what it's there for.