Flashcards! Science Needs Fiction

Gabrielle Birchak / March 13, 2026 / FLASHCARDS

In Tuesday's episode, I talked about Mari Wolf, a science fiction writer in the 1950s. She was also a mathematician who worked at NASA's JPL. Mari Wolf was brilliant and creative, and her story reminds us that science needs science fiction. Science fiction gives us a safe place to pressure-test big ideas before the real world has to absorb the risk, and my absolute favorite Star Trek shows make that lesson unforgettable.

By NBC Television (eBay item: photo front, photo back, press release), Public Domain, https://commons.wikimedia.org/w/index.php?curid=17205358

Today I want to borrow four moments from Star Trek to answer a question that sounds playful but is seriously useful.

What can science learn from science fiction?

Science fiction is not valuable because it predicts the future. It is valuable because it trains the mind. It gives us a way to practice thinking about systems before we build them, and to practice caring about consequences before consequences have real names.

Today's episode comes in three flashcards.

Science fiction helps science by stress-testing ideas, previewing consequences, and building shared language.

I am going to explain each one using some of the most absurd, memorable Star Trek scenarios I can think of, because sometimes the strangest episodes are the best teachers.

Flashcard 1: Science fiction stress-tests ideas

Engineers and scientists run stress tests because reality is never polite.

A system does not fail only in the ways you expect. It fails in the ways you forgot to imagine, and sometimes it fails in the ways you could not imagine until you saw them happen.

That is where science fiction shines. It creates extreme scenarios that force you to ask, "What breaks first?" and "What did everyone assume would never happen?"

Which brings us to one of Star Trek: The Original Series' most infamous episodes: "Spock's Brain."

If you have never seen it, the premise is spectacularly ridiculous. Spock's brain is literally stolen. Not metaphorically stolen. Not psychologically stolen. Physically removed from his body and carried off, as if a brain were a device someone could borrow, like a toolkit.

The Enterprise crew is forced to confront an impossible failure mode: they have a colleague whose body is alive but whose mind has been taken away, and they have to function in a situation that makes no sense.

And that is precisely why it works as a teaching tool.

Stress testing is not about what is likely. It is about what is catastrophic.

In the real world, nobody expects the "brain" of the system to be removed in a clean, dramatic theft. However, critical components do disappear in other ways. A key dataset becomes corrupted. A vital team member leaves. A central server goes down. A supply chain snaps. A dependency fails. A hidden assumption turns out to be wrong.

When that happens, the question becomes immediate: can the system still operate safely?

"Spock's Brain" is an exaggerated version of a real engineering habit: you look at the part of the system you rely on most, and you imagine it gone.

Then you ask what happens next.

Do you have redundancy?

Do you have a fallback?

Do you have a plan that does not depend on everything working perfectly?

The episode is absurd, but it exposes something sober. Many systems appear stable only because no one has tried to stress them at their weakest point.

Star Trek takes a problem that would normally be a checklist and turns it into a crisis you can hear in your pulse.
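In software, that "imagine it gone" habit often takes the shape of a failover path. Here is a minimal Python sketch; the names (`primary_lookup`, `cached_lookup`) and the dictionary-backed stores are hypothetical, invented purely for illustration:

```python
def primary_lookup(key, store):
    """The component everyone relies on -- here, a simple dict store."""
    if store is None:
        raise RuntimeError("primary store unavailable")
    return store[key]

def cached_lookup(key, cache, default=None):
    """Fallback: a stale-but-safe cache consulted when the primary fails."""
    return cache.get(key, default)

def lookup(key, store, cache):
    """Try the primary path; degrade gracefully instead of failing outright."""
    try:
        return primary_lookup(key, store)
    except (RuntimeError, KeyError):
        return cached_lookup(key, cache, default="UNKNOWN")

# Stress test: remove the "brain" of the system and see what survives.
cache = {"warp_core": "stable"}
print(lookup("warp_core", store=None, cache=cache))  # primary gone: cache answers
```

The point of the sketch is not the three tiny functions; it is that the fallback path exists and is exercised before the primary store actually disappears.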

Flashcard 2: Science fiction previews consequences

Science is not only about discovery. Science is also deployment.

A new idea is thrilling in a lab. It becomes complicated when it enters a world filled with incentives, shortcuts, power, fear, and ordinary human habits.

Science fiction is good at previewing consequences because it refuses to stop at "Can we?" and insists on asking, "What happens after?"

For that, I want to use a beloved comedic episode from the original series: "The Trouble with Tribbles."

In that story, tribbles are cute, furry, harmless-looking creatures. They coo. They purr. They seem like the kind of thing a tired crew member might want to hold after a long shift.

Then they reproduce.

And reproduce.

And reproduce.

Soon, the ship is full of them. Storage compartments are overflowing. Food supplies are threatened. The tribbles become an ecological and logistical disaster, and the episode plays it for laughs, because watching a dignified officer buried under a wave of fluff is genuinely funny.

But underneath the comedy is a sharp lesson about systems.

Many problems do not arrive as villains. They arrive as "harmless" variables.

They arrive as small additions nobody thinks through.

They arrive as conveniences.

They arrive as a cute solution to boredom.

Think of the employee who accidentally downloads "legit-looking software" from the after-hours spicy websites, the internet's red-light district, the forbidden pop-up kingdom, software that turns out to be malware and takes down an entire network (I'm looking at you, Disney). You know what I'm talking about.

Sometimes boredom can ruin an entire business.

In science, this happens all the time. A new species is introduced, and it becomes invasive. A chemical seems safe until it accumulates. A technology seems beneficial until it becomes ubiquitous. A design choice seems minor until it shapes behavior at scale.

"The Trouble with Tribbles" is a story about exponential growth and unplanned cascades. It is about how quickly a system can tip when something that looks small starts multiplying.

And because it is funny, it slips the lesson in without making you tense up. You laugh, then realize you have witnessed a perfect demonstration of unintended consequences.
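The episode even quantifies the cascade. Spock estimates 1,771,561 tribbles, assuming one tribble with an average litter of ten, producing a new generation every twelve hours over three days. Each generation, every tribble survives and adds ten offspring, so the population multiplies by eleven, six times over. The arithmetic is easy to check:

```python
# Spock's tribble estimate: one tribble, litters of ten, a new
# generation every twelve hours, over three days. Each generation,
# every tribble survives and adds ten offspring, so the population
# multiplies by 11.

generations = 6        # 3 days at one generation per 12 hours
population = 1
for _ in range(generations):
    population *= 11   # 1 parent + 10 offspring

print(f"{population:,}")   # prints 1,771,561
```

Six doublings-and-then-some turn one coo into nearly two million, which is the whole point of the episode: exponential growth does not feel dangerous until it is already everywhere.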

Now I want to add a quick second beat, because Star Trek: The Next Generation gives us a darker version of the same idea in "Genesis."

In "Genesis," something goes wrong medically on the Enterprise, and members of the crew begin to devolve into various forms. The ship becomes dangerous in a way that is both bizarre and frightening, and the episode leans into the horror of it. The point is not that evolution works like that. The point is that a system designed for safety can quickly become unsafe when hidden biological assumptions collapse.

That contrast matters.

The tribbles are comedy.

"Genesis" is dread.

Both are consequence previews. They show how systems can spiral when something small becomes something unstoppable.

In Star Trek, the consequences arrive fast, but in real life, they arrive slowly, which makes them easier to ignore.

Flashcard 3: Science fiction builds shared language

This flashcard is quieter, but it might be the most important of the three.

Science depends on shared language. People need a way to describe what they are trying to build. They need metaphors, models, and names that help collaborators coordinate. They need concepts that can move from one mind to another without collapsing on the way.

Without that, even brilliant ideas stall.

Which brings us to one of the finest episodes of The Next Generation: "Darmok."

In "Darmok," the Enterprise encounters a species whose language does not function in the usual way. The universal translator can translate the words, but it cannot decipher the meaning, because the meaning is carried through shared references and metaphors.

The Tamarian captain speaks in stories.

He says things like "Darmok and Jalad at Tanagra," and without cultural context, it sounds like nonsense. Yet for his people, the phrase carries a whole situation, an emotional memory, a shared map of meaning.

And as Captain Picard struggles to communicate, the episode reveals something important: vocabulary is not just words. Vocabulary is a shared history of images.

Science works the same way.

When scientists share a metaphor, they share a shortcut for understanding. When they share a model, they share a way to reason. When they share a term, they agree on what matters.

That is why fields accelerate when they find the right language. The language lets teams align. It allows research questions to become visible. It enables people to point at the same concept and know they are pointing at the same thing.

"Darmok" makes that literal. It turns communication into survival. It shows that even the best tools fail when there is no shared meaning behind the words.

And it also offers something emotionally hopeful.

The episode insists that shared language can be built.

It is not magic. It is not automatic. It is created through effort, patience, and the willingness to meet someone else where they are. Picard and Dathon find a way to communicate by building a shared story in real time.

That is a beautiful metaphor for science at its best. Science is a communal project. It advances when people share not only data, but meaning.

A shared language can be the difference between a brilliant idea that dies in someone's notebook and one that becomes a project.

Quick recap: three flashcards you can keep

Here is what science can learn from science fiction.

First: science fiction stress-tests ideas. "Spock's Brain" forces a wild failure mode into the open and asks what a system does when its critical component is suddenly gone.

Second: science fiction previews consequences. "The Trouble with Tribbles" turns exponential growth into comedy, and "Genesis" turns unforeseen biological collapse into dread. Both show how quickly systems can spiral out of control.

Third: science fiction builds shared language. "Darmok" shows that words without shared meaning are noise, and that shared stories can become the bridge that makes cooperation possible.

A practical ending you can use today

The next time you watch a science fiction episode or read a science fiction story, I want you to ask three questions that scientists ask, even if they do not always say them out loud.

First: What is this story stress-testing? What does it remove, break, or exaggerate to reveal the system underneath?

Second: What consequences does it preview? Who benefits, who carries the risk, and what grows out of control when nobody pays attention early?

Third: What language does it give us? What metaphor, term, or shared image helps people talk about a new possibility as if it were thinkable?

Those questions are not only for writers and viewers. They are for anyone who lives in a world shaped by technology.

Science fiction does not need to be correct to be useful. It needs to be clear about what it is testing.

And when it is clear, it becomes more than a story.

It becomes practice.

Thank you for listening to Math Science History. And until next time, live long and prosper.
