The Thanos Avatar & The Sorcerer Supreme

 

SPOILERS for Avengers: Infinity War & Endgame

 

Avengers: Infinity War left audiences with the greatest cliffhanger in cinematic history, more shocking and far more interesting to speculate about than any of the twists in The Empire Strikes Back. The cliffhanger: How could our heroes possibly undo the mass murder of half the universe? Now the splendid Avengers: Endgame has been released and we’ve been given our answers, but while some of those answers can be taken at face value, others may be much more elaborate, and not at all obvious after a single viewing.

STRANGE
I went forward in time... to view alternate futures. To see all the possible outcomes of the coming conflict.

QUILL
How many did you see?

STRANGE
Fourteen million six hundred and five.

STARK
How many did we win?

STRANGE
One.

Only one way to win out of fourteen million, we’re to believe, but at first glance the numbers don’t seem to add up.

(1) Why didn’t Dr. Strange see any futures where he, Tony, Peter, and the Guardians managed to remove Thanos’s Infinity Gauntlet? Surely Strange could have convinced Nebula to stop Quill from interfering, or simply convinced her to never mention Gamora, and they could have removed the gauntlet while Mantis had Thanos slumbering.

(2) Were there really no futures where Thor aimed for the head?

(3) In the final confrontation on Earth, the Avengers had the combined might of Iron Man, Thor with Stormbreaker, Captain America with Mjolnir, Captain Marvel, Wanda, all of Wakanda, and all of the Masters of the Mystic Arts. Why didn’t Dr. Strange see any futures where the Avengers played hot potato well enough to keep the Iron Gauntlet out of Thanos’s hands?

And one more question from Infinity War, one that many might have described as a plot hole (comically referenced by Ryan George of Screen Rant):

(4) Thanos effortlessly turned Mantis and Drax into ribbons with the reality stone. Why didn’t he turn every one of his opponents into ribbons throughout the rest of Infinity War?

Thanos likes long walks on the beach, bubbles, gazing at moons, and sometimes throwing them.

Questions (1), (2), and (4) are answered simply by this: Thanos was faking.

At Knowhere, Thanos mastered using the reality stone to fake a scene. He was able to convince Gamora, one of the most dangerous assassins in the galaxy, that she’d successfully skewered him with a dagger—but lo and behold, the Thanos she’d stabbed was an illusion. I’m guessing he didn’t stop there.

The battle at Titan wasn’t a regression for Thanos, or him going easy with the reality stone because he wanted a challenge. It was a learning opportunity. At Titan, Thanos mastered using the reality stone to fake a scene while using all three of his stones to engage in real combat. Though he felt real to punch and kick…

PETER
Magic!

PETER
More magic!

PETER
Magic with a kick!

…he was no more real than the version of Thanos on Knowhere. Just real enough to punch back. An avatar, perhaps controlled remotely by the real Thanos, who was never in any danger. He wasn’t being arrogant; he was being cautious.

Now confident in his ability with the reality stone, and now having acquired the time stone, Thanos proceeded to Earth. He dismantled Earth’s defenders with ease. After acquiring the mind stone, only one being in the universe could surprise Thanos: a wrathful Thor, with a weapon that could prove to be the Infinity Gauntlet’s match. But thanks to Thanos’s caution and preparation, it didn’t matter. When a Stormbreaker-skewered avatar snapped its fingers, the real Thanos was snapping his fingers elsewhere. When Thanos said, “You should’ve gone for the head”, he was simultaneously giving a legitimate piece of strategic advice (Thor’s 1,500 years old; he should know to go for the head by now) and giving false hope (going for the head wouldn’t have actually mattered). It’s archetypal Thanos: maliciousness hidden within a guise of reasonableness.

This explains why no possible engagement on Titan or Wakanda could have resulted in Thanos’s defeat. Dr. Strange may have deduced this after the first few hundred thousand futures he explored, and afterward focused his efforts not on stopping Thanos, but on undoing the Snap afterward. Early on he probably noticed Ant-Man’s idea to travel back in time, and he probably thought Scott was an idiot. But after a million futures explored, Strange was getting desperate, and he probably also noticed Tony’s reaction to the idea: one that dismissed it not as impossible so much as impractical. Maybe Tony just needed some time to recover from his defeat and then additional time to develop the new technology. Strange could focus on futures that kept Tony alive along with a strangely confident Bruce, and hell, Rocket could help too. There was just one problem.

Or rather: many, many problems.

You see, Tony didn’t have time to develop miraculous time travel technology. The Earth as a whole didn’t have that kind of time. The Earth was missing its Sorcerer Supreme, and the Sorcerer Supreme’s most important weapon: the time stone.

STRANGE
Dormammu, I've come to bargain!

DORMAMMU
You've come to die. Your world is now my world. Like all worlds.

STRANGE
Dormammu, I've come to bargain!

Dormammu, Galactus, Dr. Doom, you name it. The Earth can attract a lot of hostiles over the course of five years, and they would always distract Tony from doing what he needed to do. So that’s what Dr. Strange focused on for his next thirteen million explorations: finding a snap where Thanos would just so happen to remove all the biggest, nastiest, planet-devouring threats from Earth’s corner of the galaxy.

At fourteen million, six hundred and five, he found it.

So now we can return to question (3). Did Tony really have to die? But that’s not really the right question. The right question is: In the grand universe of possible futures, what does the final confrontation in Endgame represent?

I can tell you it doesn’t represent the only fight that beats Thanos. Nine times out of ten, a heavy hitter like Wanda or Thor is going to keep Thanos busy while an agile flyer like Iron Man or Captain Marvel delivers the Iron Gauntlet to anyone capable of thinking with Portals, and boom, any further Snaps are out of the equation. The rest of the fight becomes a battle of attrition, trying to minimize casualties until eventually, inevitably, the Avengers beat down both Thanos and his armies.

No, this confrontation represents the most likely of confrontations with Thanos. Iron Man’s technology will always make him the best suited to keeping the Iron Gauntlet away from Thanos, and Iron Man will always prove to do whatever it takes. Why risk Thanos getting the gauntlet back, when Tony could end the conflict immediately? Why risk his allies dying, when Tony can sacrifice himself and limit the casualty count to one? If Dr. Strange had searched through forty-two million futures instead of fourteen, every extra win he discovered probably would have played out in a similar fashion. That’s because after Hulk undid Thanos’s snap, the carefully curated events of Endgame (the five years, the time travel, the rat, etc.) stopped being a reflection of Dr. Strange’s heroic search. Instead, they became a reflection of Tony’s search for heroism.

You thought I’d end this post with a picture of Iron Man, didn’t you.

 

 

Which is the best voting scheme?

To become the most powerful human on the planet, you really only need the support of about 7% of Americans.

I have a friend who filled out his entire 2016 general election ballot except for one spot, right at the very top: the spot for President of the United States. It wasn’t that he felt both candidates were equal in measure, and it wasn’t that he didn’t want to. He just couldn’t bear to. When it came time to bubble in the circle next to “Hillary Clinton,” he could feel a sickly unease growing in his gut. Endorsing either Clinton or Trump was too loathsome a prospect for him to contend with. Anyone he might have voted for—including the likes of both Bernie Sanders and John Kasich—wasn’t on the ballot. But why was that? And why were the two most historically unliked candidates of any modern American presidential election the only two with a realistic chance of winning?

The answer lies with the American voting system, which only allows one vote for one candidate per ballot—a system known as Plurality, or First-past-the-post. When you’ve only got the one vote, you can’t waste it, and when third-placing parties are seen as a waste of a vote, voters will flock to the top two parties. This is where our 7% figure starts to come into play: To win the presidency, a candidate needs only win the primary of whichever party is stronger that year. Once they’ve won the primary, the infernal machine of partisanship will start its engines, herding voters who despise the opposition (because the other party is obviously full of ignorance, incompetence, or sheer insanity). The general election narrows the field of two candidates down to one; it’s the primaries that narrow from two hundred and fifty million Americans down to two. Thus, a politician like Donald Trump could find success with his 44.9% of the Republican primary’s votes, in a primary in which an embarrassing 14.8% of eligible voters participated, for a grand product of less than 7%.
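The back-of-the-envelope arithmetic behind that figure, using only the two percentages quoted above, checks out:

```python
# Rough check of the "7%" claim, using the figures quoted in the text.
primary_vote_share = 0.449  # Trump's share of 2016 Republican primary votes
primary_turnout = 0.148     # share of eligible voters who voted in that primary

key_support = primary_vote_share * primary_turnout
print(f"{key_support:.1%}")  # 6.6%
```

Roughly 6.6% of eligible voters, comfortably under the 7% headline number.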

Admittedly, this estimation glosses over the influence of swing voters, but the fact remains that the proportion of key voters is far too low—all thanks to Plurality voting, the worst among many options. There’s Borda count, Nanson’s method, majority judgement, two-round systems, and a myriad other schemes, each with its idiosyncratic upsides, downsides, staunch supporters, and vociferous critics. This article briefly covers the ones I think most important to know, while arguing for the one I think best. I hope I can convince you, but anything I argue unconvincingly, I assure you, will already have articles written on it for both sides. Google (or Google Scholar) is your friend!

Forget voting, trial by combat’s where it’s at.

What should a voting scheme ideally achieve? Roughly speaking, a perfect voting scheme would elect the candidate most preferred by the most people. But there are two major problems with this: One, rarely do “most people” prefer the “most preferred” candidate—and if that sounds confusing, don’t worry, I’ve got a superhero-centric example coming up to explain it. And two—the bigger problem—most people lie.

% of voters True candidate preferences
55% Captain America (10/10) > Spiderman (7/10) > Iron Man (1/10)
45% Iron Man (9/10) > Spiderman (7/10) > Captain America (2/10)

In the above example, Captain America and Iron Man are people’s top choices, but Spiderman is everyone’s second favorite, universally ranked at a respectable-but-not-superb 7 out of 10. Who should win the election, Cap’ or Spidey? I would say Spidey. Others might say Cap’. But Spidey should at the very least have a shot at winning—and in Plurality voting, he sure as shots doesn’t.

So that’s problem number one: A good voting scheme needs to consider more than just voters’ top choices if compromise choices like Spiderman (preferred by “most people”) are to ever have a chance. On the other hand, a voting scheme that only ever elects compromise choices would be just as bad; if Spiderman had been ranked at a universal 4/10 instead of 7/10, we would want Captain America (the “most preferred” candidate) to win instead.

Then why not just let voters rate all candidates on a scale from 0 to 10 and elect the candidate with the highest total number of points? That’s Range voting (also known as Score voting), and that’s where problem number two comes in: People lie.

Range voting ballot

I could give Iron Man a 9 and Spidey a 7 to reflect my true preferences. Or I could give both of them a 10 to boost their chances of winning relative to Captain America, who I (in this hypothetical) loathe. This kind of behavior is known as “tactical voting” and is on full display in Plurality voting: Many liberals, for instance, vote Democrat instead of for more progressive parties because they know only Democrats and Republicans stand a chance of winning. This creates a feedback loop: the more people who vote tactically, the stronger the top two parties become, and the stronger those parties become, the more people vote tactically. That’s how a freedom of two hundred and fifty million choices becomes constricted down to two. With regard to Range voting, this means most people will give either a 0 or a 10 to all candidates (essentially reducing the scheme to a needlessly complex version of one called Approval voting).
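To make this concrete, here’s a minimal Range-voting tally in Python, using the hypothetical score table from the Cap/Spidey/Iron Man example (the “tactical” ballots below are one illustrative way voters might max out their scores, not the only one):

```python
# Score (Range) voting: sum each candidate's scores, weighted by bloc size.
def range_tally(blocs):
    """blocs: list of (percent of voters, {candidate: score}) pairs."""
    totals = {}
    for weight, scores in blocs:
        for candidate, score in scores.items():
            totals[candidate] = totals.get(candidate, 0) + weight * score
    return totals

# Sincere ballots, straight from the table in the text.
sincere = [
    (55, {"Cap": 10, "Spidey": 7, "Iron Man": 1}),
    (45, {"Cap": 2, "Spidey": 7, "Iron Man": 9}),
]
print(range_tally(sincere))
# Cap 640, Spidey 700, Iron Man 460: the compromise candidate wins.

# If voters instead give only 0s and 10s, the ballot degenerates
# into an Approval-style vote, as the text describes.
tactical = [
    (55, {"Cap": 10, "Spidey": 10, "Iron Man": 0}),
    (45, {"Cap": 0, "Spidey": 10, "Iron Man": 10}),
]
print(range_tally(tactical))
```

With sincere ballots, Spidey’s universal 7s beat both camps’ split 10s, which is exactly the compromise outcome Plurality can never produce.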

If people can’t be trusted to rate candidates sincerely, what if they were forced to rank them? Many voting schemes are built off of ordered rankings, and I’ll cover two of the most commonly discussed ones: Condorcet, and Instant-Runoff Voting (IRV).

% of voters True candidate preferences
46% Black Widow > Hawkeye > Nick Fury
9% Hawkeye > Black Widow > Nick Fury
45% Nick Fury > Hawkeye > Black Widow

Condorcet takes the idea of Round-Robin tournaments and applies it to voting, positing that the best candidate is the one that would win against the most other candidates in imaginary head-to-head contests. In the example above, 55% of voters prefer Black Widow to Nick Fury, so she gets one “win” there. The same 55% of voters prefer Hawkeye to Nick Fury, so he gets one win there, and 54% of voters prefer Hawkeye to Black Widow, so he gets a second win:

Hawkeye > Nick Fury
Hawkeye > Black Widow
Black Widow > Nick Fury

And Hawkeye wins the election. That’s good news for our humble archer, but there’s a major drawback: Moderates tend to win Condorcet competitions. Taking us back to the real world for a moment, who would a Hillary Clinton hater rank higher between her and some random Joe Schmoe moderate? And who would a Donald Trump hater rank higher between him and the Joe Schmoe? Joe Schmoe will win by virtue of being the least hated, even if he is bereft of virtues himself.

Also, Spidey would stand no chance of winning against Cap’ in a Condorcet method either. And that’s not cool.

Alright, alright, I admit it. I’m biased. Spiderman’s totally my favorite.

Instant-Runoff Voting proceeds by eliminating the least favorite candidate, then doing a recount with that candidate struck from all ballots, then eliminating the next least favorite candidate, recounting, and so on until only one candidate remains. This scheme is used in various provinces and municipalities around the world, and most notably elects Australia’s House of Representatives. And though it’s better than Plurality, it still puts too much emphasis on first place votes, and thus still encourages the same kind of tactical voting that leads to two-candidate dominance.

% of voters True candidate preferences
34% Hulk > Star-Lord > Thor
17% Star-Lord > Hulk > Thor
49% Thor > Star-Lord > Hulk

In this example, Star-Lord is eliminated first. The 17% of ballots that ranked him first now count for Hulk, giving Hulk a 51% majority and a win over Thor—despite the fact that 66% of voters preferred Star-Lord to Hulk. The 49% who ranked Thor first are now incentivized to tactically rank Star-Lord above him.
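The elimination rounds above can be sketched in a few lines of Python (again using the hypothetical blocs from the table):

```python
from collections import Counter

# Each bloc: (percent of voters, ranking from most to least preferred).
blocs = [
    (34, ["Hulk", "Star-Lord", "Thor"]),
    (17, ["Star-Lord", "Hulk", "Thor"]),
    (49, ["Thor", "Star-Lord", "Hulk"]),
]

def irv_winner(blocs):
    """Eliminate the weakest candidate and recount until someone has a majority."""
    remaining = set(blocs[0][1])
    while True:
        # Count first-choice support among the candidates still in the race.
        tally = Counter()
        for weight, ranking in blocs:
            top = next(c for c in ranking if c in remaining)
            tally[top] += weight
        leader, votes = tally.most_common(1)[0]
        if votes > 50:
            return leader
        remaining.remove(min(tally, key=tally.get))

print(irv_winner(blocs))  # Hulk
```

Star-Lord goes out in round one with 17%, his ballots flow to Hulk, and Hulk wins 51–49, even though Star-Lord beats Hulk 66–34 in a direct head-to-head.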

Also, Spidey still doesn’t stand a shot of winning.

As it turns out, any preferential ranking system will lead to tactical voting, as proven by the difficult-to-pronounce Gibbard–Satterthwaite theorem. But if ranking systems are all flawed, what’s the alternative?

Approval voting.

Approval voting ballot

Of all the schemes I’ve encountered since beginning my research on this topic, Approval voting—now my favorite—had always been by far the easiest for me to dismiss. It’s a simple scheme: You vote for as many candidates as you want but neither rank nor rate them, and the candidate listed on the most ballots is the one who wins. Simple, and yet unintuitive: To better reflect the preferences of a population, a perfect voting scheme should collect more information about individuals’ preferences, not less. Voting for Thor and Star-Lord doesn’t feel like enough; I want to vote Thor higher than Star-Lord. I want my ballot to reflect my desires: for Thor to have the best chance of winning, and Star-Lord the second best.

Unfortunately, I was born in a universe where I can’t always get what I want. Approval voting makes the effect of my ballot clear: Either I’m supporting Star-Lord or I’m not. Ranking systems lead to tactical voting, which leads to aggregation of power, which imbalances the scales. With Approval voting, every candidate stands a chance.

Even Spidey.

Whether Spiderman would defeat Captain America, in that original example, depends on how many voters choose to support only their top candidate. Maybe Cap’ wins one election, which will lead to disenchanted Iron Man voters approving Spidey next time and giving him the next win. Then if Spidey governs poorly, some proportion of both Cap’ and Iron Man voters will cease to approve of Spidey, and the pendulum will swing back, and what we see is something missing, in a way, from all the other schemes: an equilibrium.
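That dependence is easy to play with directly. In this sketch, the bloc sizes come from the original table, but the fractions of each camp who also approve of Spidey are hypothetical knobs:

```python
# Approval voting: Spidey's fate hinges on how many voters in each camp
# also approve their second choice. The *_also_spidey fractions are
# hypothetical, not from the text.
def approval_tally(cap_bloc=55, iron_bloc=45,
                   cap_also_spidey=0.5, iron_also_spidey=0.5):
    approvals = {
        "Cap": cap_bloc,
        "Iron Man": iron_bloc,
        "Spidey": cap_bloc * cap_also_spidey + iron_bloc * iron_also_spidey,
    }
    return max(approvals, key=approvals.get)

print(approval_tally(cap_also_spidey=0.3, iron_also_spidey=0.3))  # Cap
print(approval_tally(cap_also_spidey=0.6, iron_also_spidey=0.8))  # Spidey
```

When most voters bullet-vote for their favorite, Cap’s 55% plurality holds; when enough of either camp hedges with an approval for Spidey, the compromise candidate overtakes him. That swing back and forth is the equilibrium described above.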

And my friend? With the two-party stranglehold broken, he could have voted for Bernie Sanders, or John Kasich, or whoever else—as could the rest of the nation. He’d have had a voice despite not being one of the approximately 15% of voters who catapulted Clinton and Trump out of the primaries.

Maybe you can come up with an even better system. Maybe play around with existing schemes, tweak the rules, experiment with the parameters. Don’t take my word as gospel; find your own favorite. No matter what you choose, it’ll be better than what we have now in America.

Writing code is like…

The two are related, both by theory and—depending on the job—sometimes in practice, but no matter how much they cross paths, and no matter how much they might technically both rely on zeroes and ones, coding is NOT math.

“I don’t think I’d be good at coding, I never liked math” makes about as much sense as “I don’t think I could play the guitar, I was never good at dancing”, and yet I hear that sentiment often when recommending the field. The demand for software developers is high, the supply is low, the pay’s good, and it’s not too hard to learn the basics well enough for an entry-level position. And still there’s this resistance to it, as reflexive as heartburn, that rises from this notion that coding is all about managing numbers that stream down spreadsheets like the symbols in The Matrix. It’s the machine that minces the numbers; the programmer instead strings together words, albeit words with highly contextualized meanings. Writing code isn’t quite like writing prose either, though.

It’s more like editing.

matrix-3109795_1280

This might look cool, but it has nothing to do with coding.

Whether it’s a book, a movie, or even a Youtube video, every piece of media has a purpose—to amuse, to educate, to shock or inspire, but always, invariably, to maintain the audience’s attention. Every scene, every line should serve this ultimate goal. Sometimes, though, the author screws up: Readers stumble over poorly constructed sentences, give up out of confusion, rage unexpectedly at the unfulfilled promise of abandoned plotlines, or simply lose interest out of boredom. Computers are the same. Readers stop reading; programs crash. Readers complain loudly; programs behave wildly. The diligently editing author must fix the grammar of his sentences; the programmer, her syntax. Authors reorder and reorganize concepts for clarity, trim unneeded plotlines or expand upon those accidentally left unfinished, and speed up pacing as necessary. Programmers do the same—except with plotlines that spell the fate of data instead of characters.

But there’s one big difference: The programmer is not writing a stand-alone story. She’s writing a piece of a much larger tapestry, like an episode of a series, or a movie in the Marvel universe. She can’t write just for the reader (the program); she has to write for other writers, too: Other script writers will have to pick up the plot threads one episode leaves dangling, and they’ll need to be able to understand the intent behind every plot or character choice. Nobody likes forced retcons to cover plot holes, and no programmer likes messy code that can’t be understood or reused. Whatever Hollywood might have audiences believing about the geeky hacker lifestyle, most coding done today is deeply, intrinsically collaborative.

The MCU has never had to retcon anything. Left: Terrence Howard. Right: Edward Norton.

In either case, the writer must keep multiple threads alive in his or her head, gradually growing each and interweaving them, layer by layer, lest the resulting whole collapse upon its weakest leg. Perhaps the same could be said of a mathematician writing proofs, and I’m wrong to disparage the comparison. But the comparison to editing rings truest to me, though, like any analogy, it’s imperfect at best—because programming is its own thing. You don’t need to be good at math to be good at programming, because you don’t need to be good at anything else to be good at programming. Don’t worry about whether you’ll be good at it, just try it.

Go code.

 

I’ll leave you with this:

 

Writing code is like…

…playing Scrabble in a language you never learned to write.

…or building a plane with money enough for a kite.

Writing code is like…

…writing recipes even a drunkard could follow.

…or teaching a parrot to pass for human, solo.

Writing code is like…

…building a bike for users of any shape or size.

…or playing Twenty Questions with any number of lies.

Writing code is like…

…teaching a cat the consequences of cruelty.

…or writing a poem without knowledge of beauty.

Writing code is like…

…drawing a flowchart for every occasion.

…or drawing by dictation, a dozen words at a time.

There Are No “Best” Films

With the year’s end approaching, talk of best movie of the year has begun. And with it, the familiar patterns emerge: People who aren’t film critics or obsessive cinephiles rank popular films as their favorites because it’s hard to favor films they never saw. Critics, who watch as many films in a week as the regular bloke watches in a year, understandably tire of too-common story arcs, character types, and other tropes faster than the general public does, which causes critics to favor innovative, unusual, or in-the-eyes-of-anyone-else just plain weird films. And while this might cause perennial bickering between the hoi polloi and the snobs, it’s not the pattern I think should change.

snobby cat

This cat is a snob. Don’t be a snob.

I want people to stop discussing “best movies” altogether. Specifically, I want the “best” part dropped.

How we use language influences how we think—though I don’t mean in the way George Orwell imagined. In his acclaimed novel 1984, a dialect of English called Newspeak was designed by the novel’s totalitarian regime to constrict vocabulary and thus restrict the thoughts of its citizens. The idea was that people would be unable or less likely to consider concepts they had no words for. It’s an intriguing notion, and one I’ve always thought was ridiculous, even as a kid. We come up with words for new concepts—and new words for old concepts—every day. In the real world, illegal drugs have a long and ignominious history of synonyms generated precisely because of their illegality (imagine a totalitarian regime trying to censor words like “snow” and names like “Mary Jane” just to censor cocaine and marijuana, respectively). I find it more believable that it’s other people’s daily use of language that affects how we think rather than anything intrinsic about language itself. The politically charged phrases “pro-life” and “pro-choice” may have shaped how we think about that particular debate, but if they have, it’s only because we as English speakers agreed to adopt them as pithy slogans in lieu of actual, specific arguments.

Obviously, politics and totalitarian regimes are matters of far more importance than ranking favorite movies—but I figure my friends and I will be debating our favorite films of the year for all our remaining years, so hell, why not do it right? The phrase “best movie of the year” is nonsense. Film is an artform too complex, too multi-faceted, and rewarding in too many different ways for any one film to be the overall “best” in any given year.

Take the year 2016. The film that made me laugh most was Shane Black’s slapstick neo-noir The Nice Guys. It was a perfect summer film (barring the fact it was released in May), but it wasn’t the most fun film I saw that year; that title belongs to Park Chan-wook’s delightfully clever crime-romance The Handmaiden. On the other end of the spectrum, Manchester by the Sea was the least fun film I saw that year, but also the most harrowing, a story of grief I appreciate having seen but will never see again. Depressing as it was, however, Manchester didn’t make me cry; my tearjerkers of 2016 were Fences (thanks to August Wilson’s stellar screenplay and Denzel Washington’s equally stellar performance), Hacksaw Ridge (easily the worst movie in this list, but elevated by its subject matter, the real life hero Desmond Doss), and Hidden Figures (which is the only film I’ve ever seen in my life that made me cry tears of joy).

To call a film “best” is to say it measures up to more than any other, but doing so assumes we have a universal metric with which to measure. Fans can measure films by how often they revisit their favorites year after year—by which metric Pirates of the Caribbean would be my favorite film of all time, and you know what, I’m perfectly fine with that, so sue me. Others might measure films by how deeply they explore the human condition, and while that sounds snobby as all hell, I think every story (even The Hangover) has at least something (however trite) to say about humans and how we behave in the most stressful, adventurous, or ridiculous of scenarios. And those who actually work on films might measure them by innovations brought to the craft—as they did last year, when the Academy gave its “Best Picture” award to an indie film that combined a novel three-act structure with the most intimate cinematography I’ve ever seen.

Alex R. Hibbert and Mahershala Ali were only two of the many outstanding cast members of 2016’s Moonlight.

So what’s the alternative? Break it down. Many award shows already distinguish between comedy and drama; why not break it down further: Most delightful. Most emotional. Most interesting. Most innovative—whatever. There are a lot of categories that could replace “best”, but which of them are best, I don’t know.

Which attributes best describe player characters?

Let’s say you had to rank every human on the planet on their proficiencies at everything. Maybe you work as a government official in a dystopian society trying to encourage competition between its denizens, or maybe you’re an alien observer gathering data on human dynamics. Such a task would be impossible, because the list of skills that could be enumerated is limitless (literally so: though my sister might be more proficient at weaving, I might be better at underwater basket weaving, and she at drunken underwater basket weaving, and so on, ad infinitum). However, your dystopian dictator or alien boss allows you to choose a handful of qualities to rank, each quality representing the average proficiency at a whole category of skills. Which qualities would you choose to make your job of ranking humans easier?

Perhaps you choose wisdom, kindness, courage, and cunning, because you’re the headmaster of a school of wizardry and you want to segregate and stereotype students by personality… for some reason.

For instance, I could choose critical thinking, emotional stability, self-discipline, bravery, people skills, and athleticism. A politician with deep-set insecurities who’s also adept at inspiring followers might score highly in people skills but low in emotional stability. A clever student who never achieves his goals in life might score high in critical thinking but low in self-discipline, and a visionary who goes against the grain might score higher in bravery than in any other category.

In videogames, these categories are used to track the growth of players’ characters, though they focus less on interpersonal distinctions and more on the raw necessities of violence. In the hack-and-slash classic Diablo, for example, the attributes that represented a character’s proficiencies were Strength, Dexterity, Vitality, and Magic, which corresponded to the power of your attacks, your accuracy, your health, and your spellcasting. The rampantly popular World of Warcraft kept Strength, swapped Dexterity for Agility and Vitality for Stamina, and split Magic into Intelligence and Spirit. The examples are myriad, in JRPGs and Western RPGs and games of other genres (like Dota 2), and every one of them can trace the roots of this system to the early 1970s, before even the advent of personal computing.

Dungeons & Dragons, which has inspired so many videogames in so many ways, was itself inspired by a miniature wargame campaign called Blackmoor. In this campaign, Dave Arneson—one of the two fathers of D&D—chose various aspects to represent individuals’ personalities, notably including Brains, Looks, Credibility, Sex, Courage, and Cunning. Gary Gygax—Arneson’s friend at the time and the other father of D&D—wisely folded the trio of Looks, Credibility, and Sex into the single kid-friendlier attribute of Charisma. The final list had six: Strength, Dexterity, Constitution, Intelligence, Wisdom, and Charisma. The first three of those covered the basics of non-magical combat familiar from Diablo as well as so many other modern games, and they adequately distinguish the hardy from the quick-footed, the barbarians from the rogues. The next three, on the other hand, created a mess that the fifth and current edition of D&D (or “5e”) still deals with.

I’ve seen the difference between Intelligence and Wisdom explained through analogy: A mathematician might be intelligent, able to solve complex problems and retain ample knowledge, and yet lack the experience and empathy needed to navigate the turbulent waters of a relationship. I’ve also seen it explained pithily, as the difference between knowing how to do something versus knowing whether to do it. Unfortunately, the two qualities bleed into one another, as a lack of intelligence can lead to poor judgement, and the wisdom of accrued experience can easily cross into the domain of knowledge. Consider, for example, the turbulent waters of religion, a kind of relationship steered as much by faith as by love, and governed in 5e not by Wisdom but by Intelligence. Or consider the copious amounts of knowledge required to survive med school and then consider 5e’s medicine skill, which is governed by Wisdom, not Intelligence.

Charisma would make sense if it weren’t for magic. While wizards and druids respectively use Intelligence and Wisdom for their spells (speaking to the difference between booksmarts and streetsmarts), paladins and sorcerers use Charisma. In the mechanics of D&D, the notion of Charisma has been overloaded to also represent the mostly unrelated notion of willpower, and then this notion of willpower was made irrelevant for half of the spellcasting classes. When defending against spells, rather than casting them, these three attributes become more confusing still. Charisma can help to deter the Calm Emotions spell, but not so for the Charm Person spell, which only Wisdom can deter. Most illusion spells fail to the Wise, except for Phantasmal Force, which fails to high Intelligence, and Seeming, which fails to Charisma. An enemy tries to banish you to another plane of existence? Roll a Charisma check. A scrying enemy tries to spy upon you from within the same plane of existence? Roll for Wisdom. Maybe these choices all have explanations, but they’re not intuitive, and they’re not the only alternative.

Here’s mine.


Skyrim’s only attributes are magicka, health, and stamina. Skills level up individually.

Postscriptum:
Ironically, the best attribute system I’ve encountered came from a videogame in a series that has since dispensed with attributes entirely. The developers realized that players could manage their skills (archery, alchemy, speechcraft, etc.) directly and that the attributes governing them (agility, intelligence, personality, etc.) were redundant. The same cannot be said for D&D, which is prone to requiring ability checks outside the scope of any one skill, since what you might attempt in D&D is far broader (infinite, really) than in any videogame.

What is Dungeons & Dragons?

Garish costumes and crudely painted miniature models, dice rolls and hastily scrawled arithmetic, massive rulebooks and the awkward, teenaged boys that argue over them: These are the trappings often described by those who can name, but cannot define, “Dungeons & Dragons.” Even those who know better—those who have played the game or listened to it through podcasts, Twitch, or television—can struggle to define what Dungeons & Dragons really is. And that’s because Dungeons & Dragons (or “D&D”) is as difficult a game to define as it is to learn, in that it can be many things, and many conflicting things, as mercurial as the people who play it.

The socially awkward characters of The Big Bang Theory playing the game infamous for being played by the socially awkward.

You might think of D&D as a board game, which isn’t a bad place to start, except you have to keep in mind that the board, the pieces, and the rules are all optional. The pieces represent you, the player, except it can be a you with any species and any personality that you choose. The board represents the world as imagined by the Game Master (or “GM”), the one among you who creates and controls every monster, every non-player character, and every aspect of the world that you and your party will explore—akin to a god, except also a slave, required to constantly adapt and tailor the story to fit their unpredictable players’ whims. And the game itself is a combat simulator laden with dozens of rules, except it is also an exercise in collaborative storytelling and improv, a game where the goals are made up and the rules don’t matter.

The game is, essentially, whatever you and your friends make of it. A group with a comfortable dynamic can have the best of times, but an unaccommodating GM or a belligerent player can easily incite the worst of times. Similarly, the combat within the game can be as easy or as challenging, as strategic or as creative, as the GM and players want it to be. D&D is difficult, to be certain, though in ways that have nothing to do with the “winning” or “losing” of combat encounters. D&D is unenviably difficult for Game Masters, who must challenge their players enough to keep them engaged while also helping them stay alive and feeling epic. And the players, who may have had no prior experience in acting or improv, have the daunting task of play-acting a newly imagined character however subtly or wildly different from their actual selves. But despite all this, if you keep in mind what you’re there for—to have fun and to share pizza, I presume—you’ll do just fine.

So what does starting a new game of D&D look like? A fresh group might begin like this: The one among you who’s played before goes out and buys a Game Master’s guide to facilitate their world-building efforts, the contents of which will not be read by the rest of you, the players. Each player creates a single character to represent them, choosing a class (such as Wizard, Rogue, or Bard), a species (such as Human, Dwarf, or more exotic creatures like Dragonborn), and a background (such as sailor, soldier, or outcast)—whichever sounds most appealing. This can take anywhere from minutes to hours, depending on how much a player enjoys reading about the many options (there are nine races and twelve classes in the Player’s Handbook, together spanning one hundred and one pages of basic outlines, illustrations, and information that will only become relevant as characters grow stronger with experience). Then the GM describes in their own words—or with words borrowed from their guide—how the characters in your party first meet in some much-fated, little-remembered tavern, and how a dwarf by the name of Gundren Rockseeker offers you gold in exchange for escorting his wagonful of merchandise to a nearby town, and how bellicose goblins then descend upon your escort in order to teach you the finer rules of combat. As you and your friends meet up every two weeks (or as often as your busy schedules allow), you gradually explore the mysteries of “Wave Echo Cave” and “Cragmaw Castle” and dungeons that have no name and quests that have no end, all the while your character grows in power and perhaps in personality, and you as a gamer perhaps grow in diameter as you share pizza that is as tasty as it is bad for your cholesterol.

I’ve spent hours poring over the different classes in 5e’s Player’s Handbook.

But how does the game actually work? If players decide what their characters do and say, what’s to stop a player from saying, “I stab the big bad villain through the heart” to save the day?

That’s what the dice are for. The dice, and ability score modifiers.

The fifth edition of Dungeons & Dragons uses a “D20” system, which simply means that most actions with a chance of failure require a roll of a twenty-sided die (or “D20”) to resolve. For example, say your party is venturing through a dungeon to rescue a friend from a band of kobolds. At one point you narrowly avoid falling into a pit trap five feet long and filled with spikes. Now faced with the decision of how to continue through the corridor, you decide to simply leap across the five-foot gap. You roll a D20, and your GM decides how high a roll you’ll need to make the leap—maybe a ten or higher will succeed, but a nine or lower will leave you to deal with a painfully porcupinous landing.
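That “roll a D20 against a target number” loop is simple enough to sketch in a few lines of code (a hypothetical illustration of the mechanic, not anything from an official rulebook; the function name is my own, and the DC of ten comes from the leap example above):

```python
import random

def ability_check(dc, modifier=0):
    """Roll a D20, add any ability modifier, and compare the total
    against the difficulty class (DC) the GM has chosen."""
    roll = random.randint(1, 20)
    return roll + modifier >= dc

# The leap across the spike pit: the GM sets the DC at ten.
made_the_leap = ability_check(dc=10)
```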

 

The friends of Community perilously allow their fates to be determined by the roll of a die. (“Remedial Chaos Theory”)

But now let’s say you’re playing a half-orc barbarian who’s devoted their life to bashing in the skulls of their enemies with their considerable brawn. Surely it should be easier for your character to make such a leap than for your other party members, one of whom is a scrawny bookworm of a wizard who’s never even collapsed a frontal lobe with a swing of a warhammer, much less with their fist. The barbarian will most likely have a higher Strength value, which will translate to a bonus to D20 rolls that involve strength, such as long-distance leaps. Maybe you roll an eight, but with your plus three from Strength, you make the leap. Immediately after, a kobold attacks you from around the corner. You decide to grapple the kobold and toss him into the spike trap, which results in a Strength contest: Both you and the kobold (or rather, the GM on behalf of the kobold) roll D20s, with bonuses or penalties applied from their respective Strength scores, and the higher result wins the grapple.
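The Strength bonus and the grapple contest can be sketched the same way (again a hypothetical illustration; the barbarian’s +3 and the kobold’s penalty are my own example numbers):

```python
import random

def contest(my_modifier, their_modifier):
    """Both sides roll a D20 and add their respective ability
    modifiers; the higher total wins the contest."""
    mine = random.randint(1, 20) + my_modifier
    theirs = random.randint(1, 20) + their_modifier
    if mine > theirs:
        return "you"
    if theirs > mine:
        return "kobold"
    return "tie"  # a tie leaves the situation unchanged

# The brawny barbarian (+3 Strength) grapples the scrawny kobold (-1).
winner = contest(my_modifier=3, their_modifier=-1)
```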

That’s the game in a nutshell. Want to attack with a weapon? Roll a D20, add your weapon’s bonus modifier to the roll, and compare it against the monster’s armor value to see if you hit. If you hit, roll another die to determine the exact amount of damage dealt. Want to cast a fireball spell? This time it’s the monsters that roll D20s to see if they can dodge out of the way of the flames, which is harder to do the better your spellcasting modifier is. And so on.

It’s a simple enough setup, and yet powerful enough to handle any of the infinite possible actions that might come up in a game where literally anything can happen. On top of that, it allows a sense of progression, letting your characters tangibly improve at various skills as their ability score modifiers increase over time. And the element of random chance can lead to hilarity, especially considering that any natural roll of twenty is treated as an automatic success (regardless of ability score modifiers) and any natural roll of one as an automatic failure, so even the patently ridiculous becomes possible with the best of luck, and the blatantly easy becomes embarrassing with the worst of luck.
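Putting the attack roll and the natural-twenty/natural-one rule together gives something like the toy resolver below (a sketch under my own simplifications; real 5e layers on proficiency bonuses, advantage, critical damage, and more):

```python
import random

def attack(attack_bonus, target_armor_class, damage_die=8):
    """Roll a D20 to hit: a natural 1 always misses, a natural 20
    always hits, and anything else hits only if roll + bonus meets
    the target's armor class. Return the damage dealt (0 on a miss)."""
    roll = random.randint(1, 20)
    if roll == 1:
        return 0  # automatic failure
    if roll == 20 or roll + attack_bonus >= target_armor_class:
        return random.randint(1, damage_die)  # roll the damage die
    return 0  # a plain miss

# A +5 attack against armor class 13, dealing 1d8 on a hit.
damage = attack(attack_bonus=5, target_armor_class=13)
```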

Dungeons & Dragons was merely the first of its kind, as many other role-playing games have sprung up over the intervening decades since Gary Gygax’s first excursions into the dungeons of Greyhawk. And yet, forty-odd years later, D&D is still a great place to start if you want to pick up an RPG, or if you want a more imaginative way to spend a few hours with friends. I highly recommend it, if you get the chance.

The players of Critical Role pose with far more panache than I could ever muster.

Next: Which abilities best represent characters?