First published September 2020
Questions? Suggestions? E-mail me:
The vast majority of people on earth argue in a destructive fashion. Debates, especially in online spaces, are viewed as a battle of wits in which egos are put on display and there can be only one “winner”.
Instead, we should be arguing in a constructive fashion: treating arguments as an opportunity to expand knowledge, finding points of disagreement, and collaborating towards a common truth.
I have a confession to make: I used to be a destructive arguer. When I was younger, my goal in any argument was not to learn something new, but rather to assure my superiority over what I felt to be the clear stupidity of the other side. I even used to save screenshots of debates I had on various forums and social media platforms, returning periodically to reminisce about past skirmishes in which I “owned the conservatives”.
Luckily, several years spent abroad gave me a different perspective. I realized that in the small-sided debates I used to engage in back home, my positions lacked the nuance and context of the greater world. For the first time, I began to do deep research on how to think — and argue — more clearly, drawing on concepts from philosophy, psychology, and behavioral economics.
This widened outlook led me to see arguments as a chance to build value, rather than destroy it. Instead of going through the mental anguish of battle, I now follow a collaborative approach to debate that I’d like to share in this guide in hopes that it will inspire others to argue more constructively.
Note that the content in this guide will focus on arguments about public issues, like politics and religion, as opposed to personal issues, like “you need to communicate more” or “you haven’t done the dishes in weeks”. Though there is overlap between the two, interpersonal arguments are much more complex and require more nuance than this guide can provide, plus there are already a ton of great resources out there that explore these topics more thoroughly.
“An argument should be a collaboration between two people to find the truth.”
If I had to distill this guide down to one sentence, it would be the above. Even if you forget the individual tenets and strategies this guide has to offer, as long as you are treating any given argument as a collaboration in search of truth, you can’t go wrong.
Arguing more effectively requires detaching yourself from the idea of “winning” in the traditional sense. Instead, you should declare victory when you have argued in good faith and kept an open mind.
True collaboration requires that both parties open an investigation into why they may be wrong and consider changing their beliefs. Which brings us to the three core tenets of a constructive debate mindset:
Was there ever a time in which you had a deeply-held belief about something, but slowly came to realize that you were wrong? Maybe you thought a past partner was “the one”, or you were devoted to a religious faith. Or perhaps something as simple as believing in Santa Claus.
What’s to say that couldn’t happen with your other deeply-held beliefs, given enough evidence?
Go into every debate with the mindset that you may not know everything about the topic at hand, and in fact may be wrong.
If you successfully acknowledge that you may be wrong, it follows that you must then be willing to change your mind. Having the humility to admit that your mind has been changed is one of the most honorable positions in a good faith debate.
In a war, all soldiers take an oath to fight for their own side, no matter how much they agree with its principles. Eliezer Yudkowsky once observed that in political debates, arguments are treated like soldiers: “Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.“
Because most people go into debate with a war-like mentality, they feel they must fly the flag for all points that their side supports, regardless of how much they actually agree with them.
The red state gun-owner must be pro-religion, anti-abortion, anti-drugs, anti-tax, and skeptical of gender issues.
The blue state Subaru-owner must be anti-religion, pro-abortion, pro-drugs, pro-tax(ing-the-rich), and concerned about gender issues.
Most annoying of all, given the societal expectations around this divide, being for or against one issue immediately assigns you to a “side” in the eyes of everyone involved. Breaking out of this Arguments as Soldiers mindset involves two steps:
1. Do not be afraid to agree with the arguments of the other side when they strike you as reasonable, and critique the arguments of your own side when they strike you as unreasonable (better yet, try not to have a side).
2. On the flip side, avoid stereotyping your debate partner based on one opinion. If you are engaging with someone in debate for the first time, assume that they agree with you on every position other than the one they are defending, until proven otherwise.
Brace yourself, Star Wars references incoming:
In a given debate, almost everyone thinks they are a member of the Jedi order, fighting for all that is virtuous and good in the universe. Yet for every Jedi, there’s a Sith out there who thinks that the Jedi are evil and wrong and that they are actually the ones fighting for virtue and good. Remember that this person might even be you.
Of course, you are not an agent of pure good, and your debate partner is not an agent of pure evil (or vice versa). You are simply citizens of the galaxy who happen to be operating with different sets of information. Look at the situation from a different perspective: if you were raised with Sith beliefs from childhood, don’t you think you might believe the exact same things a Sith would?
In debate, your goal should not be to strike down the side of evil with all your hatred, but rather work together with them to uncover the true facts about the universe, and in doing so perhaps change both your perspectives.
It would be great if choosing to pursue the path of arguing constructively was just a matter of changing your mindset overnight, but as Carl Sagan once said: “If you wish to bake an apple pie from scratch, you must first invent the universe.”
In the same vein, if you wish to improve the constructiveness of the debates you engage in, you must first spend time re-inventing your entire mind.
This is because our mind is constantly working against us, plagued by ancient errors from the times in which we lived in caves and hunted woolly mammoths. These errors work against us in the form of cognitive biases and logical fallacies, which hinder our ability to clearly see reality and engage in sound debate.
Cognitive biases are limits and mistakes in human judgement that prevent someone from acting rationally. They are present in every aspect of human life, and in tense situations like arguments, they tend to appear more often as emotions are heightened and the brain gets overloaded.
Common examples that relate to debate are confirmation bias (the tendency to seek out information that confirms existing beliefs) and ingroup bias (the tendency to agree more strongly with people who appear to be part of our “tribe”). There are over 100 identified biases, and it’s worth reading through the Wikipedia article on the most common cognitive biases so you can recognize when they might be clouding your thinking.
In part caused by cognitive biases, logical fallacies are errors in argument that sound decisive despite making points that don’t hold up to logical scrutiny. They are often used unintentionally, due to bias, carelessness, or ignorance, but unfortunately they can also be wielded intentionally by a shrewd debate partner.
Common examples in debate include the false dilemma fallacy: “you’re either with us or against us”, and the slippery slope fallacy: “if we allow the gays to marry, what’s next: plants?” Just like cognitive biases, there are a large number of identified logical fallacies, and it’s worth reviewing the entire list so you can spot them in your own arguments and in those of others.
To the untrained eye, a debate might look like two or more parties trading argumentative points back and forth. But interestingly, these points can almost perfectly be classified into a few categories. Understanding these categories, and why some types of arguments are better than others, is crucial for recognizing how we and our debate partners shape our points. In a brilliant post called Varieties of Argumentative Experience, Scott Alexander does just this, illuminating and labeling practically every part of a debate. The post itself is basically required reading, but it is long-ish, so I will summarize it here.
In general, the lower on the pyramid you are, the worse debate you’re having. The goal should be to start as high as possible and continue to work your way towards the top.
Debates on Twitter and other forms of social media are almost guaranteed to never rise above the lower dotted line, as these platforms don’t allow for more nuanced debate. Everything above the higher dotted line is our gold standard: two intelligent, charitable, and well-versed debaters can successfully maintain a debate at this level until some form of resolution.
The blue side represents the discussion surrounding facts, and the red side represents the discussion surrounding the philosophy behind them: how the arguments must fit together before one side can be judged right or wrong.
The meta-debate is represented outside of our pyramid of debate as a Sphinx, because it “guards” the debate itself.
Most of what people do in disagreements about political or social issues is just debate about the debate, without actually engaging in the debate. This can come in many forms:
- discussion about the debate itself: “OK, this is actually getting unproductive. I’m leaving” or “This is really the wrong platform to be having this debate on”
- sensitivity concerns: “Wow, you seem angry” or the converse “Stop tone policing”
- painting with broad strokes: “It’s not worth it to debate environmentalists. You’ll never convince them”
Though these concerns may not necessarily be wrong, and may often be necessary to maintain good debate norms, we over-value them, and meta-debate comments can often crowd out actual productive debate.
Moving inside the pyramid from the bottom up, the majority of comments inside most debates can be classified as social shaming. Examples are: “I can’t believe it’s 2020 and we’re still listening to white males on this issue” or “Just another purple-haired SJW snowflake who thinks all disagreement is oppression.” Fans of logical fallacies may recognize this as an ad hominem fallacy, but social shaming goes above and beyond: it is an intentional ad hominem used to frame the “other side”, by virtue of their status or tribal affiliation, as completely unworthy of participating in the conversation. Social shaming should be avoided entirely. Remember, what’s at stake are the issues, not the people debating them.
The next tier up are gotchas: short, catchy arguments that make great soundbites, but are usually either irrelevant to the argument at hand or based on a logical fallacy. Example: “If you hate America so much, why don’t you just leave?” This is clearly a fallacy. One could hate America but want to stay and make it a better place, or hate America but think that all other countries are worse, or simply just not want to incur the costs of moving. If you find yourself cheering on a short, pithy statement that paints the other side in broad strokes, you’re probably falling victim to a gotcha.
Spotting a single fact is a sign that your debate has passed the first dotted line and is at least minimally productive. While more commendable than shaming or trying to trick someone, single facts don’t add much to a debate. Even examples that are technically true, like “The UK has gun control, and the murder rate there is only a quarter of the USA’s” or “Hillary Clinton is awful, she handled her emails in a scandalously incompetent manner and tried to cover it up” fall victim to logical fallacies, like correlation not implying causation (the US murder rate might be higher due to factors outside of gun control) and the fallacy of composition (just because Hillary Clinton did one bad thing doesn’t necessarily mean she is a bad candidate).
Moving up the pyramid, a single study is always better than a single fact, because it at least provides a source where a competent third party looked into the issue and reached a conclusion. However, studies are not infallible. At the click of a button, one can find a study to represent almost any point of view. Study conclusions can misrepresent reality due to experimenter bias, p-hacking, and similar problems.
The final level of fact-based debate is a good faith survey of evidence. This requires a lot of work, as it involves a deep dive into a position: reading the most relevant studies on both sides, examining each study’s potential biases, and reporting back. An example of what that might look like is: “I just reviewed several studies, and it seems that this level of gun control would cause 500 fewer murders a year, but also prevent 50 law-abiding gun owners from defending themselves. Overall I think that would be worth it.” The phrase good faith here is very important: it’s easy to cherry-pick studies that support your position, but performing a good faith survey means looking at many relevant studies on the topic, picking only the ones with the strongest scientific rigor, and concluding from there.
We now move to the red side of the pyramid graphic, which represents the philosophical side of the debate:
Isolated demands for rigor are typically quick attempts to demand that an argument satisfy requirements so strict that they are almost impossible to comply with, especially in the context of a real-time debate. An example might be saying: “You can’t be an atheist if you can’t prove God doesn’t exist.” On its surface, this sounds like a catchy argument, but if you turn it around, it doesn’t make sense: “You can’t tell me Bigfoot isn’t real without proving it doesn’t exist” is something we’d never accept, for instance. Whenever someone is forcing you to comply with invented standards they wouldn’t apply to their own arguments, this is usually an isolated demand for rigor.
Next up is disputing definitions, also known as an argument about semantics. Debates that reach this point can languish and falter because they immediately become about philosophical semantics, rather than the argument itself. If you start hearing arguments like: “Abortion is just state-sanctioned murder” or “Capitalism is terrorism”, you’re in for a bad time and you should attempt to elevate the level of debate immediately. The article 37 Ways That Words Can Be Wrong is a great primer on why semantics is a deathtrap for debates.
Clarifying is when people try to figure out what their opponent’s position is: “Are you opposed to laws saying that convicted felons can’t get guns? What about laws saying that there has to be a waiting period?” Clarifying someone’s position is generally fine, because there are often so many misconceptions about what people actually believe, but it can quickly devolve into strawmanning: questions like “so you’re saying that rape is good and we should have more of it?” are all too common and should obviously be avoided.
Almost to the top: operationalizing happens when both parties know exactly what their positions are, what the terms within them mean, and what the exact issue under question is. These typically resolve to one goalpost, for example: “If the US were to raise the national minimum wage to $15, the average poor person would be better off.” An argument is operationalized when every part of it has either been reduced to a factual question with a real answer, or when it’s obvious exactly what kind of non-factual disagreement is going on (like a conflict in values). Typically, all that’s left to resolve the disagreement is just a good faith survey of evidence.
The top level of the pyramid is occupied by high level generators of disagreement. These arise when everyone involved knows exactly what’s being argued and agrees on what the facts and evidence say, yet disagreement persists for some harder-to-pin-down reason. Usually these boil down to conflicts about values, ethics, or philosophy. For example:
“Capital punishment might decrease crime, but I draw the line at intentionally killing people. I don’t want to live in a society that does that, no matter what its reasons.”
You could try arguing this further, but it gets difficult as these sorts of value judgments are often based on far-reaching cultural norms, or hundreds of past experiences with similar issues. In any case, any argument that’s gotten to this point has been well-argued and both parties should be happy with the result.
A changed mindset and a better understanding of how a debate breaks down get us most of the way towards a constructive debate, but there are still a number of specific strategies we can employ to make sure a debate stays as constructive as possible:
No one believes anything with 100% certainty. Debating in good faith includes letting your opponent know exactly how strongly you hold each belief, especially when you put forth an argument you are less sure about.
You might say something like, “I also have a feeling that arming teachers would reduce gun violence in schools, but I’m less sure about that than [other belief you have]”. If you’re really feeling saucy, you can assign a numerical value to your beliefs, like: “I think there’s about a 70% probability that arming teachers would reduce gun violence in schools.”
The crux is the point where your argument and your opponent’s intersect. Though finding this point may sound simple, it doesn’t actually happen in most debates: the two sides just talk past each other, each attacking the other’s strawman. In the following example, two people reach a disagreement:
Person A: Apples grow on trees.
Person B: No they don’t.
In this situation, many arguers simply stop here and assume that the crux of their argument is whether apples grow on trees or not, and then continue to loudly shout contradictions at each other:
Person A: Yes they do!
Person B: No they don’t.
However, we can solve this with a technique known as double cruxing, where both parties abstract their arguments by one level and find a falsifiable fact that, if proven true, would cause them to change their beliefs.
Person A: I would change my belief if we examined all the trees in the world and found that none of them bore apples.
Person B: I would change my belief if we found a single tree from which apples grew.
Sometimes, it’s that simple, but other times, we might have to double crux again:
Person A: They do grow on trees! Look at that tree over there, it has apples on it.
Person B: That’s not a tree, it’s just an unusually large fern.
A-ha! We see that the crux of the disagreement was not actually about whether apples grow on trees, but about each person’s definition of tree (you may recognize this as disputing definitions from our pyramid above). Both need to double crux again:
Person A: I would change my belief that this proves my point if this tree were found out to actually be a fern.
Person B: I would change my belief if we found out that this fern were actually a tree.
Note that this may go down a rabbit hole (for example, say Person A tried to resolve by looking the word tree up in a dictionary, but Person B was unhappy about the reliability of the dictionary. The crux, now seemingly about dictionary reliability, has been abstracted by another level.) It’s important to recognize when this is happening and get the debate back on track.
Echoing means restating someone’s point back to them to make sure you understood it correctly. This is more necessary for synchronous debates where information flows quickly.
“Just to make sure I’m clear, you’re saying that you’re opposed to rent control primarily because it just causes landlords to push rent increase costs onto new renters?”
Not only does this make them feel heard, it also allows them the chance to clarify their beliefs.
Always leave your partner a line of retreat. No one wants to lose face, and giving someone no option to easily bow out of a debate can lead to explosive consequences. This probably goes without saying, but a simple way to do this is just to keep your debates polite and treat those who disagree with you with respect, no matter how much you may believe they are wrong.
Arguing in good faith does not mean becoming a completely rational being, devoid of emotion. It does, however, mean introspecting on which emotions may be affecting your points and doing your best to remain objective. Taking note of your debate partner’s emotions and how they may be affecting his or her arguments is similarly valuable.
If I had to pick the most important technique on this list, steelmanning would be it.
Earlier in this guide I referenced the strawman logical fallacy, where one party in a debate intentionally exaggerates or misrepresents another party’s position, then attacks it, in order to make their own argument look stronger.
As you can probably guess, a steelman is the exact opposite. Instead of taking on a weaker version of your opponent’s argument, help the entire debate out by thinking of the best and most charitable version of your opponent’s argument, then repeat it back to them to see if it makes sense. Once you are both in agreement, resume the debate.
Because we are so programmed to discount any arguments that oppose our point of view, steelmanning can be difficult for the budding arguer, but can be improved with practice. Here’s the specific checklist:
- Listen to the argument and take extra time to think critically about what the person might be saying. Consider not just their words, but their background, beliefs, and understanding of the issue.
- Make a mental list of all possible interpretations of the argument, sorted from most rational to most irrational. Select the most rational interpretation.
- If the argument doesn’t make sense, try to reconstruct it as charitably as possible, giving all favor in any ambiguous sections to your opponent.
- If the argument has been reconstructed, but still doesn’t seem as strong as it should be, brainstorm ways in which their argument could be amended to be even stronger.
Here’s an example of how this could go in a very common situation where someone gives a short, pithy argument that could be easily misinterpreted:
Person A: Defund the police!
Person B [Strawman]: Wow, OK, so you think we should remove all funding from the police and leave our community completely defenseless against criminals?
Person A: Defund the police!
Person B [Steelman]: OK, my first reaction is that I don’t quite understand what that means. I suppose the best interpretation of this argument is that we would remove funding from the police and invest it into mental health first responders, social programs, criminal justice reform, and other longer-term initiatives that would reduce criminality and keep our community safe without the need for a higher police budget. Does that vibe with you?
Your goal should be to get so good at steelmanning all types of arguments that you can pass the so-called Ideological Turing Test. To pass this test, you should be able to argue so persuasively and passionately for the other side that your text alone would pass for an argument proposed by someone who opposes your position.
Apart from the specific strategies, I’ve collected a number of tips I’ve employed in the pursuit of more fruitful debate.
The rise of social media has undoubtedly led to the rise of destructive arguments. Instead of looking another person in the eyes and reading their expressions and emotions, we find ourselves trading blows with a profile photo and a bio, our arguments reduced to 280 characters.
The closer you can come to arguing in person, the better chance both partners have of empathizing with each other and seeing each other as human beings. The more toxic and hostile the argument, the more this is recommended.
It is important to note, however, that arguing in person comes with its own set of issues and strategies, as it makes all arguments synchronous, meaning that both parties have less time to think about and formulate arguments or look up evidence.
If you can’t argue in person, platforms to use, from most to least effective, are: video chat > phone > postal mail > long-form social media (Facebook/forums) > short-form social media (Twitter).
Arguments in a public setting, like social media or a forum, are a performance. Not only is your own ego at stake, but so is your reputation with the audience. In public arguments, you will have to fight with every ounce of your will to treat arguments as a collaboration, even if it means “conceding” your point and losing face.
If you think an argument will be polemic, have it in private (preferably in person, but if not, over private message). Otherwise, if you must engage in public debate, keep in mind that your conduct has the ability to influence anyone who might view it. Treating these debates as constructively as possible sets the example for how debate can be conducted, even to a silent viewer.
Debates about personal identity, like race, gender, religion, or sexual orientation, can easily become inflammatory. When people feel that something so close to them is in question, they often lose sight of reason and argue instead from an emotional perspective. While it’s not prudent advice to avoid identity debates entirely, anyone participating in such a debate should exercise twice the care that they would in a normal debate, and be willing to walk away if it should derail.
Parkinson’s law of triviality holds that insignificant, low-stakes issues tend to inspire disproportionately heated debate. Recognize when you’re in such a debate, and be willing to call it out and shift the discussion to something more important.
If we return to our key quote, “an argument should be a collaboration between two people to find the truth”, I’ve found that many people, with the excuse of saving time or saving face, stop one step short of collaboration and instead arrive at cooperation. Examples of cooperation are agreeing to disagree (which, per Aumann’s agreement theorem, ideally rational agents with shared information should never need to do), agreeing to compromise, or slowly drifting away from the original source of the disagreement to find something minor that both sides agree on, then declaring victory.
While these are certainly preferable to conflict, settling for cooperation means that neither party actually has a chance to update their prior beliefs to get them closer to the truth. Think about a debate in which someone said “I guess we agree to disagree” early on. Did either side actually learn anything?
Try as you might, some people are not yet capable of engaging in constructive debate, and there’s nothing you can do to sway them. If your debate partner refuses to engage in good faith debate, consider walking away. Our brains are highly susceptible to the “sunk cost fallacy“, which makes us want to continue something we’ve invested time and energy into. This is, of course, one of those times when it might actually make sense to “agree to disagree”, help that person save face, and slink away into the night.
Share Your Experience
Sharing your experience both humanizes you and lets your debate partner know why you might be biased in some direction. Examples: “I was raised in an orphanage and grew up in foster care, so…” or “My parents were evangelical Christians, so…”
This may sound callous, but if you’ve tried every strategy in this guide and your debate partner continues to operate at the bottom of the pyramid (for example, continuing to spout inflammatory ad hominem attacks), sometimes the best way to empathize with them is to pretend they have an undiagnosed mental illness.
When in an altered state, the very nature of reality shifts and warps, and the affected person has no way of coming to grips with the reality most people identify with. What if your debate partner were experiencing this (or maybe they are)? Don’t you think you might treat them with more compassion?
The intention of this guide is to compile the great writings about constructive debate available online into a no-nonsense reference that can be shared with aspiring good faith debaters and prospective debate partners. If you enjoyed it, I hope you’ll consider spreading these ideas and helping us live in a more constructive world by bookmarking it as a reference and passing it along.
If you have suggestions for improvements to the guide, let me know: .
Aside from the sources cited in the text, writings from Gleb Tsipursky, Richard Acton, and Paul Graham provided valuable resources in shaping my thoughts on better debate. Also, Jeff Ammons used his diagram magic to reformat Scott Alexander’s argumentative pyramid into a more readable version.