I find the list below useful, so I thought I would post it. It includes short abstracts of all of the wiki items and a few other topics on Less Wrong. I grouped the items into some rough categories just to break up the list. I tried to put each item in the right category, but some items could belong in multiple categories or might be better off in a different one. The wiki page from which I got all the items is here.

The categories are:

Property Attribution

Epistemic

Instrumental

Positions

Property Attribution

Barriers, biases, fallacies, impediments and problems

  • Affective death spiral - positive attributes of a theory, person, or organization combine with the Halo effect in a feedback loop, resulting in the subject of the affective death spiral being held in higher and higher regard.
  • Anthropomorphism - the error of attributing distinctly human characteristics to nonhuman processes.
  • Bystander effect - a social psychological phenomenon in which individuals are less likely to offer help in an emergency situation when other people are present.
  • Connotation - emotional association with a word. You need to be careful that you are not conveying a different connotation than you mean to.
  • Correspondence bias (also known as the fundamental attribution error) - is the tendency to overestimate the contribution of lasting traits and dispositions in determining people's behavior, as compared to situational effects.
  • Death Spirals and the Cult Attractor - Cultishness is an empirical attractor in human groups, roughly an affective death spiral, plus peer pressure and outcasting behavior, plus (quite often) defensiveness around something believed to have been perfected.
  • Detached lever fallacy – the assumption that something simple for one system will be simple for others. This assumption neglects to take into account that something may only be simple because of complicated underlying machinery which is triggered by a simple action like pulling a lever. Adding this lever to something else won’t allow the action to occur because the underlying complicated machinery is not there.
  • Giant cheesecake fallacy - occurs when an argument leaps directly from capability to actuality, without considering the necessary intermediate of motive. An example of the fallacy might be: a sufficiently powerful Artificial Intelligence could overwhelm any human resistance and wipe out humanity. (Belief without evidence: the AI would decide to do so.) Therefore we should not build AI.
  • Halo effect – specific type of confirmation bias, wherein positive feelings in one area cause ambiguous or neutral traits to be viewed positively.
  • Illusion of transparency - misleading impression that your words convey more to others than they really do.
  • Inferential distance - a gap between the background knowledge and epistemology of a person trying to explain an idea, and the background knowledge and epistemology of the person trying to understand it.
  • Information cascade - occurs when people signal that they have information about something, but actually base their judgment on other people's signals, resulting in a self-reinforcing community opinion that does not necessarily reflect reality.
  • Mind projection fallacy - occurs when someone thinks that the way they see the world reflects the way the world really is, going as far as assuming the real existence of imagined objects.
  • Other-optimizing - a failure mode in which a person vastly overestimates their ability to optimize someone else's life, usually as a result of underestimating the differences between themselves and others, for example through the typical mind fallacy.
  • Peak-end rule - we do not judge our experiences on their net pleasantness or unpleasantness or on how long the experience lasted, but instead on how they were at their peak (pleasant or unpleasant) and how they ended.
  • Stereotype - a fixed, overgeneralized belief about a particular group or class of people.
  • Typical mind fallacy - the mistake of making biased and overconfident conclusions about other people's experience based on your own personal experience; the mistake of assuming that other people are more like you than they actually are.

Techniques/Concepts

  • ADBOC - Agree Denotationally, But Object Connotatively
  • Alien Values - There are no rules requiring minds to value life, liberty or the pursuit of happiness. An alien will have, in all probability, alien values. If an "alien" isn't evolved, the range of possible values increases even more, allowing such absurdities as a Paperclip maximizer. Creatures with alien values might well value only non-sentient life, or they might spend all their time building heaps containing prime numbers of rocks.
  • Chronophone – is a parable that is meant to convey the idea that it’s really hard to get somewhere when you don't already know your destination. If there were some simple cognitive policy you could follow to spark moral and technological revolutions, without your home culture having advance knowledge of the destination, you could execute that cognitive policy today.
  • Empathic inference – is everyday common mind-reading. It’s an inference made about another person’s mental states using your own brain as reference: by making your brain feel or think in the same way as the other person, you can emulate their mental state and predict their reactions.
  • Epistemic luck - you would have different beliefs if certain events in your life were different. How should you react to this fact?
  • Future - If it hasn't happened yet but is going to, then it's part of the future. Checking whether or not something is going to happen is notoriously difficult. Luckily, the field of heuristics and biases has given us some insights into what can go wrong. Namely, one problem is that the future elicits far mode, which isn't about truth-seeking or gritty details.
  • Mental models - a hypothetical form of representation of knowledge in the human mind. Mental models form to approximately describe the dynamics of observed situations, and reuse parts of existing models to represent novel situations.
  • Mind design space - refers to the configuration space of possible minds. As humans living in a human world, we can safely make all sorts of assumptions about the minds around us without even realizing it. Each human might have their own unique personal qualities, so it might naively seem that there's nothing you can say about people you don't know. But there's actually quite a lot you can say (with high or very high probability) about a random human: that they have standard emotions like happiness, sadness, and anger; standard senses like sight and hearing; that they speak a language; and no doubt any number of other subtle features that are even harder to quickly explain in words. These things are the specific results of adaptation pressures in the ancestral environment and can't be expected to be shared by a random alien or AI. That is, humans are packed into a tiny dot in the configuration space: there is a vast range of other ways a mind can be.
  • Near/far thinking - Near and far are two modes (or a spectrum of modes) in which we can think about things. We choose which mode to think about something in based on its distance from us, or on the level of detail we need. This property of the human mind is studied in construal level theory.
    • NEAR: All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits.
    • FAR: Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits.
  • No-Nonsense Metaethics - A sequence by lukeprog that explains and defends a naturalistic approach to metaethics and what he calls pluralistic moral reductionism. We know that people can mean different things but use the same word, e.g. sound can mean auditory experience or acoustic vibrations in the air. Pluralistic moral reductionism is the idea that we do the same thing when we talk about what is moral.
  • Only the vulnerable are heroes - “Vulnerability is our most accurate measurement of courage.” – Brené Brown. To be as heroic as a man stopping a group of would-be thieves from robbing a store, Superman has to be defending the world from someone powerful enough to harm and possibly even kill him, such as Darkseid.

Epistemic

Barriers, biases, fallacies, impediments and problems

  • Absurdity heuristic – is a mental shortcut where highly untypical situations are classified as absurd or impossible. Where you don't expect intuition to construct an adequate model of reality, classifying an idea as impossible may be overconfident.
  • Affect heuristic - a mental shortcut that makes use of current emotions to make decisions and solve problems quickly and efficiently.
  • Arguing by analogy – is arguing that since things are alike in some ways, they will probably be alike in others. While careful application of argument by analogy can be a powerful tool, there are limits to the method after which it breaks down.
  • Arguing by definition – is arguing that something is part of a class because it fits the definition of that class. It is recommended to avoid this wherever possible and instead treat words as labels that cannot capture the rich cognitive content that actually constitutes their meaning. As Feynman said: “You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird... So let's look at the bird and see what it's doing -- that's what counts.” It is better to keep the focus on the facts of the matter and try to understand what your interlocutor is trying to communicate than to get lost in a pointless discussion of definitions that bears no fruit.
  • Arguments as soldiers – is a problematic scenario where arguments are treated like war or battle. Arguments get treated as soldiers, weapons to be used to defend your side of the debate, and to attack the other side. They are no longer instruments of the truth.
  • Availability heuristic – a mental shortcut that treats easily recalled information as important, or at least more important than alternative solutions which are not as readily recalled.
  • Belief as cheering - People can bind themselves as a group by believing "crazy" things together. They can then show the same pride in their crazy belief among outsiders as they would show wearing "crazy" group clothes. The belief is more like a banner saying "GO BLUES". It isn't a statement of fact, or an attempt to persuade; it doesn't have to be convincing—it's a cheer.
  • Beware of Deepities - A deepity is a proposition that seems both important and true—and profound—but that achieves this effect by being ambiguous. An example is "love is a word". One interpretation is that “love”, the word, is a word and this is trivially true. The second interpretation is that love is nothing more than a verbal construct. This interpretation is false, but if it were true would be profound. The "deepity" seems profound due to a conflation of the two interpretations. People see the trivial but true interpretation and then think that there must be some kind of truth to the false but profound one.
  • Bias - a systematic deviation from rationality committed by our cognition. Biases are specific, predictable error patterns in the human mind.
  • Burdensome details - Adding more details to a theory may make it sound more plausible to human ears because of the representativeness heuristic, even as the story becomes normatively less probable, as burdensome details drive the probability of the conjunction down (this is known as conjunction fallacy). Any detail you add has to be pinned down by a sufficient amount of evidence; all the details you make no claim about can be summed over.
  • Compartmentalization - a tendency to restrict application of a generally-applicable skill, such as scientific method, only to select few contexts. More generally, the concept refers to not following a piece of knowledge to its logical conclusion, or not taking it seriously.
  • Conformity bias - a tendency to behave similarly to the others in a group, even if doing so goes against your own judgment.
  • Conjunction fallacy – involves the assumption that specific conditions are more probable than more general ones.
  • Contagion heuristic - leads people to avoid contact with people or objects viewed as "contaminated" by previous contact with someone or something viewed as bad—or, less often, to seek contact with objects that have been in contact with people or things considered good.
  • Costs of rationality - Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.
  • Defensibility - arguing that a policy is defensible rather than optimal or that it has some benefit compared to the null action rather than the best benefit of any action.
  • Fake simplicity – if you have a simple answer to a complex problem then it is probably a case whereby your beliefs appear to match the evidence much more strongly than they actually do. “Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.” —H. L. Mencken
  • Fallacy of gray, also known as the Continuum fallacy – is the false belief that because nothing is certain, everything is equally uncertain. It does not take into account that some things are more certain than others.
  • False dilemma - occurs when only two options are considered, when there may in fact be many.
  • Filtered evidence – is evidence that was selected for the purpose of proving (disproving) a hypothesis. Filtered evidence may be highly misleading, but can still be useful, if considered with care.
  • Generalization from fictional evidence – logical fallacy that consists of drawing real-world conclusions based on statements invented and selected for the purpose of writing fiction.
  • Groupthink - the tendency of humans to agree with each other, and to hold back objections or dissent even when the group is wrong.
  • Hindsight bias – is the tendency to overestimate the foreseeability of events that have actually happened.
  • Information hazard – is a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.
  • In-group bias - preferential treatment of people and ideas associated with your own group.
  • Mind-killer - a name given to topics (such as politics) that tend to produce extremely biased discussions. Another cause of mind-killers is social taboo. Negative connotations are associated with some topics, thus creating a strong bias supported by signaling drives that makes non-negative characterization of these topics appear absurd.
  • Motivated cognition – is the unconscious tendency of individuals to fit their processing of information to conclusions that suit some end or goal.
  • Motivated skepticism also known as disconfirmation bias - the mistake of applying more skepticism to claims that you don't like (or intuitively disbelieve) than to claims that you do like.
  • Narrative fallacy – is a vulnerability to over interpretation and our predilection for compact stories over raw truths.
  • Overconfidence - the state of being more certain than is justified, given your priors and the evidence available.
  • Planning fallacy - predictions about how much time will be needed to complete a future task display an optimistic bias (underestimate the time needed).
  • Politics is the Mind-Killer – Politics is not a good area for rational debate. It is often about status and power plays where arguments are soldiers rather than tools to get closer to the truth.
  • Positive bias - tendency to test hypotheses with positive rather than negative examples, thus risking to miss obvious disconfirming tests.
  • Priming - psychological phenomenon that consists in early stimulus influencing later thoughts and behavior.
  • Privileging the hypothesis – is singling out a particular hypothesis for attention when there is insufficient evidence already in hand to justify such special attention.
  • Problem of verifying rationality – is the single largest problem for those desiring to create methods of systematically training for increased epistemic and instrumental rationality - how to verify that the training actually worked.
  • Rationalization – starts from a conclusion, and then works backward to arrive at arguments apparently favouring that conclusion. Rationalization argues for a side already selected. The term is misleading as it is the very opposite and antithesis of rationality, as if lying were called "truthization".
  • Reason as memetic immune disorder – the problem that when you are rational you deem your conclusions more valuable than those of non-rational people. This can end up being a problem, as you are less likely to update your beliefs when they are opposed. This adds the risk that if you adopt one false belief and then rationally deduce a plethora of others from it, you will be less likely to update any erroneous conclusions.
  • Representativeness heuristic – a mental shortcut where people judge the probability or frequency of a hypothesis by considering how much the hypothesis resembles available data as opposed to using a Bayesian calculation.
  • Scales of justice fallacy - the error of using a simple polarized scheme for deciding a complex issue: each piece of evidence about the question is individually categorized as supporting exactly one of the two opposing positions.
  • Scope insensitivity – a phenomenon related to the representativeness heuristic where subjects based their willingness-to-pay mostly on a mental image rather than the effect on a desired outcome. An environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds, even though in fact the former measure is two orders of magnitude more effective.
  • Self-deception - state of preserving a wrong belief, often facilitated by denying or rationalizing away the relevance, significance, or importance of opposing evidence and logical arguments.
  • Status quo bias - people tend to avoid changing the established behavior or beliefs unless the pressure to change is sufficiently strong.
  • Sunk cost fallacy - Letting past investment (of time, energy, money, or any other resource) interfere with decision-making in the present in deleterious ways.
  • The top 1% fallacy - related to not taking into account the idea that a small sample size is not always reflective of a whole population and that sample populations with certain characteristics, e.g. made up of repeat job seekers, are not reflective of the whole population.
  • Underconfidence - the state of being more uncertain than is justified, given your priors and the evidence you are aware of.
  • Wrong Questions - A question about your map that wouldn’t make sense if you had a more accurate map.

Techniques/Concepts

  • Absolute certainty – equivalent of a Bayesian probability of 1. Losing an epistemic bet made with absolute certainty corresponds to receiving an infinite negative payoff, according to the logarithmic proper scoring rule. (A short illustration of this scoring rule appears after this list.)
  • Adaptation executors - Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. Our taste buds do not find lettuce delicious and cheeseburgers distasteful even once we are fed a diet too high in calories and too low in micronutrients. Taste buds are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor. Evolution operates on too slow a timescale to re-adapt to new conditions (such as a new diet).
  • Adversarial process - a form of truth-seeking or conflict resolution in which identifiable factions hold one-sided positions.
  • Altruism - Actions undertaken for the benefit of other people. If you do something to feel good about helping people, or even to be a better person in some spiritual sense, it isn't truly altruism.
  • Amount of evidence - to a Bayesian, evidence is a quantitative concept. The more complicated or a priori improbable a hypothesis is, the more evidence you need just to justify it, or even just to single it out from amongst the mass of competing theories.
  • Anti-epistemology - is bad explicit beliefs about rules of reasoning, usually developed in the course of protecting an existing false belief - false beliefs are opposed not only by true beliefs (that must then be obscured in turn) but also by good rules of systematic reasoning (which must then be denied). The explicit defense of fallacy as a general rule of reasoning is anti-epistemology.
  • Antiprediction - is a statement of confidence in an event that sounds startling, but actually isn't far from a maxentropy prior. For example, if someone thinks that our state of knowledge implies strong ignorance about the speed of some process X on a logarithmic scale from nanoseconds to centuries, they may make the startling-sounding statement that X is very unlikely to take 'one to three years'.
  • Applause light - is an empty statement which evokes positive affect without providing new information.
  • Artificial general intelligence – is a machine capable of behaving intelligently over many domains.
  • Bayesian - Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.
  • Aumann's agreement theorem – roughly speaking, says that two agents acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesians, share common priors, and have common knowledge of each other's current probability assignments, then they must have equal probability assignments.
  • Bayesian decision theory – is a decision theory which is informed by Bayesian probability. It is a statistical system that tries to quantify the tradeoff between various decisions, making use of probabilities and costs.
  • Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.
  • Bayes' theorem - A law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate. (A worked example in code appears after this list.)
  • Belief - the mental state in which an individual holds a proposition to be true. Beliefs are often metaphorically referred to as maps, and are considered valid to the extent that they correctly correspond to the truth. A person's knowledge is a subset of their beliefs, namely the beliefs that are also true and justified. Beliefs can be second-order, concerning propositions about other beliefs.
  • Belief as attire – is an example of an improper belief promoted by identification with a group or other signaling concerns, not by how well it reflects the territory.
  • Belief in belief - Where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. Were you to really believe and not just believe in belief, the consequences of error would be much more severe. When someone makes up excuses in advance, it would seem to require that belief, and belief in belief, have become unsynchronized.
  • Belief update - what you do to your beliefs, opinions and cognitive structure when new evidence comes along.
  • Bite the bullet - is to accept the consequences of a hard choice, or unintuitive conclusions of a formal reasoning procedure.
  • Black swan – is a high-impact event that is hard to predict (but not necessarily of low probability). It is also an event that is not accounted for in a model and therefore causes the model to break down when it occurs.
  • Cached thought – is an answer that was arrived at by recalling a previously-computed conclusion, rather than performing the reasoning from scratch.
  • Causal Decision Theory – a branch of decision theory which advises an agent to take actions whose causal consequences maximize the probability of desired outcomes.
  • Causality - refers to the relationship between an event (the cause) and a second event (the effect), where the second event is a direct consequence of the first.
  • Church-Turing thesis - states the equivalence between the mathematical concepts of algorithm or computation and the Turing machine. It asserts that if some calculation is effectively carried out by an algorithm, then there exists a Turing machine which will compute that calculation.
  • Coherent Aggregated Volition - is one of Ben Goertzel's responses to Eliezer Yudkowsky's Coherent Extrapolated Volition, the other being Coherent Blended Volition. CAV would be a combination of the goals and beliefs of humanity at the present time.
  • Coherent Blended Volition - Coherent Blended Volition is a recent concept coined in a 2012 paper by Ben Goertzel with the aim to clarify his Coherent Aggregated Volition idea. This clarification follows the author's attempt to develop a comprehensive alternative to Coherent Extrapolated Volition.
  • Coherent Extrapolated Volition – is a term developed by Eliezer Yudkowsky while discussing Friendly AI development. It’s meant as an argument that it would not be sufficient to explicitly program our desires and motivations into an AI. Instead, we should find a way to program it in a way that it would act in our best interests – what we want it to do and not what we tell it to.
  • Color politics - the words "Blues" and "Greens" are often used to refer to two opposing political factions. Politics commonly involves an adversarial process, where factions usually identify with political positions, and use arguments as soldiers to defend their side. The dichotomies presented by the opposing sides are often false dilemmas, which can be shown by presenting third options.
  • Common knowledge - in the context of Aumann's agreement theorem, a fact is part of the common knowledge of a group of agents when they all know it, they all know that they all know it, and so on ad infinitum.
  • Conceptual metaphor – neurally-implemented mappings between concrete domains of discourse (often related to our body and perception) and more abstract domains. These are a well-known source of bias and are often exploited in the Dark Arts. An example is “argument is war”.
  • Configuration space - is an isomorphism between the attributes of something, and its position on a multidimensional graph. Theoretically, the attributes and precise position on the graph should contain the same information. In practice, the concept usually appears as a suffix, as in "walletspace", where "walletspace" refers to the configuration space of all possible wallets, arranged by similarity. Walletspace would intersect with leatherspace, and the set of leather wallets is a subset of both walletspace and leatherspace, which are both subsets of thingspace.
  • Conservation of expected evidence - a theorem that says: "for every expectation of evidence, there is an equal and opposite expectation of counterevidence". 0 = (P(H|E)-P(H))*P(E) + (P(H|~E)-P(H))*P(~E). (A short numerical check appears after this list.)
  • Control theory - a control system is a device that keeps a variable at a certain value, despite only knowing what the current value of the variable is. An example is a cruise control, which maintains a certain speed, but only measures the current speed, and knows nothing of the system that produces that speed (wind, car weight, grade).
  • Corrupted hardware - our brains do not always allow us to act the way we should. Corrupted hardware refers to those behaviors and thoughts that act for ancestrally relevant purposes rather than for stated moralities and preferences.
  • Counterfactual mugging - is a thought experiment for testing and differentiating decision theories, stated roughly as follows: Omega, a perfect predictor, flips a fair coin. If it comes up tails, Omega asks you to give it $100. If it comes up heads, Omega gives you $10,000, but only if it predicts that you would have handed over the $100 had the coin come up tails.
  • Counter man syndrome - wherein a person behind a counter comes to believe that they know things they don't know, because, after all, they're the person behind the counter. So they can't just answer a question with "I don't know"... and thus they make something up, without really paying attention to the fact that they're making it up. Pretty soon, they don't know the difference between the facts and their made-up stories.
  • Cox's theorem says, roughly, that if your beliefs at any given time take the form of an assignment of a numerical "plausibility score" to every proposition, and if they satisfy a few plausible axioms, then your plausibilities must effectively be probabilities obeying the usual laws of probability theory, and your updating procedure must be the one implied by Bayes' theorem.
  • Crisis of faith - a combined technique for recognizing and eradicating the whole systems of mutually-supporting false beliefs. The technique involves systematic application of introspection, with the express intent to check the reliability of beliefs independently of the other beliefs that support them in the mind. The technique might be useful for the victims of affective death spirals, or any other systematic confusions, especially those supported by anti-epistemology.
  • Cryonics - is the practice of preserving people who are dying in liquid nitrogen soon after their heart stops. The idea is that most of your brain's information content is still intact right after you've "died". If humans invent molecular nanotechnology or brain emulation techniques, it may be possible to reconstruct the consciousness of cryopreserved patients.
  • Curiosity - The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance. If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. The glory of glorious mystery is to be solved, after which it ceases to be mystery. Be wary of those who speak of being open-minded and modestly confess their ignorance. There is a time to confess your ignorance and a time to relinquish your ignorance. —Twelve Virtues of Rationality
  • Dangerous knowledge - Intelligence, in order to be useful, must be used for something other than defeating itself.
  • Dangling Node - A label for something that isn't "actually real".
  • Death - First you're there, and then you're not there, and they can't change you from being not there to being there, because there's nothing there to be changed from being not there to being there. That's death. Cryonicists use the concept of information-theoretic death, which is what happens when the information needed to reconstruct you even in principle is no longer present. Anything less, to them, is just a flesh wound.
  • Debiasing - The process of overcoming bias. It takes serious study to gain meaningful benefits, half-hearted attempts may accomplish nothing, and partial knowledge of bias may do more harm than good.
  • Decision theory – is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals.
  • Defying the data - Sometimes, the results of an experiment contradict what we have strong theoretical reason to believe. But experiments can go wrong, for various reasons. So if our theory is strong enough, we should in some cases defy the data: know that there has to be something wrong with the result, even without offering ideas on what it might be.
  • Disagreement - Aumann's agreement theorem can be informally interpreted as suggesting that if two people are honest seekers of truth, and both believe each other to be honest, then they should update on each other's opinions and quickly reach agreement. The very fact that a person believes something is Rational evidence that that something is true, and so this fact should be taken into account when forming your belief. Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong.
  • Doubt- The proper purpose of a doubt is to destroy its target belief if and only if it is false. The mere feeling of crushing uncertainty is not virtuous unto an aspiring rationalist; probability theory is the law that says we must be uncertain to the exact extent to which the evidence merits uncertainty.
  • Dunning–Kruger effect - is a cognitive bias wherein unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude. Conversely, highly skilled individuals tend to underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others.
  • Emulation argument for human-level AI – argument that since whole brain emulation seems feasible then human-level AI must also be feasible.
  • Epistemic hygiene - consists of practices meant to allow accurate beliefs to spread within a community and keep less accurate or biased beliefs contained. The practices are meant to serve an analogous purpose to normal hygiene and sanitation in containing disease. "Good cognitive citizenship" is another phrase that has been proposed for this concept[1].
  • Error of crowds - is the idea that under some scoring rules, the average error becomes less than the error of the average, thus making the average belief tautologically worse than a belief of a random person. Compare this to the ideas of modesty argument and wisdom of the crowd. A related idea is that a popular belief is likely to be wrong because the less popular ones couldn't maintain support if they were worse than the popular one.
  • Ethical injunction - rules not to do something even when it's the right thing to do. (That is, you refrain "even when your brain has computed it's the right thing to do", but this will just seem like "the right thing to do".) For example, you shouldn't rob banks even if you plan to give the money to a good cause. This is to protect you from your own cleverness (especially taking bad black swan bets), and the Corrupted hardware you're running on.
  • Evidence - for a given theory is the observation of an event that is more likely to occur if the theory is true than if it is false. (The event would be evidence against the theory if it is less likely if the theory is true.)
  • Evidence of absence - evidence that allows you to conclude some phenomenon isn't there. It is often said that "absence of evidence is not evidence of absence". However, if evidence is expected, but not present, that is evidence of absence.
  • Evidential Decision Theory - a branch of decision theory which advises an agent to take the action which, conditional on its being taken, maximizes the chances of the desired outcome.
  • Evolution - The brainless, mindless optimization process responsible for the production of all biological life on Earth, including human beings. Since the design signature of evolution is alien and counterintuitive, it takes some study to get to know your accidental Creator.
  • Evolution as alien god – is a thought experiment in which evolution is imagined as a god. The thought experiment is meant to convey the idea that evolution doesn’t have a mind. The god in the thought experiment would be a tremendously powerful, unbelievably stupid, ridiculously slow, and utterly uncaring god; a god monomaniacally focused on the relative fitness of genes within a species; a god whose attention was completely separated and working at cross-purposes in rabbits and wolves.
  • Evolutionary argument for human-level AI - an argument that uses the fact that evolution produced human level intelligence to argue for the feasibility of human-level AI.
  • Evolutionary psychology - the idea of evolution as the idiot designer of humans - that our brains are not consistently well-designed - is a key element of many of the explanations of human errors that appear on this website.
  • Existential risk – is a risk posing permanent large negative consequences to humanity which can never be undone.
  • Expected value - The expected value or expectation is the (weighted) average of all the possible outcomes of an event, weighed by their probability. For example, when you roll a die, the expected value is (1+2+3+4+5+6)/6 = 3.5. (Since a die doesn't even have a face that says 3.5, this illustrates that very often, the "expected value" isn't a value you actually expect.)
  • Extensibility argument for greater-than-human intelligence – is an argument that once we get to a human-level AGI, extensibility would make an AGI of greater-than-human intelligence feasible.
  • Extraordinary evidence - is evidence that turns an a priori highly unlikely event into an a posteriori likely event.
  • Free-floating belief – is a belief that both doesn't follow from observations and doesn't restrict which experiences to anticipate. It is both unfounded and useless.
  • Free will - means our algorithm's ability to determine our actions. People often get confused over free will because they picture themselves as being restrained rather than part of physics. Yudkowsky calls this view Requiredism, but most people just view this essentially as Compatibilism.
  • Friendly artificial intelligence – is a superintelligence (i.e., a really powerful optimization process) that produces good, beneficial outcomes rather than harmful ones.
  • Fully general counterargument - an argument which can be used to discount any conclusion the arguer does not like. Being in possession of such an argument leads to irrationality because it allows the arguer to avoid updating their beliefs in the light of new evidence. Knowledge of cognitive biases can itself allow someone to form fully general counterarguments ("you're just saying that because you're exhibiting X bias").
  • Great Filter - is a proposed explanation for the Fermi Paradox. The development of intelligent life requires many steps, such as the emergence of single-celled life and the transition from unicellular to multicellular life forms. Since we have not observed intelligent life beyond our planet, there seems to be a developmental step that is so difficult and unlikely that it "filters out" nearly all civilizations before they can reach a space-faring stage.
  • Group rationality - In almost anything, individuals are inferior to groups.
  • Group selection – is an incorrect belief about evolutionary theory that a feature of the organism is there for the good of the group.
  • Heuristic - quick, intuitive strategy for reasoning or decision making, as opposed to more formal methods. Heuristics require much less time and energy to use, but sometimes go awry, producing bias.
  • Heuristics and biases - a program in cognitive psychology that tries to work backward from biases (experimentally reproducible human errors) to heuristics (the underlying mechanisms at work in the brain).
  • Hold Off on Proposing Solutions - "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any." It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.
  • Hollywood rationality - What Spock does, not what actual rationalists do.
  • How an algorithm feels - Our philosophical intuitions are generated by algorithms in the human brain. To dissolve a philosophical dilemma, it often suffices to understand the cognitive algorithm that generates the appearance of the dilemma - if you understand the algorithm in sufficient detail. It is not enough to say "An algorithm does it!" - this might as well be magic. It takes a detailed step-by-step walkthrough.
  • Hypocrisy - the act of claiming to have motives, morals and standards one does not possess. Informally, it refers to not living up to the standards that one espouses, whether or not one sincerely believes those standards.
  • Impossibility - Careful use of language dictates that we distinguish between several senses in which something can be said to be impossible. Some things are logically impossible: you can't have a square circle or an object that is both perfectly black and perfectly not-black. Also, in our reductionist universe operating according to universal physical laws, some things are physically impossible based on our model of how things work, even if they are not obviously contradictory or contrary to reason: for example, the laws of thermodynamics give us a strong guarantee that there can never be a perpetual motion machine. It can be tempting to label as impossible very difficult problems which you have no idea how to solve. But the apparent lack of a solution is not a strong guarantee that no solution can exist in the way that the laws of thermodynamics, or Godel's incompleteness results, give us proofs that something cannot be accomplished. A blank map does not correspond to a blank territory; in the absence of a proof that a problem is insolvable, you can't be confident that you're not just overlooking something that a greater intelligence would spot in an instant.
  • Improper belief – is a belief that isn't concerned with describing the territory. A proper belief, on the other hand, requires observations, gets updated upon encountering new evidence, and provides practical benefit in anticipated experience. Note that the fact that a belief just happens to be true doesn't mean you're right to have it. If you buy a lottery ticket, certain that it's a winning ticket (for no reason), and it happens to be, believing that was still a mistake. Types of improper belief discussed in the Mysterious Answers to Mysterious Questions sequence include: Free-floating belief, Belief as attire, Belief in belief and Belief as cheering
  • Incredulity - Spending emotional energy on incredulity wastes time you could be using to update. It repeatedly throws you back into the frame of the old, wrong viewpoint. It feeds your sense of righteous indignation at reality daring to contradict you.
  • Intuition pump - a thought experiment that highlights, or "pumps", certain ideas, intuitions or concepts while attenuating others, so as to make some conclusion obvious and simple to reach. The intuition pump is a carefully designed persuasion tool in which you check to see if the same intuitions still get pumped when you change certain settings in the thought experiment.
  • Kolmogorov complexity - given a string, the length of the shortest possible program that prints it.
  • Lawful intelligence - The startling and counterintuitive notion - contradicting both surface appearances and all Deep Wisdom - that intelligence is a manifestation of Order rather than Chaos. Even creativity and outside-the-box thinking are essentially lawful. While this is a complete heresy according to the standard religion of Silicon Valley, there are some good mathematical reasons for believing it.
  • Least convenient possible world – is a technique for enforcing intellectual honesty, to be used when arguing against an idea. The essence of the technique is to assume that all the specific details will align with the idea against which you are arguing, i.e. to consider the idea in the context of a least convenient possible world, where every circumstance is colluding against your objections and counterarguments. This approach ensures that your objections are strong enough, running minimal risk of being rationalizations for your position.
  • Logical rudeness – is a response to criticism which insulates the responder from having to address the criticism directly. For example, ignoring all the diligent work that evolutionary biologists did to dig up previous fossils, and insisting you can only be satisfied by an actual videotape, is "logically rude" because you're ignoring evidence that someone went to a great deal of trouble to provide to you.
  • Log odds – is an alternate way of expressing probabilities, which simplifies the process of updating them with new evidence. Unfortunately, it is difficult to convert between probability and log odds. The log odds is the log of the odds ratio.
  • Magical categories - an English word which, although it sounds simple - hey, it's just one word, right? - is actually not simple, and furthermore, may be applied in a complicated way that drags in other considerations. Physical brains are not powerful enough to search all possibilities; we have to cut down the search space to possibilities that are likely to be good. Most of the "obviously bad" methods - those that would end up violating our other values, and so ranking very low in our preference ordering - do not even occur to us as possibilities.
  • Making Beliefs Pay Rent - Every question of belief should flow from a question of anticipation, and that question of anticipation should be the centre of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
  • Many-worlds interpretation - uses decoherence to explain how the universe splits into many separate branches, each of which looks like it came out of a random collapse.
  • Map and territory- Less confusing than saying "belief and reality", "map and territory" reminds us that a map of Texas is not the same thing as Texas itself. Saying "map" also dispenses with possible meanings of "belief" apart from "representations of some part of reality". Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".
  • Meme lineage – is a set of beliefs, attitudes, and practices that all share a clear common origin point. This concept also emphasizes the means of transmission of the beliefs in question. If a belief is part of a meme lineage that transmits for primarily social reasons, it may be discounted for purposes of the modesty argument.
  • Memorization - is what you're doing when you cram for a university exam. It's not understanding.
  • Modesty - admitting or boasting of flaws so as to not create perceptions of arrogance. Not to be confused with humility.
  • Most of science is actually done by induction - To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.
  • Most people's beliefs aren’t worth considering - Sturgeon's Law says that as a general rule, 90% of everything is garbage. Even if it is the case that 90% of everything produced by any field is garbage, that does not mean one can dismiss the 10% that is quality work. Instead, it is important to engage with that 10%, and use that as the standard of quality.
  • Nash equilibrium - a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged.
  • Newcomb's problem - In Newcomb's problem, a superintelligence called Omega shows you two boxes, A and B, and offers you the choice of taking only box A, or both boxes A and B. Omega has put $1,000 in box B. If Omega thinks you will take box A only, he has put $1,000,000 in it; otherwise, he has left it empty.
  • Nonapples - a proposed object, tool, technique, or theory which is defined only as being not like a specific, existent example of said categories. It is a type of overly-general prescription which, while of little utility, can seem useful. It involves disguising a shallow criticism as a solution, often in such a way as to make it look profound. For instance, suppose someone says, "We don't need war, we need non-violent conflict resolution." In this way a shallow criticism (war is bad) is disguised as a solution (non-violent conflict resolution, i.e., nonwar). This person is selling nonapples because "non-violent conflict resolution" isn't a method of resolving conflict nonviolently. Rather, it is a description of all conceivable methods of non-violent conflict resolution, the vast majority of which are incoherent and/or ineffective.
  • Noncentral fallacy - A rhetorical move often used in political, philosophical, and cultural arguments. "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."
  • Not technically a lie – a statement that is literally true, but causes the listener to attain false beliefs by performing incorrect inference.
  • Occam's razor - principle commonly stated as "Entities must not be multiplied beyond necessity". When several theories are able to explain the same observations, Occam's razor suggests the simpler one is preferable.
  • Odds ratio - an alternate way of expressing probabilities, which simplifies the process of updating them with new evidence. The odds ratio of A is P(A)/P(¬A).
  • Omega - A hypothetical super-intelligent being used in philosophical problems. Omega is most commonly used as the predictor in Newcomb's problem. In its role as predictor, Omega's predictions are almost certainly correct. In some thought experiments, Omega is also taken to be super-powerful. Omega can be seen as analogous to Laplace's demon, or as the closest approximation to the Demon capable of existing in our universe.
  • Oops - Theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms. It is better to say oops quickly when you realize a mistake. The alternative is stretching out the battle with yourself over years.
  • Outside view - Taking the outside view (another name for reference class forecasting) means using an estimate based on a class of roughly similar previous cases, rather than trying to visualize the details of a process. For example, estimating the completion time of a programming project based on how long similar projects have taken in the past, rather than by drawing up a graph of tasks and their expected completion times.
  • Overcoming Bias - is a group blog on the systemic mistakes humans make, and how we can possibly correct them.
  • Paperclip maximizer – is an AI that has been created to maximize the number of paperclips in the universe. It is a hypothetical unfriendly artificial intelligence.
  • Pascal's mugging – is a thought-experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighed by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards.
  • Password - The answer you guess instead of actually understanding the problem.
  • Philosophical zombie - a hypothetical entity that looks and behaves exactly like a human (often stipulated to be atom-by-atom identical to a human) but is not actually conscious: they are often said to lack qualia or phenomenal consciousness.
  • Phlogiston - the 18th century's answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright "fire" stuff? Why does the wood transform into ash? To both questions, the 18th-century chemists answered, "phlogiston"....and that was it, you see, that was their answer: "Phlogiston." —Fake Causality
  • Possibility - words in natural language carry connotations that may become misleading when the words get applied with technical precision. While it's not technically a lie to say that it's possible to win a lottery, the statement is deceptive. It's much more precise, for communication of the actual fact through connotation, to say that it’s impossible to win the lottery. This is an example of antiprediction.
  • Possible world - is one that is internally consistent, even if it is counterfactual.
  • Prediction market - speculative markets created for the purpose of making predictions. Assets are created whose final cash value is tied to a particular event or parameter. The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter.
  • Priors - refer generically to the beliefs an agent holds regarding a fact, hypothesis or consequence, before being presented with evidence.
  • Probability is in the Mind - Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
  • Probability theory - a field of mathematics which studies random variables and processes.
  • Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality; and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.
  • Rational evidence - the broadest possible sense of evidence, the Bayesian sense. Rational evidence about a hypothesis H is any observation which has a different likelihood depending on whether H holds in reality or not. Rational evidence is distinguished from narrower forms of evidence, such as scientific evidence or legal evidence. For a belief to be scientific, you should be able to do repeatable experiments to verify the belief. For evidence to be admissible in court, it must e.g. be a personal observation rather than hearsay.
  • Rationalist taboo - a technique for fighting muddles in discussions. By prohibiting the use of a certain word and all the words synonymous to it, people are forced to elucidate the specific contextual meaning they want to express, thus removing ambiguity otherwise present in a single word. Mainstream philosophy has a parallel procedure called "unpacking" where doubtful terms need to be expanded out.
  • Rationality and Philosophy - A sequence by lukeprog examining the implications of rationality and cognitive science for philosophical method.
  • Rationality as martial art - A metaphor for rationality as the martial art of mind; training brains in the same fashion as muscles. The metaphor is intended to have complex connotations, rather than being strictly positive. Do modern-day martial arts suffer from being insufficiently tested in realistic fighting, and do attempts at rationality training run into the same problem?
  • Reversal test - a technique for fighting status quo bias in judgments about the preferred value of a continuous parameter. If one deems the change of the parameter in one direction to be undesirable, the reversal test is to check that either the change of that parameter in the opposite direction (away from status quo) is deemed desirable, or that there are strong reasons to expect that the current value of the parameter is (at least locally) the optimal one.
  • Reductionism - a disbelief that the higher levels of simplified multilevel models are out there in the territory, or that concepts constructed by the mind in themselves play a role in the behavior of reality. This doesn't contradict the notion that the concepts used in simplified multilevel models refer to actual clusters of configurations of reality.
  • Religion - a complex group of human activities, involving tribal affiliation, belief in belief, supernatural claims, and a range of shared group practices such as worship meetings, rites of passage, etc.
  • Reversed stupidity is not intelligence - "The world's greatest fool may say the Sun is shining, but that doesn't make it dark out."
  • Science - a method for developing true beliefs about the world. It works by developing hypotheses about the world, creating experiments that would allow the hypotheses to be tested, and running the experiments. By having people publish their falsifiable predictions and their experimental results, science protects itself from individuals deceiving themselves or others.
  • Scoring rule - a measure of the performance of probabilistic predictions made under uncertainty (a Brier and log score sketch appears after this list).
  • Seeing with Fresh Eyes - A sequence on the incredibly difficult feat of getting your brain to actually think about something, instead of instantly stopping on the first thought that comes to mind.
  • Semantic stopsign – a meaningless generic explanation that creates an illusion of giving an answer, without actually explaining anything.
  • Shannon information - the Shannon entropy is a measure of the average information content one is missing when one does not know the value of a random variable (a short entropy calculation appears after this list).
  • Shut up and multiply - the ability to trust the math even when it feels wrong.
  • Signaling - "a method of conveying information among not-necessarily-trustworthy parties by performing an action which is more likely or less costly if the information is true than if it is not true".
  • Solomonoff induction - A formalized version of Occam's razor based on Kolmogorov complexity.
  • Sound argument - an argument that is valid and whose premises are all true. In other words, the premises are true and the conclusion necessarily follows from them, making the conclusion true as well.
  • Spaced repetition - a technique for building long-term knowledge efficiently. It works by showing you a flash card just before a computer model predicts you will have forgotten it. Anki is Less Wrong's spaced repetition software of choice.
  • Statistical bias - "bias" as used in the field of statistics refers to directional error in an estimator. Statistical bias is error you cannot correct by repeating the experiment many times and averaging together the results (a simulation demonstrating this appears after this list).
  • Steel man - the strongest possible form of an argument you disagree with; the opposite of a straw man.
  • Superstimulus - an exaggerated version of a stimulus to which there is an existing response tendency, or any stimulus that elicits a response more strongly than the stimulus for which it evolved.
  • Surprise - Recognizing a fact that disagrees with your intuition as surprising is an important step in updating your worldview.
  • Sympathetic magic - Humans seem to naturally generate a series of concepts known as sympathetic magic, a host of theories and practices which have certain principles in common, two of which are of overriding importance: the Law of Contagion holds that two things which have interacted, or were once part of a single entity, retain their connection and can exert influence over each other; the Law of Similarity holds that things which are similar or treated the same establish a connection and can affect each other.
  • Tapping Out - The appropriate way to signal that you've said all you wanted to say on a particular topic, and that you're ending your participation in a conversation lest you start saying things that are less worthwhile. It doesn't mean accepting defeat or claiming victory and it doesn't mean you get the last word. It just means that you don't expect your further comments in a thread to be worthwhile, because you've already made all the points you wanted to, or because you find yourself getting too emotionally invested, or for any other reason you find suitable.
  • Technical explanation - A technical explanation is an explanation of a phenomenon that makes you anticipate certain experiences. A proper technical explanation controls anticipation strictly, weighting your priors and evidence precisely to create the justified amount of uncertainty. Technical explanations are contrasted with verbal explanations, which give the impression of understanding without actually producing the proper expectation.
  • Teleology - The study of things that happen for the sake of their future consequences. The fallacious meaning of it is that events are the result of future events. The non-fallacious meaning is that it is the study of things that happen because of their intended results, where the intention existed in an actual mind in the prior past, and so was causally able to bring about the event by planning and acting.
  • The map is not the territory – the idea that our perception of the world is being generated by our brain and can be considered as a 'map' of reality written in neural patterns. Reality exists outside our mind but we can construct models of this 'territory' based on what we glimpse through our senses.
  • Third option - is a way to break a false dilemma, showing that neither of the suggested solutions is a good idea.
  • Traditional rationality - "Traditional Rationality" refers to the tradition passed down by reading Richard Feynman's "Surely You're Joking", Thomas Kuhn's "The Structure of Scientific Revolutions", Martin Gardner's "Science: Good, Bad, and Bogus", Karl Popper on falsifiability, or other non-technical material on rationality. Traditional Rationality is a very large improvement over nothing at all, and very different from Hollywood rationality; people who grew up on this belief system are definitely fellow travelers, and where most of our recruits come from. But you can do even better by adding math, science, formal epistemic and instrumental rationality, experimental psychology, cognitive science, and deliberate practice - in short, all the technical stuff. There are also some popular tropes of Traditional Rationality that actually seem flawed once you start comparing them to a Bayesian standard - for example, the idea that you ought to give up an idea once definite evidence has been provided against it, but you're allowed to believe until then, if you want to. Contrast this with the stricter idea of there being a certain exact probability which it is correct to assign, continually updated in the light of new evidence.
  • Trivial inconvenience - inconveniences that take few resources to counteract but have a disproportionate impact on people deciding whether to take a course of action.
  • Truth - the correspondence between one's beliefs about reality and reality.
  • Tsuyoku naritai - the will to transcendence. Japanese: "I want to become stronger."
  • Twelve virtues of rationality
    1. Curiosity – the burning itch
    2. Relinquishment – “That which can be destroyed by the truth should be.” -P. C. Hodgell
    3. Lightness – follow the evidence wherever it leads
    4. Evenness – resist selective skepticism; use reason, not rationalization
    5. Argument – do not avoid arguing; strive for exact honesty; fairness does not mean balancing yourself evenly between propositions
    6. Empiricism – knowledge is rooted in empiricism and its fruit is prediction; argue what experiences to anticipate, not which beliefs to profess
    7. Simplicity – is virtuous in belief, design, planning, and justification; ideally: nothing left to take away, not nothing left to add
    8. Humility – take actions, anticipate errors; do not boast of modesty; no one achieves perfection
    9. Perfectionism – seek the answer that is *perfectly* right – do not settle for less
    10. Precision – the narrowest statements slice deepest; don’t walk but dance to the truth
    11. Scholarship – absorb the powers of science
    12. [The void] (the nameless virtue) – “More than anything, you must think of carrying your map through to reflecting the territory.”
  • Understanding - more than just memorization of detached facts; it requires the ability to see the implications across a variety of possible contexts.
  • Universal law - the idea that everything in reality always behaves according to the same uniform physical laws; there are no exceptions and no alternatives.
  • Unsupervised universe - a thought experiment developed to counter undue optimism, not just the sort due to explicit theology, but in particular a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong. It involves imagining a benevolent god, a simulated universe such as Conway's Game of Life, and asking the mathematical question of what would happen according to the standard Life rules given certain initial conditions - so that even God cannot control the answer to the question; although, of course, God always intervenes in the actual Life universe.
  • Valid argument - an argument is valid when its conclusion follows necessarily from its premises, i.e. when it contains no logical fallacies. Unlike a sound argument, a valid argument need not have true premises.
  • Valley of bad rationality - It has been observed that when someone is just starting to learn rationality, they appear to be worse off than they were before. Others, with more experience at rationality, claim that after you learn more about rationality, you will be better off than you were before you started. The period before this improvement is known as "the valley of bad rationality".
  • Wisdom of the crowd – the collective opinion of a group of individuals rather than that of a single expert. A large group's aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning have generally been found to be as good as, and often better than, the answer given by any of the individuals within the group.
  • Words can be wrong – there are many ways that words can be wrong; it is for this reason that we should avoid arguing by definition. Instead, to facilitate communication we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions.
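
For the prediction market entry above, here is a minimal sketch (with made-up prices and probabilities) of how the price of a binary contract can be read as a probability, and how a trader with a different estimate computes expected profit:

```python
# A binary contract pays $1 if the event occurs and $0 otherwise, so its price
# (a made-up 0.63 here) can be read as the market's probability estimate.
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Interpret the price of a binary contract as a probability."""
    return price / payout

def expected_profit(price: float, my_probability: float, payout: float = 1.0) -> float:
    """Expected profit per contract if my own probability estimate is right."""
    return my_probability * payout - price

price = 0.63        # hypothetical market price for a $1-payout contract
my_estimate = 0.70  # hypothetical personal probability

print(implied_probability(price))           # 0.63 -- the market's "prediction"
print(expected_profit(price, my_estimate))  # ~0.07 -- positive, so the contract looks underpriced to me
```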
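For the rational evidence entry, a minimal sketch of the Bayesian criterion: an observation counts as evidence about H only when its likelihood differs depending on whether H is true. The prior and likelihoods below are invented for illustration:

```python
def posterior(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Update P(H) after an observation, via Bayes' theorem."""
    numerator = p_obs_given_h * prior
    denominator = numerator + p_obs_given_not_h * (1 - prior)
    return numerator / denominator

# An observation twice as likely under H as under not-H shifts a 0.5 prior to ~0.667.
print(posterior(prior=0.5, p_obs_given_h=0.8, p_obs_given_not_h=0.4))  # ~0.667

# Equal likelihoods: the observation is not evidence, and the belief does not move.
print(posterior(prior=0.5, p_obs_given_h=0.4, p_obs_given_not_h=0.4))  # 0.5
```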
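For the scoring rule entry, a minimal sketch of two common proper scoring rules, the Brier score and the logarithmic score, applied to made-up forecasts:

```python
import math

# Lower Brier score is better; higher (less negative) log score is better.
def brier_score(probability: float, outcome: int) -> float:
    """Squared error between the forecast probability and the 0/1 outcome."""
    return (probability - outcome) ** 2

def log_score(probability: float, outcome: int) -> float:
    """Log of the probability assigned to what actually happened."""
    return math.log(probability if outcome == 1 else 1 - probability)

# A confident correct forecast scores better than a hedged one...
print(brier_score(0.9, 1), log_score(0.9, 1))  # 0.010, ~-0.105
print(brier_score(0.6, 1), log_score(0.6, 1))  # 0.160, ~-0.511
# ...but a confident wrong forecast is punished severely.
print(brier_score(0.9, 0), log_score(0.9, 0))  # 0.810, ~-2.303
```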
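For the Shannon information entry, a short sketch computing Shannon entropy (average missing information, in bits) for a few example distributions:

```python
import math

def entropy(probabilities):
    """H(X) = -sum p * log2(p), ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))                # 1.0 bit  -- a fair coin
print(entropy([0.99, 0.01]))              # ~0.08 bits -- an almost-certain outcome carries little information
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits -- four equally likely outcomes
```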
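For the statistical bias entry, a sketch of the classic example: the variance estimator that divides by n is biased low, and averaging many repetitions of the experiment does not remove the directional error. The distribution and sample sizes below are arbitrary choices for illustration:

```python
import random

random.seed(0)

def naive_variance(xs):
    """Divide by n: biased low by a factor of (n - 1) / n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def corrected_variance(xs):
    """Divide by n - 1 (Bessel's correction): unbiased."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Draw many small samples from a standard normal (true variance = 1.0).
n, trials = 5, 50_000
samples = [[random.gauss(0, 1) for _ in range(n)] for _ in range(trials)]

print(sum(naive_variance(s) for s in samples) / trials)      # ~0.8: the error persists after averaging
print(sum(corrected_variance(s) for s in samples) / trials)  # ~1.0
```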

Instrumental

Barriers, biases, fallacies, impediments and problems

  • Akrasia - the state of acting against one's better judgment. Note that, for example, if you are procrastinating because it's not in your best interest to complete the task you are delaying, it is not a case of akrasia.
  • Alief - an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot.
  • Effort Shock - the unpleasant discovery of how hard it is to accomplish something.

Techniques/Concepts

  • Ambient decision theory - a variant of updateless decision theory that uses first-order logic instead of a mathematical intuition module (MIM), emphasizing the way an agent can control which mathematical structure a fixed definition defines, an aspect of UDT separate from its own emphasis on not making the mistake of updating away things one can still acausally control.
  • Ask, Guess and Tell culture -
    • The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".
    • The two basic rules of Guess Culture: 1) Ask for things if, and *only* if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".
    • The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.
  • Burch's law – “I think people should have a right to be stupid and, if they have that right, the market's going to respond by supplying as much stupidity as can be sold.” —Greg Burch. A corollary of Burch's law is that any bias should be regarded as a potential vulnerability whereby the market can trick one into buying something one doesn't really want.
  • Challenging the Difficult - A sequence on how to do things that are difficult or "impossible".
  • Cognitive style - certain cognitive styles might tend to produce more accurate results. A common distinction between cognitive styles is that of foxes vs. hedgehogs: hedgehogs view the world through the lens of a single defining idea, while foxes draw on a wide variety of experiences and do not believe the world can be boiled down to a single idea. Foxes tend to be better calibrated and more accurate.
  • Consequentialism - the ethical theory that people should choose the action that will result in the best outcome.
  • Crocker's rules - By declaring commitment to Crocker's rules, one authorizes other debaters to optimize their messages for information, even when this entails that emotional feelings will be disregarded. This means that you have accepted full responsibility for the operation of your own mind, so that if you're offended, it's your own fault.
  • Dark arts - refers to rhetorical techniques crafted to exploit human cognitive biases in order to persuade, deceive, or otherwise manipulate a person into irrationally accepting beliefs perpetuated by the practitioner of the Arts. Use of the dark arts is especially common in sales and similar situations (known as hard sell in the sales business) and promotion of political and religious views.
  • Egalitarianism - the idea that everyone should be considered equal. Equal in merit, equal in opportunity, equal in morality, and equal in achievement. Dismissing egalitarianism is not opposed to humility, even though from the signaling perspective it seems to be opposed to modesty.
  • Expected utility - the expected value in terms of the utility produced by an action. It is the sum of the utility of each of its possible consequences, individually weighted by their respective probability of occurrence. A rational decision maker will, when presented with a choice, take the action with the greatest expected utility (a short calculation appears after this list).
  • Explaining vs. explaining away – Explaining something does not subtract from its beauty. It in fact heightens it. Through understanding it, you gain greater awareness of it. Through understanding it, you are more likely to notice its similarities and interrelationships with other things. Through understanding it, you become able to see it not only on one level, but on multiple levels. In regards to the delusions which people are emotionally attached to, that which can be destroyed by the truth should be.
  • Fuzzies - A hypothetical measurement unit for "warm fuzzy feeling" one gets from believing that one has done good. Unlike utils, fuzzies can be earned through psychological tricks without regard for efficiency. For this reason, it may be a good idea to separate the concerns for actually doing good, for which one might need to shut up and multiply, and for earning fuzzies, to get psychological comfort.
  • Game theory - attempts to mathematically model interactions between individuals (a minimal example appears after this list).
  • Generalizing from One Example - an incorrect generalisation when you only have direct first-person knowledge of one mind, psyche or social circle and you treat it as typical even in the face of contrary evidence.
  • Goodhart’s law - states that once a certain indicator of success is made a target of a social or economic policy, it will lose the information content that would qualify it to play such a role. People and institutions try to achieve their explicitly stated targets in the easiest way possible, often obeying the letter of the law. This is often done in a way that the designers of the law did not anticipate or want. For example, Soviet factories that were given targets based on the number of nails produced many tiny useless nails, and when given targets based on weight produced a few giant nails.
  • Hedonism - refers to a set of philosophies which hold that the highest goal is to maximize pleasure, or more precisely pleasure minus pain.
  • Humans Are Not Automatically Strategic - most courses of action are extremely ineffective, and most of the time there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective. When this is coupled with the fact that people tend to spend far less effort on planning how to reach a goal than on simply trying to achieve it, you end up with the conclusion that humans are not automatically strategic.
  • Human universal - Donald E. Brown has compiled a list of over a hundred human universals - traits found in every culture ever studied, most of them so universal that anthropologists don't even bother to note them explicitly.
  • Instrumental value - a value pursued for the purpose of achieving other values. Values which are pursued for their own sake are called terminal values.
  • Intellectual roles - Group rationality may be improved when members of the group take on specific intellectual roles. While these roles may be incomplete on their own, each embodies an aspect of proper rationality. If certain roles are biased against, purposefully adopting them might reduce bias.
  • Lonely Dissenters suffer social disapproval, but are required - Asch's conformity experiment showed that the presence of a single dissenter tremendously reduced the incidence of "conforming" wrong answers.
  • Loss Aversion - is risk aversion's evil twin. A loss-averse agent tends to avoid uncertain gambles, not because every unit of money brings him a bit less utility, but because he weighs losses more heavily than gains, always treating his current level of money as somehow special.
  • Luminosity - reflective awareness. A luminous mental state is one that you have and know that you have. It could be an emotion, a belief or alief, a disposition, a quale, a memory - anything that might happen or be stored in your brain. What's going on in your head?
  • Marginally zero-sum game (also known as 'arms race') - a zero-sum game where the efforts of each player not only give them a benefit at the expense of the others, but also decrease the efficacy of everyone's past and future actions, thus making everyone's actions extremely inefficient in the limit.
  • Moral Foundations theory - the theory that all moral rules in all human cultures appeal to six moral foundations: care/harm, fairness/cheating, liberty/oppression, loyalty/betrayal, authority/subversion, and sanctity/degradation. This makes other people's moralities easier to understand, and is an interesting lens through which to examine your own.
  • Moral uncertainty – uncertainty about how to act given the diversity of moral doctrines. Moral uncertainty includes a level of uncertainty above the more usual uncertainty of what to do given incomplete information, since it deals also with uncertainty about which moral theory is right. Even with complete information about the world this kind of uncertainty would still remain.
  • Paranoid debating - a group estimation game in which one player, unknown to the others, tries to subvert the group estimate.
  • Politics as charity: in terms of expected value, altruism is a reasonable motivator for voting (as opposed to common motivators like "wanting to be heard").
  • Prediction - a statement or claim that a particular event will occur in the future in more certain terms than a forecast.
  • Privileging the question - questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. Examples are: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed? The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
  • Radical honesty - a communication technique proposed by Brad Blanton in which discussion partners are not permitted to lie or deceive at all. Rather than being designed to enhance group epistemic rationality, radical honesty is designed to reduce stress and remove the layers of deceit that burden much of discourse.
  • Reflective decision theory - a term occasionally used to refer to a decision theory that would allow an agent to take actions in a way that does not trigger regret. This regret is conceptualized, according to the Causal Decision Theory, as a Reflective inconsistency, a divergence between the agent who took the action and the same agent reflecting upon it after.
  • Schelling point – is a solution that people will tend to use in the absence of communication, because it seems natural, special, or relevant to them.
  • Schelling fences and slippery slopes – a slippery slope is something that affects people's willingness or ability to oppose future policies. Slippery slopes can sometimes be avoided by establishing a "Schelling fence" - a Schelling point that the various interest groups involved - or yourself across different values and times - make a credible precommitment to defend.
  • Something to protect - The Art must have a purpose other than itself, or it collapses into infinite recursion.
  • Status - Real or perceived relative measure of social standing, which is a function of both resource control and how one is viewed by others.
  • Take joy in the merely real – If you believe that science coming to know about something places it into the dull catalogue of common things, then you're going to be disappointed in pretty much everything eventually—either it will turn out not to exist, or even worse, it will turn out to be real. Another way to think about it is that if the magical and mythical were commonplace they would be merely real. If dragons were common, but zebras were a rare legendary creature, then there's a certain sort of person who would ignore dragons, who would never bother to look at dragons, and chase after rumors of zebras. The grass is always greener on the other side of reality. If we cannot take joy in the merely real, our lives shall be empty indeed.
  • The Science of Winning at Life - A sequence by lukeprog that summarizes scientifically-backed advice for "winning" at everyday life: in one's productivity, in one's relationships, in one's emotions, etc. Each post concludes with footnotes and a long list of references from the academic literature.
  • Timeless decision theory - a decision theory, which in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. This theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving in a manner that we would intuitively label as rational.
  • Unfriendly artificial intelligence - is an artificial general intelligence capable of causing great harm to humanity, and having goals that make it useful for the AI to do so. The AI's goals don't need to be antagonistic to humanity's goals for it to be Unfriendly; there are strong reasons to expect that almost any powerful AGI not explicitly programmed to be benevolent to humans is lethal.
  • Updateless decision theory – a decision theory in which we give up the idea of doing Bayesian reasoning to obtain a posterior distribution etc. and instead just choose the action (or more generally, the probability distribution over actions) that will maximize the unconditional expected utility.
  • Ugh field - Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have. We call it an "ugh field". The ugh field forms a self-shadowing blind spot covering an area desperately in need of optimization.
  • Utilitarianism - A moral philosophy that says that what matters is the sum of everyone's welfare, or the "greatest good for the greatest number".
  • Utility - how much a certain outcome satisfies an agent’s preferences.
  • Utility function - assigns numerical values ("utilities") to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities. These do not work very well in practice for individual humans.
  • Wanting and liking - The reward system consists of three major components:
    • Liking: The 'hedonic impact' of reward, comprised of (1) neural processes that may or may not be conscious and (2) the conscious experience of pleasure.
    • Wanting: Motivation for reward, comprised of (1) processes of 'incentive salience' that may or may not be conscious and (2) conscious desires.
    • Learning: Associations, representations, and predictions about future rewards, comprised of (1) explicit predictions and (2) implicit knowledge and associative conditioning (e.g. Pavlovian associations).
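
For the expected utility entry above, a minimal sketch with invented probabilities and utilities, showing how an agent would pick the action with the highest expected utility:

```python
def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(probability * utility for probability, utility in outcomes)

# (probability, utility) pairs for each hypothetical action
safe_action = [(1.0, 50)]                # a guaranteed modest payoff
risky_action = [(0.6, 100), (0.4, -20)]  # a gamble with a larger upside

print(expected_utility(safe_action))   # 50.0
print(expected_utility(risky_action))  # 52.0 -- the higher expected utility, under these made-up numbers
```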
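For the game theory entry, a minimal sketch of the Prisoner's Dilemma with conventional example payoffs, checking that defection is each player's best response regardless of the other's move:

```python
# (row player's move, column player's move) -> (row payoff, column payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """The row player's payoff-maximizing move against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect -- defection dominates, though mutual cooperation pays more
```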

Positions

  • Beliefs require observations - To form accurate beliefs about something, you really do have to observe it. This can be viewed as a special case of the second law of thermodynamics, in fact, since "knowledge" is correlation of belief with reality, which is mutual information, which is a form of negentropy.
  • Complexity of value - the thesis that human values have high Kolmogorov complexity and so cannot be summed up or compressed into a few simple rules. It includes the idea of fragility of value which is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would now consider as unacceptable.
  • Egan's law - "It all adds up to normality." — Greg Egan. The purpose of a theory is to add up to observed reality, rather than something else. Science sets out to answer the question "What adds up to normality?" and the answer turns out to be "Quantum mechanics adds up to normality." A weaker extension of this principle applies to ethical and meta-ethical debates, which generally ought to end up explaining why you shouldn't eat babies, rather than why you should.
  • Emotion - Contrary to the stereotype, rationality doesn't mean denying emotion. When emotion is appropriate to the reality of the situation, it should be embraced; only when emotion isn't appropriate should it be suppressed.
  • Futility of chaos - a complex of related ideas having to do with the impossibility of generating useful work from entropy — a position which argues against ideas such as: that our artistic creativity stems from the noisiness of human neurons; that randomized algorithms can exhibit performance inherently superior to deterministic algorithms; and that the human brain is a chaotic system and this explains its power (non-chaotic systems cannot exhibit intelligence).
  • General knowledge - Interdisciplinary, generally applicable knowledge is rarely taught explicitly. Yet it's important to have at least basic knowledge of many areas (as opposed to deep narrowly specialized knowledge), and to apply it to thinking about everything.
  • Hope - persisting in clinging to a hope may be disastrous. Be ready to admit you lost, and update on the data that says you did.
  • Humility – “To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.” —Twelve Virtues of Rationality Not to be confused with social modesty, or motivated skepticism (aka disconfirmation bias).
  • I don't know - in real life, you are constantly making decisions under uncertainty: the null plan is still a plan, refusing to choose is itself a choice, and by your choices, you implicitly take bets at some odds, whether or not you explicitly conceive of yourself as doing so.
  • Litany of Gendlin – “What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it.” —Eugene Gendlin
  • Litany of Tarski – “If the box contains a diamond, I desire to believe that the box contains a diamond; If the box does not contain a diamond, I desire to believe that the box does not contain a diamond; Let me not become attached to beliefs I may not want.” —The Meditation on Curiosity
  • Lottery - A tax on people who are bad at math. Also, a waste of hope. You will not win the lottery.
  • Magic - What seems to humans like a simple explanation, sometimes isn't at all. In our own naturalistic, reductionist universe, there is always a simpler explanation. Any complicated thing that happens, happens because there is some physical mechanism behind it, even if you don't know the mechanism yourself (which is most of the time). There is no magic.
  • Modesty argument - the claim that when two or more rational agents have common knowledge of a disagreement over the likelihood of an issue of simple fact, they should each adjust their probability estimates in the direction of the others'. This process should continue until the two agents are in full agreement. Inspired by Aumann's agreement theorem.
  • No safe defense - Authorities can be trusted exactly as much as a rational evaluation of the evidence deems them trustworthy, no more and no less. There's no one you can trust absolutely; the full force of your skepticism must be applied to everything.
  • Offense - It is hypothesized that the emotion of offense appears when one perceives an attempt to gain status.
  • Slowness of evolution - the tremendously slow timescale of evolution, especially for creating new complex machinery (as opposed to selecting on existing variance), is why the behavior of evolved organisms is often better interpreted in terms of what did in fact work yesterday, rather than what will work in the future.
  • Stupidity of evolution - Evolution can only access a very limited area in the design space, and can only search for the new designs very slowly, for a variety of reasons. The wonder of evolution is not how intelligently it works, but that an accidentally occurring optimizer without a brain works at all.
Comments

I would not call this rudimentary! This is excellent. I'll be using this.

Didn't someone also do this for each post in the sequences a while back?

Do you mean the article summaries?

A very good overview. I think this could be made into a wiki overview page of its own.

One nitpick: I'm not sure whether these are 'lesswrong topics' which I'd more associate with general areas of interest being discussed on this forum (see What topics are appropriate for LessWrong?). I'd call your overview e.g. 'aspects of rationality' (though rationality is overused I think here it applies). Or 'lesswrong canon on rationality'. Or maybe 'categorization of the sequences'. Do you think this could express your intention?

I made this into a wiki page that's called Less Wrong Canon on Rationality