DPiepgrass

Worried that typical commenters at LW care way less than I expected about good epistemic practice. Hoping I'm wrong.

Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.

Looking for someone similar to myself to be my new best friend:

❖ Close friendship, preferably sharing a house
❖ Rationalist-appreciating epistemology; a love of accuracy and precision to the extent it is useful or important (but not excessively pedantic)
❖ Geeky, curious, and interested in improving the world
❖ Liberal/humanist values, such as a dislike of extreme inequality based on minor or irrelevant differences in starting points, and a liking for ideas that may lead to solving such inequality. (OTOH, minor inequalities are certainly necessary and acceptable, and a high floor is clearly better than a low ceiling: an "equality" in which all are impoverished would be very bad)
❖ A love of freedom
❖ Utilitarian/consequentialist-leaning; preferably negative utilitarian
❖ High openness to experience: tolerance of ambiguity, low dogmatism, unconventionality, and, again, intellectual curiosity
❖ I'm a nudist and would like someone who can participate at least sometimes
❖ Agnostic, atheist, or at least feeling doubts


Comments

I guess you could try it and see if you reach wrong conclusions, but that only works if your mind isn't so wired up with shortcuts that you cannot (or are much less likely to) discover your mistakes.

I've been puzzling over why EY's efforts to show the dangers of AGI (most notably this) have been unconvincing enough that other experts (e.g. Paul Christiano) and, in my experience, typical rationalists have not adopted p(doom) > 90% like EY, or even > 50%. I was unconvinced because he simply didn't present a chain of reasoning that shows what he's trying to show. Rational thinking is a lot like math: a single mistake in a chain of reasoning can invalidate the whole conclusion. Failure to generate a complete chain of reasoning is a sign that the thinking isn't rational. And failure to communicate a complete chain of reasoning, as in this case, should fail to convince people (unless the audience can mentally reconstruct the missing information).
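To make the fragility concrete (a toy calculation of mine, not one EY gives anywhere): if a conclusion rests on a conjunction of $n$ independent steps and you hold each step with probability $p$, your confidence in the conclusion should be at most

$$P(\text{conclusion}) \leq p^n, \quad \text{e.g.} \quad 0.9^{10} \approx 0.35,$$

so ten steps held at 90% confidence each justify barely one-in-three confidence in the conclusion, and a single step that is flatly wrong drives the whole product toward zero. That is why a missing or broken link in the chain matters so much.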

I read all six "tomes" of Rationality: A-Z and I don't recall EY ever writing about the importance of having a solid and complete chain (or graph) of reasoning―but here is a post about the value of shortcuts (if you can pardon the strawman; I'm using the word "shortcut" as a shortcut). There's no denying that shortcuts can have value, but only if they lead to winning, which for most of us, including EY, includes having true beliefs, which in turn requires an ability to generate solid and complete chains of reasoning. If you used shortcuts to generate such a chain, that's great insofar as it generates correct results, but mightn't shortcuts make your reasoning less reliable than it first appears? When it comes to AI safety, EY's most important cause, I've seen a shortcut-laden approach (in his communication, if not his reasoning) and wasn't convinced, so I'd like to see him take it slower and give us a more rigorous and clear case for AI doom ― one that either clearly justifies a very high near-term catastrophic risk assessment, or admits that it doesn't.

I think EY must have a mental system that is far above average, but from afar it seems not good enough.

On the other hand, I've learned a lot about rationality from EY that I didn't already know, and perhaps many of the ideas he came up with are a product of this exact process of identifying necessary cognitive work and casting off the rest. Notable if true! But in my field I, too, have had various unique ideas that no one else ever presented, and I came at them from a different angle: I'm always looking for the (subjectively) "best" solutions to problems. Early in my career, getting the work done was never enough; I wanted my code to be elegant and beautiful and fast and generalized too. It seems like I'd never accept the first version; I'd always find flaws and change it immediately after, maybe more than once. My approach (which I guess earns the boring label 'perfectionism') wasn't fast, but I think it built up a lot of good intuitions that many other developers just don't have. Likewise in life in general, I developed nuanced thinking and rationalist-like intuitions without ever hearing about rationalism. So I am fairly satisfied with plain-old perfectionism―reaching conclusions faster would've been great, but I'm uncertain whether I could've or would've found a process for doing that such that my conclusions would've been as correct. (I also recommend always thinking a lot, but maybe that goes without saying around here)

I'm reminded of a great video about two ways of thinking about math problems: a slick way that finds a generalized solution, and a more meandering, exploratory way that looks at many specific cases and examples. The slick solutions tend to get way more attention, but slower processes are way more common when no one is looking, and famous early mathematicians didn't shy away from long and even tedious work. I feel like EY's saying "make it slick and fast!" and, to be fair, I probably should've worked harder at developing Slick Thinking, but my slow, non-slick methods also worked pretty well.

Speaking for myself: I don't prefer to be alone or tend to hide information about myself. Quite the opposite; I like to have company but rare is the company that likes to have me, and I like sharing, though it's rare that someone cares to hear it. It's true that I "try to be independent" and "form my own opinions", but I think that part of your paragraph is easy to overlook because it doesn't sound like what the word "avoidant" ought to mean. (And my philosophy is that people with good epistemics tend to reach similar conclusions, so our independence doesn't necessarily imply a tendency to end up alone in our own school of thought, let alone prefer it that way.)

Now if I were in Scott's position? I find social media enemies terrifying and would want to hide as much as possible from them. And Scott's desire for his name not to be broadcast? He's explained it as related to his profession, and I don't see why I should disbelieve that. Yet Scott also schedules regular meetups where strangers can come, which doesn't sound "avoidant". More broadly, labeling famous-ish people who talk frequently online as "avoidant" doesn't sound right.

Also, "schizoid" as in schizophrenia? By reputation, rationalists are more likely to be autistic, which tends not to co-occur with schizophrenia, and the ACX survey is correlated with this reputation. (Could say more but I think this suffices.)

Scott tried hard to avoid getting into the race/IQ controversy. Like, in the private email LGS shared, Scott states "I will appreciate if you NEVER TELL ANYONE I SAID THIS". Isn't this the opposite of "it's self-evidently good for the truth to be known"? And yes there's a SSC/ACX community too (not "rationalist" necessarily), but Metz wasn't talking about the community there.

My opinion as a rationalist is that I'd like the whole race/IQ issue to f**k off so we don't have to talk or think about it, but certain people like to misrepresent Scott and make unreasonable claims, which ticks me off, so I counterargue, just as I pushed a video by Shaun once when I thought somebody on ACX sounded a bit racist to me on the race/IQ topic.

Scott and I are consequentialists. As such, it's not self-evidently good for the truth to be known. I think some taboos should be broached, but not "self-evidently" and often not by us. But if people start making BS arguments against people I like? I will call BS on that, even if doing so involves some discussion of the taboo topic. But I didn't wake up this morning having any interest in doing that.

Huh? Who defines racism as cognitive bias? I've never seen that before, so expecting Scott in particular to define it as such seems like special pleading.

What would your definition be, and why would it be better?

Scott endorses this definition:

Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.

Setting aside that it says "irrational feeling" instead of "cognitive bias", how does this "tr[y] to define racism out of existence"?

I think about it differently. When Scott does not support an idea, but discusses or allows discussion of it, it's not "making space for ideas" as much as "making space for reasonable people who have ideas, even when they are wrong". And I think making space for people to be wrong sometimes is good, important and necessary. According to his official (but confusing IMO) rules, saying untrue things is a strike against you, but insufficient for a ban.

Also, strong upvote because I can't imagine why this question should score negatively.

Scott had every opportunity to say "actually, I disagree with Murray about..." but he didn't, because he agrees with Murray

[citation needed] for those last four words. In the paragraph before the one frankybegs quoted, Scott said:

Some people wrote me to complain that I handled this in a cowardly way - I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I've never read it.

Having never read The Bell Curve, it would be uncharacteristic of him to say "I disagree with Murray about [things in The Bell Curve]", don't you think?

Strong disagree, based on the "evidence" you posted for this elsewhere in this thread. Half of it is some dude on Twitter asserting that "Scott is a racist eugenics supporter" and retweeting other people's inflammatory rewordings of Scott; the other half is a private email from Scott saying things like

HBD is probably partially correct or at least very non-provably not-correct

It seems gratuitous for you to argue the point with such biased commentary. And what Scott actually says sounds like his judgement of ... I'm not quite sure what, since HBD is left without a definition, but it sounds a lot like the evidence he mentioned years later from 

(yes, I found the links I couldn't find earlier thanks to a quote by frankybegs from this post which―I was mistaken!―does mention Murray and The Bell Curve because he is responding to Cade Metz and other critics).

This sounds like his usual "learn to love scientific consensus" stance, but it appears you refuse to acknowledge a difference between Scott privately deferring to expert opinion, on one hand, and having "Charles Murray posters on his bedroom wall".

Almost the sum total of my knowledge of Murray's book comes from Shaun's rebuttal of it, which sounded quite reasonable to me. But Shaun argues that specific people are biased and incorrect, such as Richard Lynn and (duh) Charles Murray. Not only does Scott never cite these people, but what he said about The Bell Curve was "I've never read it". And why should he? Murray isn't even a geneticist!

So it seems the secret evidence matches the public evidence, does not show that "Scott thinks very highly of Murray", doesn't show that he ever did, doesn't show that he is "aligned" with Murray etc. How can Scott be a Murray fanboy without even reading Murray?

You saw this before:

I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolveable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it.

You may just assume Scott is lying (or, as you put it, "giving a maximally positive spin on his own beliefs"), but again I think you are conflating. To suppose experts in a field have expertise in that field isn't merely different from "aligning oneself" with a divisive conservative political scientist whose book one has never read ― it's really obviously different; how are you not getting this??

he definitely thinks this

He definitely thinks what, exactly?

Anyway, the situation is like: X is writing a summary about author Y who has written 100 books, but pretty much ignores all those books in favor of digging up some dirt on what Y thinks about a political topic Z that Y almost never discusses (and then instead of actually mentioning any of that dirt, X says Y "aligned himself" with a famously controversial author on Z.)

It's really weird to go HOW DARE YOU when someone says something you know is true about you, and I was always unnerved by this reaction from Scott's defenders

It's not true though. Perhaps what he believes is similar to what Murray believes, but he did not "align himself" with Murray on race/IQ. Like, if an author in Alabama reads the scientific literature and quietly comes to a conclusion that humans cause global warming, it's wrong for the Alabama News to describe this as "author has a popular blog, and he has aligned himself with Al Gore and Greta Thunberg!" (which would tend to encourage Alabama folks to get out their pitchforks 😉) (Edit: to be clear, I've read SSC/ACX for years and the one and only time I saw Scott discuss race+IQ, he linked to two scientific papers, didn't mention Murray/Bell Curve, and I don't think it was the main focus of the post―which makes it hard to find it again.)


I agree, except for the last statement. I've found that talking to certain people with bad epistemology about epistemic concepts will, instead of teaching them concepts, teach them a rhetorical trick that (soon afterward) they will try to use against you as a "gotcha" (related)... as a result of them having a soldier mindset and knowing you have a different political opinion.

While I expect most of them won't ever mimic rationalists well, (i) mimicry per se doesn't seem important and (ii) I think a small fraction of people (tho not Metz) do end up developing a "rationalist skin" ― they talk like rationalists, but seem to be in it mostly for gotchas, snipes and sophistry.
