Screwtape

I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.

Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games. 

I recognize that last description might fit more than one person.

Sequences

The LessWrong Community Census
Meetup Tips
Meetup in a box

Wiki Contributions

Comments

I think you should go ahead and ask people to sign things. I've done it before and it went great, and the resulting book is a great memento. Asking them for their favourite Sequences post is a good conversation starter right there.

Welcome to the US!

You are correct I added an extra 0, writing (0.3 · 0.005) + (0.001 · 0.995) when I meant (0.3 · 0.005) + (0.01 · 0.995). That's a transcription error, thank you for catching it. 

I'm not sure how you're getting 14.85% or 60.1% though? I just checked, and I think those numbers do wind up at ~13.1%, not 14.85%. 
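For concreteness, here's the calculation with the corrected numbers, assuming the standard form of Bayes' theorem (the 0.005 prior, 0.3 sensitivity, and 0.01 false positive rate are the figures from the comment above):

```python
# P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A))
prior = 0.005           # P(A): base rate
sensitivity = 0.3       # P(B|A): true positive rate
false_positive = 0.01   # P(B|~A): the corrected 0.01, not 0.001

numerator = sensitivity * prior
denominator = numerator + false_positive * (1 - prior)
posterior = numerator / denominator
print(f"{posterior:.1%}")  # → 13.1%
```

The denominator comes out to 0.01145, and 0.0015 / 0.01145 ≈ 13.1%, matching the figure above.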

Hrm. Maybe the slip is accidentally switching whether I'm looking for "do aliens abduct people, given Bob experienced being abducted" vs "was Bob's abduction real, given Bob experienced being abducted."

But if Bob's abduction was real, then aliens do abduct people. It would still count even if his was the only actual abduction in the history of the human race. Seems like this isn't the source of the math not working?

Thank you for checking my math and setup! This is my first time trying Bayes in front of an audience.

Yeah, I think I described P(A|B) when I was trying to describe the sensitivity; you're right that "aliens actually abduct people, given Bob experienced aliens abducting him" is P(A|B). It's possible I need to retract the whole section and example.

Your description of P(B|A) confuses me though. If I think through the standard Bayes mammogram problem, I don't set P(B|A) as P("A specific woman gets a positive test result, given some people get a positive test") and have to figure out what the selection procedure is that the doctor uses to choose people to test. We're looking for P("A specific woman gets a positive test result, given she actually has cancer.") I think Bob gets to start knowing he experienced getting abducted, the same way the woman in the mammogram problem gets to start knowing she got a positive test. He then tries to figure out whether the abduction was aliens or some kind of hallucination, the same way the woman (or her doctor) in the mammogram problem tries to figure out whether the test result is a true positive or a false positive. 

Hrm. So, in the mammogram problem, if sometimes the machine malfunctions in a way that gives a positive result whether or not the woman actually has cancer, then some of the time the woman will coincidentally happen to have cancer when the machine malfunctioned. I think that's just supposed to be counted as part of the probability that a woman with cancer gets a positive test, i.e. the sensitivity? Translating back to Bob's circumstances: aliens are real, but Bob hallucinated?

Intuitively it makes sense to me that if someone thinks they got abducted by aliens, it's more likely they're hallucinating than that they actually got abducted by aliens. It's true that aliens actually abducting people wouldn't mean people stop having hallucinations. But adding P(B|¬A) - the rate of false positives - to P(B|A) - the rate of true positives - seems like some kind of weird double counting. What am I misunderstanding here?

I took the point of Sort By Controversial to be that these statements were bad. If they worked (which is the premise of the story) then they would cause a lot of fights and bad feeling. I usually want less fighting and bad feelings.

They might not work. I'm wary of trying too hard though?

I agree adversarial action makes this much worse.

I think Bob and Carla's problem isn't really whether Bob is lying or not. If they knew for an absolute fact Bob wasn't speaking things he knew to be factually untrue, Carla still has to sort through misunderstanding (maybe Bob's talking about a LARP?) and drug use (maybe Bob forgot whether he took LSD the way I forget whether I've had coffee sometimes?) and psychotic breaks. I wouldn't usually count any of those as "lying" in the relevant sense; Bob's wrong, but he's accurately reporting his experiences as best he can. 

I don't have a solution for the group membership case, which I think of as a special case of the reputation problem. I'm trying to point out a couple failure modes; one where you don't realize a bunch of your information actually has a single source and should be counted once, and one where you don't actually get or incorporate reputation information at all.

I'm not an AI safety specialist, but I get the sense that a lot of extra skillsets became useful over the last few years. What kind of positions would be interesting to you?

MIRI was looking for technical writers recently. Robert Miles makes YouTube videos. Someone made the P(Doom) question well known enough to be mentioned in the Senate. I hope there are a few good contract lawyers looking over OpenAI right now. AISafety.Info is a collection of on-ramps, but it also takes ongoing web development and content writing work. Most organizations need operations teams and accountants no matter what they do.

You might also be surprised how much engineering and physics is a passable starting point. Again, this isn't my field, but if you haven't already done so it might be worth reading a couple of recent ML papers and seeing if they make sense to you, or, better yet, whether you can spot an improvement or next step you could jump in and try.

Put your own oxygen mask on though. Especially if you don't have a cunning idea and can't find a way to get started, grab a regular job and get good at that. 

Sorry I don't have a better answer.

I'm not sure I'm following your actual objection. Is your point that this algorithm is wrong and won't update towards the right probabilities even if you keep feeding it new pieces of evidence, that the explanations and numbers for these pieces of evidence don't make sense for the implied story, that you shouldn't try to do explicit probability calculations this way, or some fourth thing?

If this algorithm isn't actually equivalent to Bayes in some way, that would be really useful for someone to point out. At first glance it seems like a simpler (to me anyway) way to express how making updates works, not just on an intuitive "I guess the numbers move that direction?" way but in a way that might not get fooled by e.g. the mammogram example. 
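The algorithm under discussion isn't reproduced in this thread, but assuming it amounts to the odds form of Bayes' rule (multiply the prior odds by a likelihood ratio for each piece of evidence), a quick numerical check on illustrative mammogram-style numbers shows that form agrees with the direct calculation:

```python
# Illustrative numbers only; the specific values are not from the original post.
prior = 0.01        # P(cancer)
sens = 0.8          # P(positive | cancer)
false_pos = 0.096   # P(positive | no cancer)

# Direct Bayes' theorem
direct = sens * prior / (sens * prior + false_pos * (1 - prior))

# Odds form: prior odds times the likelihood ratio, then back to a probability
odds = (prior / (1 - prior)) * (sens / false_pos)
via_odds = odds / (1 + odds)

print(direct, via_odds)  # the two calculations agree
```

If the post's algorithm is something other than this, the equivalence would need to be checked separately, which is exactly the kind of audit being asked for here.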

If these explanations and numbers don't make exact sense for the implied story, that seems fine? "A train is moving from east to west at a uniform speed of 12 m/s, ten kilometers west a second train is moving west to east at a uniform speed of 15 m/s, how far will the first train have traveled when they meet?" is a fine word problem even if that's oversimplified for how trains work. 
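To illustrate, the toy train problem above works out cleanly even with the oversimplified physics:

```python
# The trains close the gap at their combined speed; the first train
# covers its share of the distance in that same time.
distance = 10_000.0   # meters between the trains
v1, v2 = 12.0, 15.0   # m/s, moving toward each other

t = distance / (v1 + v2)        # seconds until they meet
first_train_distance = v1 * t   # meters traveled by the first train
print(f"{first_train_distance:.0f} m")  # → 4444 m
```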

If you don't think it's worth doing explicit probability calculations this way, even to practice and try and get better or as a way to train the habit of how the numbers should move, that seems like a different objection and one you would have with any guide to Bayes. That's not to say you shouldn't raise the objection, but that doesn't seem like an objection that someone did the math wrong!

And of course maybe I'm completely missing your point.

Yep, it's accessible. I haven't gone. 

This ties into a point I don't think I made very well in the original post, which is that doing all the work yourself and letting people feel like it's handled is tugging the ladder up at least a little bit. Imagine someone growing up in a household where their parents always cook all the meals, then they move out and abruptly realize they don't know how to fry an egg. It was always possible to watch the meal preparation, but why would they do that if they don't think ahead and realize someday they're going to have to do it themselves? 

There's a hazard in taking care of a problem too completely and too seamlessly, especially if you might someday stop. The American government is not what most people would call complete and seamless, but it has managed to let people not really pay attention to how it works most of the time.

Chief Bob's hearings are in your neighborhood, involve your neighbors, and you're expected to go and watch the proceedings because everyone else does. I'm not saying that's better overall; policy debates are not one-sided.
