gilch

As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.

Sequences

An Apprentice Experiment in Python Programming
Inefficient Markets

Wiki Contributions


Comments

gilch

I feel like this has come up before, but I'm not finding the post. You don't need the stick-on mirrors to eliminate the blind spot. I don't know why pointing side mirrors straight back is still so popular, but that's not the only way it's taught. I have since learned to set mine much wider.

This article explains the technique. (See the video.)

In a nutshell, while in the driver's seat, tilt your head to the left until it's almost touching your window, then, from that perspective, point the mirror straight back so you can just see the side of your car. (You might need a similar adjustment for the passenger's side, but those are often already wide-angle.) Now, from your normal position, you can see your former "blind spot". When you need to see straight back in your side mirror (like when backing out), just tilt your head again. Remember that you also have a center mirror. You should be able to see passing cars in your center mirror, then in your side mirror, then in your peripheral vision, without ever turning your head or completely losing sight of them.

gilch
  • It's not enough for a hypothesis to be consistent with the evidence; to count in favor, the evidence must be more likely under the hypothesis than under its negation. How much more is how strong. (Likelihood ratios.)
  • Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you're using logarithms; see the sketch after this list).
  • Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don't say, "I don't know." You know a little.
  • A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam's razor.)
    • The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
    • Solomonoff's Lightsaber is the right way to think about this.
  • More direct evidence can "screen off" indirect evidence. If it's along the same causal chain, you're not allowed to count it twice.
  • Many so-called "logical fallacies" are correct Bayesian inferences.
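
To make the likelihood-ratio bookkeeping above concrete, here's a minimal Python sketch of the odds form of Bayes' theorem, with made-up numbers purely for illustration: the prior odds get multiplied by each piece of evidence's likelihood ratio, or, equivalently, the log-odds add.

```python
from math import log10

def update_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes' theorem: multiply prior odds by each likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical prior: 1:9 against the hypothesis (probability 0.1).
prior_odds = 1 / 9

# Three independent, individually weak pieces of evidence,
# each only twice as likely if the hypothesis is true.
weak_evidence = [2.0, 2.0, 2.0]

posterior_odds = update_odds(prior_odds, weak_evidence)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability ≈ {posterior_prob:.2f}")  # ≈ 0.47

# In logarithms, the same update adds instead of multiplies.
log10_odds = log10(prior_odds) + sum(log10(lr) for lr in weak_evidence)
print(f"log10 odds ≈ {log10_odds:.3f}")  # ≈ -0.051
```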
gilch

French, but because my teacher tried to teach all of the days of the week at the same time, they still give me trouble.

They're named after the planets: Sun-day, Moon-day, Mars-day, Mercury-day, Jupiter-day, Venus-day, and Saturn-day.

It's easy to remember when you realize that the English names are just the equivalent Norse gods: Saturday, Sunday and Monday are obvious. Tyr's-day (god of combat, like Mars), Odin's-day (eloquent traveler god, like Mercury), Thor's-day (god of thunder and lightning, like Jupiter), and Freyja's-day (goddess of love, like Venus) are how we get the names Tuesday, Wednesday, Thursday, and Friday.

Answer by gilch

While an institution's reliability and bias can shift over time, I think AP and Reuters currently fit the bill. They report the facts the most reliably of any big-name general news sources I know of, without much analysis or opinion. Their political leaning is nearly neutral, though perhaps slightly left of the line (Reuters might be a bit less biased than AP, but still on the left side).

The Wall Street Journal is a little less reliable on the facts, also centrist, but on the right side of the line due to its business focus. Reading it as well may help you counterbalance AP's and Reuters' slight left bias without resorting to the unreliable right-wing extremist sources.

If you want only one source, The Hill is about as nonpartisan as it gets (maybe a bit less reliable on the facts than the WSJ, but still pretty good). They report on both sides of the aisle. Their focus is, in their words, "on the inner workings of Congress and the nexus of politics and business".

[Epistemic status: I looked at the Ad Fontes Media Bias Chart. Exactly how impartial their judgements are, I can't say, but they do seem to try. Media Bias/Fact Check mostly agrees with these judgements, but I don't think they're any more reliable.]

That said, even an "impartial" news source (to the extent there is such a thing) is going to give you a very distorted view of the world due to selection biases and the Overton Window. "Newsworthy" stories are, by their nature, rare occurrences, and will tend to amplify your availability bias. Don't lose sight of base rates. Our World in Data should be worth exploring for that reason. They publish what they think is important rather than what is new.

gilch

Why is Google the biggest search engine even though it wasn't the first? It's because Google has a better signal-to-noise ratio than most search engines. PageRank cut through all the affiliate cruft when other search engines couldn't, and they've only continued to refine their algorithms.

But still, haven't you noticed that when Wikipedia comes up in a Google search, you click that first? Even when it's not the top result? I do. Sometimes it's not even the article I'm after, but its external links. And then I think to myself, "Why didn't I just search Wikipedia in the first place?" Why do we do that? Because we expect to find what we're looking for there. We've learned from experience that Wikipedia has a better signal-to-noise ratio than a Google search.

If LessWrong and Wikipedia came up in the first page of a Google search, I'd click LessWrong first. Wouldn't you? Not from any sense of community obligation (I'm a lurker), but because I expect a higher probability of good information here. LessWrong has a better signal-to-noise ratio than Wikipedia.

LessWrong doesn't specialize in recipes or maps. Likewise, there's a lot you can find through Google that's not on Wikipedia (and good luck finding it if Google can't!), yet we still choose Wikipedia over Google's top hit when it's available. What is on LessWrong is insightful, especially in normally noisy areas of inquiry.

gilch

I feel like these would be more effective if standardized, dated and updated. Should we also mention gag orders? Something like this?

As of June 2024, I have signed no contracts or agreements whose existence I cannot mention.
As of June 2024, I am not under any kind of gag order whose existence I cannot mention.
Last updated June 2024. I commit to updating at least annually.

Could LessWrong itself be compelled even if the user cannot? Should we include PGP signatures or something?

gilch

I thought it was mostly due to the high prevalence of autism (and the social anxiety that usually comes with it) in the community. The more socially agentic rationalists are trying.

gilch

But probably he should be better at communication e.g. realizing that people will react negatively to raising the possibility of nuking datacenters without lots of contextualizing.

Yeah, pretty sure Eliezer never recommended nuking datacenters. I don't know who you heard it from, but this distortion is slanderous and needs to stop. I can't control what everybody says elsewhere, but it shouldn't be acceptable on LessWrong, of all places.

He did talk about enforcing a global treaty backed by the threat of force (because all law is ultimately backed by violence, don't pretend otherwise). He did mention that destroying "rogue" datacenters (conventionally, by "airstrike") to enforce said treaty had to be on the table, even if the target datacenter is located in a nuclear power that might retaliate (possibly risking a nuclear exchange), because risking unfriendly AI is worse.

gilch

The argument chain you presented (Deep Learning -> Consciousness -> AI Armageddon) is a strawman. If you sincerely think that's our position, you haven't read enough. Read more, and you'll be better received. If you don't think that, stop misrepresenting what we said, and you'll be better received.

Last I checked, most of us were agnostic on the AI Consciousness question. If you think that's a key point in our Doom arguments, you haven't understood us; that step isn't a required link in the chain of argument. Maybe AI can be dangerous, even existentially so, without "having qualia". But neither are we confident that AI necessarily won't be conscious. We're not sure how consciousness works in humans, but it seems to be an emergent property of brains, so why not artificial brains as well? We don't understand how the inscrutable matrices work either, so it seems like a possibility. Maybe gradient descent and evolution stumbled upon similar machinery for similar reasons. AI consciousness is mostly beside the point. Where it does come up is usually not in the AI Doom arguments, but in questions about what we ethically owe AIs as moral patients.

Deep Learning is also not required for AI Doom. Doom is a disjunctive claim; there are multiple paths for getting there. The likely-looking path at this point would go through the frontier LLM paradigm, but that isn't required for Doom. (However, it probably is required for most short timelines.)

gilch

You are not wrong to complain. That's feedback. But this feels too vague to be actionable.

First, we may agree on more than you think. Yes, groupthink can be a problem, and gets worse over time, if not actively countered. True scientists are heretics.

But if the science symposium allows the janitor to interrupt the speakers and take all day pontificating about his crackpot perpetual motion machine, it's also of little value. It gets worse if we then allow the conspiracy theorists to feed off of each other. Experts need a protected space to converse, or we're stuck at the lowest common denominator (incoherent yelling, eventually). We unapologetically do not want trolls to feel welcome here.

Can you accept that the other extreme is bad? I'm not trying to motte-and-bailey you, but moderation is hard. The virtue lies between the extremes, but not always exactly in the center.

What I want from LessWrong is high epistemic standards. That's compatible with opposing viewpoints, but only when they try to meet our standards, not when they're making obvious mistakes in reasoning. Some of our highest-karma posts have been opposing views!

Do you have concrete examples? In each of those cases, are you confident it's because of the opposing view, or could it be their low standards?
