dr_s


dr_s

The problem is that, as usual, people will worry that the NatSec guys are using the threat to slip in additional surveillance and censorship for political purposes - and they probably won't be entirely wrong. We keep undermining our civilizational toolset by deploying extreme measures for trivial partisan stuff, and that erodes trust.

Answer by dr_s

I honestly don't think ARA immediately and necessarily leads to overall loss of control. It would in a world that also has widespread robotics. What it could be, however, is a cataclysmic event for the Internet and the digital world, possibly on a par with a major solar flare, which is bad enough: destruction of trust, cryptography broken, the banking system belly-up, IoT devices and basically all networked systems potentially compromised. We'd look at old computers that had been disconnected from the Internet since before the event the way we look at pre-nuclear steel. That is in itself bad and dangerous enough to worry about, and far more plausible than outright extinction scenarios, which require additional steps.

dr_s

Yeah, I think the idea is "I get the point, you moron, now stop speaking so loud or the game's up."

dr_s

It's not that people won't talk about spherical policies in a vacuum; it's that the actual next step, "how does this translate into actual politics", is forbidding. Which is kind of understandable, given that we're probably not very peopley persons, so to speak - inclined to high decoupling - and politics can objectively get very stupid.

In fact, my worst worry about this idea isn't that there wouldn't be consensus; it's how it would end up polarising once it's mainstream enough. Remember how COVID started as a broad "let's keep each other safe" reaction and then immediately collapsed into idiocy as soon as worrying about pesky viruses became coded as something for liberal pansies? I expect something similar might happen with AI, and I'm not sure in which direction either (there's a certain anti-AI sentiment building up on the far left, but ironically it denies the existence of X-risks entirely, as a right-wing delusion concocted to hype up AI even more). Depending on how those chips fall, actual political action might require all sorts of compromises with annoying bedfellows.

dr_s

I mean, if a mere acquaintance told me something like that, I don't know what I'd say, but it wouldn't be an offer to "talk about it" right away - I wouldn't enjoy talking about it with a near-stranger, so I'd expect the same applies to them. It's one of those prefab reactions that don't really hold much water under close scrutiny.

dr_s

"I find that rather adorable"

In principle it is, but I think people need some self-awareness to distinguish between "I wish to help" and "I wish to feel like a person who's helping". The former requires focusing genuinely on the other person, rather than going off a standard societal script. Otherwise, if your desire to help ends up merely forcing the supposedly "helped" person to entertain you, after a while you'll effectively be perceived as a nuisance, good intentions or not.

dr_s

Hard agree. People might be traumatised by many things, but you don't want to convince them that they should be traumatised, or to define their identity around trauma (and then possibly insist that if they swear up and down they aren't, that just means they're really repressing it or not admitting it - this has happened to me). That only increases the suffering! If they're not traumatised, great - they dodged a bullet! It doesn't mean that e.g. sexual assault is any less bad, the same way shooting someone isn't any less bad just because you happened to miss their vital organs (OK, admittedly attempted murder is punished less than actual murder... but morally speaking, I'd say how good a shot you are has no relevance).

Answer by dr_s

The thing is, it's hard to come up with ways to package the problem. I've tried doing small data-science efforts for lesser chronic problems, on myself and my wife, recording the kinds of biometric indicators that were likely to correlate with our issues (e.g. food diaries vs. symptoms), and it's still almost impossible to suss out meaningful correlations unless it's something as basic as "eating food X causes you immediate excruciating pain". In a non-laboratory setting, controlling environmental conditions is impossible. Actual rigorous datasets, if they exist at all, are mostly privacy-protected. Relevant diagnostic parameters are often incredibly expensive and complex to acquire, and possibly gatekept.

The knowledge aspect is almost secondary IMO (after all, lots of the recommendations your doctor will give you are still little more than empirical fixes someone came up with by analysing data; mechanistic explanations don't go very far when dealing with biology). But even the data science, which would be doable by curious individuals, is forbidding. Entire fields of actual, legitimate academia are swamped in this sea of noisy correlations and statistical hallucinations (looking at you, nutrition science). Add to that the risk of causing harm to people even when well-meaning, and the ethical and legal implications of that, and I can see why this wouldn't take off. SMTM's citizen research on obesity seems the closest thing I can think of, and I've heard plenty of criticism of it and its actual rigour.
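To make the "sea of noisy correlations" point concrete, here's a minimal sketch of the kind of food-diary analysis described above. Everything in it is hypothetical: the column names, effect sizes, and data are invented for illustration, and a real attempt would face all the confounders mentioned above.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
days = 120

# Fully synthetic diary: daily food intakes are random, and the symptom score
# is mostly noise plus a weak next-day dairy effect (the planted "signal").
diary = pd.DataFrame({
    "dairy_g": rng.integers(0, 300, days),
    "gluten_g": rng.integers(0, 200, days),
    "coffee_cups": rng.integers(0, 5, days),
})
diary["symptom"] = rng.normal(3.0, 1.0, days) + 0.004 * diary["dairy_g"].shift(1).fillna(0)

# Test every food at several lags -- the multiple-comparisons trap that
# drowns this kind of self-experiment in spurious hits.
for food in ["dairy_g", "gluten_g", "coffee_cups"]:
    for lag in (0, 1, 2):
        rho, p = spearmanr(diary[food].shift(lag), diary["symptom"], nan_policy="omit")
        print(f"{food:>11} lag {lag}d: rho={rho:+.2f}  p={p:.3f}")
```

With nine tests on 120 noisy days, an uncorrected p < 0.05 hit or two is expected by chance alone, before you even get to uncontrolled confounders like stress, sleep, or seasonality - which is roughly why even the "doable" data-science part is forbidding.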

dr_s

It doesn't change much; it still applies, because when talking about hypothetical, really powerful models, ideally we'd want them to follow very strong principles regardless of who asks. E.g. if an AI were in charge of a military, it obviously wouldn't be open, but it shouldn't accept orders to commit war crimes, even from a general or a president.

dr_s

I'm not sure if those are precisely the terms of the charter, but that's beside the point. It is still "private" in the sense that there is a small group of private citizens who own the thing and decide what it should do, with no political accountability to anyone else. As for the "non-profit" part, we've seen what happens to that as soon as it's in the way.
