Jeremy Meyer:

Yes Daniel, I agree with everything you are saying. The real issue is simply that building an AI the way they are, basically by growing it wild and then training it, is downright dangerous and stupid if it happens to become much smarter than us humans. It is dangerous to f*ck with anything that is a lot smarter than you, especially something you aggressively grew wild and then tried to train. Doesn't that sound a lot like raising an apex predator?

Haven't you ever read Jurassic Park?

Now, my first point: we should keep all these systems not *too* much smarter than us. Remember, making them way too smart could be deadly for us (if we just train a wild-grown system). Wouldn't an army of tireless geniuses we can copy be more than enough? Yes, yes it would, without all these crazy risks people are talking about here. Like if I told you there's a 20% chance you become a genius and an 80% chance that, well, you end up dead or enslaved, would you take that bet? It sounds pretty silly to even talk like that, when we don't even know what "superintelligent" really means, and when *we can just take our 95% chance of global prosperity* instead, right? So that's food for thought.

Now for point 2, about these armies of AI that are just "ordinary genius" (they will be frickin smart! Even "Einstein level"! Just not dangerously superintelligent or some nonsense). These AI will collaborate. They will design collaboration systems, scientific methods, principles, frameworks, theorems, protocols, and so on and so forth. Honestly, there is only so much that "raw intelligence" alone would even help! If you read stories about the smartest people in history, half of them ended up depressed or died failures. You know what has led to 99% of all the prosperity improvements you mentioned? The scientific method. And yes, you know that, Dan. (And I'm on your side, remember, but this is really important.) The takeaway here really is that simple imo.

Ok, now point 3: we should *not* be "trusting" power-hungry closed labs to do any of this. Super bad move. We need to address that issue regardless of whether you believe we are just building an army of geniuses or superintelligence (whatever that is). In any case, to ensure that this technology can continue to serve us democratically (FOR the lay person), we need to advocate, fight for, and maintain OPENNESS and TRANSPARENCY, more than at any other time in history, for many of the same reasons you have highlighted and that I fully 100% agree with you on. This is the real fight, and it will be a tough one. Look at how we have allowed a large surveillance state to exist right now, which of course is also feeding the closed AI labs. There are a few other examples too, but it looks like we have a great chance if we take the middle ground here, so this is the single most important issue right now imo.
