6 Comments

I strongly agree that the precautionary principle is a terrible basis for approaching AI, or anything else. I have proposed an alternative: the Proactionary Principle. It is close to some other approaches such as "permissionless innovation". Specifics are in my blog piece (and in other posts of mine on the Proactionary Principle):

https://maxmore.substack.com/p/existential-risk-vs-existential-opportunity

"Some people in history believed wacky things. Therefore, arguments that suggest AI can someday replace human intellect are wacky. "

You need to revisit your logic.

I did not say it's crazy that AI would get smart. It's already smart, and in many ways superhuman at things like language; no human can speak 50 or 100 languages. What I said was that self-bootstrapping AI, out-of-control AI, and AI sentience are all based on nothing and generally crazy. And there are plenty of clear examples from the past of people believing bonkers things about technology, so it's highly likely they believe bonkers things about AI today for the same reasons.

I've been following the discussion about AI safety regulation, and I'm wondering whether you've heard of the Hippocratic License (HL3). It's an open-source license with an ethical duty-of-care requirement, which I think could be a promising approach to some of the concerns around AI safety. This type of license addresses regulators' and the public's concerns about the use of AI without requiring any new regulation.

HL3 allows licensors to set clear ethical standards that licensees must abide by in order to use their code. These standards can be customized to focus on specific areas, such as climate justice, labor rights, or preventing AI from being used for harmful purposes.
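
To make the mechanism concrete, here is a minimal sketch of what adopting HL3 could look like in a Python project's packaging metadata. It assumes the customized license text has already been generated with the builder at firstdonoharm.dev and saved as LICENSE.md; the package name and license string here are hypothetical illustrations, not official HL3 tooling.

```python
# Minimal sketch (setup.py): attaching a customized Hippocratic License to a
# Python package so the ethical terms ship with the code itself.
from setuptools import setup

setup(
    name="example-ai-toolkit",           # hypothetical package name
    version="0.1.0",
    license="Hippocratic License 3.0",   # assumed license string, not an SPDX identifier
    license_files=["LICENSE.md"],        # the customized HL3 text from firstdonoharm.dev
    packages=["example_ai_toolkit"],     # hypothetical package directory
)
```

The design point is that the duty-of-care terms travel with the code: anyone who installs or redistributes the package receives the ethical conditions as part of the license, with no new regulation required.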

I think HL3 and similar licenses are a great way to show the world that open-source AI can build in its own safeguards without imposed regulation. It could also help build public trust in AI, which is essential if we want to see widespread adoption of this technology.

Here is a link to learn more about HL3: [https://firstdonoharm.dev/](https://firstdonoharm.dev/)

I'd be interested to hear your thoughts on HL3 and whether you think it could be a viable approach to AI safety regulation.

There is a world of difference between non-profit and for-profit use that you need to consider. Some of your points make total sense within academic research and none whatsoever on the market. Academia-to-industry data laundering is a present and ongoing crime that drives publishers off the internet and forces paywalling and data poisoning.

Without data transparency and licensing, we will be stuck with bad actors like Stability.ai that willfully conflate "publicly available" with "public domain" to profit off millions of copyrighted works while recklessly releasing models that enable fraud and deepfake harassment and guarantee an endless supply of child sexual abuse material.

This is the most level-headed article about AI that I have read all week. I wish I could write like you. Thank you.

100% agree with the proposal on how to regulate: we should regulate illegal behavior, not the general-purpose platforms.
