Why the Future of Open Source AI is So Much Bigger Than Stable Diffusion 1.5 and Why It Matters to You
The release of Stable Diffusion unleashed a tremendous amount of innovation in an incredibly short period of time.
But there is a reason we've taken a step back at Stability AI and chosen not to release version 1.5 as quickly as we released earlier checkpoints. Nor will we stand by quietly when other groups leak the model to draw quick press to themselves while trying to wash their hands of responsibility.
We've heard from regulators and the general public that we need to focus more strongly on security, to ensure that we're taking every possible step to keep people from using Stable Diffusion for illegal purposes or to hurt others. But this isn't something that matters only to outside folks; it matters deeply to many people inside Stability and inside our community of open source collaborators. Their voices matter to us. At Stability, we see ourselves less as just a company and more as a classical democracy, where every vote and voice counts.
In the absence of news from us, rumors began swirling about why we hadn't yet released the next version. Some folks in the community worry that Stability AI has gone closed source and will never release a model again. That is simply not true. We are committed to open source at our very core.
Open source is where innovation comes from, and a long history with software like Linux shows it is the best way to deliver value to society as a whole. While closed AI systems have seen very limited use, innovators and entrepreneurs have already woven Stable Diffusion into an amazing array of potentially game-changing applications for American business: prototype synthetic brain-scan images that can drive medical research, on-demand interior design, incredibly powerful Hollywood-style film effects, seamless textures for video games, new kinds of rapid animation that can drive tremendous new streaming content, on-the-fly animated videos and books, concept art, plugins for Figma and Photoshop, and much more. Openness works because no single company or person can imagine all the possible ways to use a brilliant new technology.
Others worry that we plan to neuter the models to the point of uselessness by trying to chase down every edge case. We understand that it's impossible to solve for every edge case, but we don't need to do that. What we do need to do is listen to society as a whole, listen to regulators, and listen to the community.
We are forming an open source committee to decide on major issues such as data cleaning, NSFW policies, and formal guidelines for model release. This framework has a long history in open source, and it has worked tremendously well. Open source AI should be guided by the same democratic principles. We have also announced a $200,000 prize for deepfake detection, and we will release that software open source and free of charge to help society combat this abuse of machine learning.
So when Stability AI says we have to slow down just a little, it's because if we don't address very reasonable feedback from society and our own communities, there is a chance that open source AI simply won't exist, and nobody will be able to release powerful models. That's not a world we want to live in.
We believe in open source AI. Help us lay the groundwork so that we have a firm foundation to build on, and so that we and everyone else can release models that matter, now and in the future.
Help us make AI truly open, rather than open in name only.