Why AI Doomers Must Be Stopped, Why Open Source Is More Essential than Ever, and Why Some AI Makers Hope to Create a Licensing Moat That Locks Out Competition and Grants Them the Divine Right to Rule Forever
Funnily enough, your writing complements a thought I had recently very well - one you may also find interesting.
I get all your points (and love the passion you have for all things open and free ;-)). What I can add is a more scientific, less impassioned look at, and evaluation of, the power mechanics you describe so well: after some reflection, I see a strong analogy between the attempt to "weaponize" legislation and regulation against (but not limited to) open source and... the data science process.
I really racked my brain over how we in Europe can be so eager to shoot ourselves in the kneecaps (again), now in matters of AI... and the surprising answer is "overfitting". The effects you describe so well combine to produce a legislative framework whose output shows all the signs of this phenomenon:
- A public that no longer thinks in spectrums, but rejects any innovation if the slightest adverse effect MAY be attached to it.
- A bureaucracy grown out of proportion, jumping all too willingly at the occasion to make itself ever more indispensable by creating ever more Byzantine structures that "safeguard" the status quo.
- Big Tech and Big Industry looking on happily while the fresh air of competition is suffocated under the weight of the resulting "overlegislation", allowing them to permanently exploit and extend their sinecures.
But, as you say, all is not gloom: seeing this effect as a form of overfitting provides a way to approach things more scientifically - to describe and measure them, and to communicate about (a) the measurable negative consequences and (b) possible countermeasures.
In this analogy, our socio-legal framework is "the model", and politicians and lawmakers are the data scientists who SHOULD aim to optimize the model's performance.
But it looks (especially in Europe) as though they have fallen into the overfitting trap, trying to secure EVERY point on the graph with their model. And this has the well-known adverse effects of overfitting:
(a) It makes the model ever more complex, less manageable, and more opaque. The motivation of the "power caste" is clear: as said above, the increasing complexity is the perfect justification for a self-reinforcing expansion of the very bureaucracy that (co-)created the problem in the first place (though there are, of course, other interests at play as well).
(b) More importantly: the model stops generalizing! This is THE key point - and it explains why those "operating the model" have no interest in innovation: new data points risk exposing the fragility of their creation. A creation that supposedly explains all there is... but not what will be. It is really a classic case of "training set" vs. "validation set" - with the validation set representing the "noise" of innovation, creative ideas, and so on, exposing the need for the model not to be too specific and complicated, so that it can generalize to these new data points as well. And so those in power dislike new data points, because they call their model into question.
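To make the analogy concrete, here is a minimal toy sketch in Python (my own illustration using NumPy's polyfit on made-up numbers, not real data): a high-degree polynomial "secures every point on the graph" of the training data, yet fails on the new data points it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying trend: y = sin(x) + noise
x_train = rng.uniform(0, 6, 20)
y_train = np.sin(x_train) + rng.normal(0, 0.3, 20)

# "New data points": innovations, external shocks - data the model never saw
x_val = rng.uniform(0, 6, 20)
y_val = np.sin(x_val) + rng.normal(0, 0.3, 20)

def fit_and_score(degree):
    """Fit a least-squares polynomial; return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_val, y_val)

for degree in (3, 15):
    train_err, val_err = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, val MSE {val_err:.3f}")

# The simple degree-3 model scores similarly on both sets; the "Byzantine"
# degree-15 model typically drives the training error toward zero while the
# validation error blows up - it has stopped generalizing.
```

The design point is exactly the one above: adding parameters (or paragraphs of regulation) always improves the fit on the data you already have, and only the held-out "validation set" reveals the damage.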
All this being said: would you know of any social science study that relates some measure of "legislation density" to "results", e.g. in terms of economic growth or well-being? It could be really interesting to (a) verify my hypothesis and (b) use it as an argument in the debate, to push for the higher degree of freedom needed to ensure that our socio-legal framework generalizes better to new data points, be they new technologies, new societal trends, or challenges from external shocks.
Great piece; completely agree! There is also, I would argue, a subcategory of Gamers, and I can think of more than one prominent person in this role. These folks literally sell fear and dread. They give press interviews, speak at conferences, and post on LinkedIn with the express purpose of scaring people into thinking AI is dangerous, out to get you, all hype and no help, and so on. Yet all their income - and some of it is substantial - comes from selling their proprietary version of AI directly to the government after they have sufficiently scared procurement half to death, or from cashing in on high-priced advisory or speaking fees after railing against the harms of generative AI and the like. How these people sleep at night is beyond me.
Long read - but as usual totally worth it.