Discussion about this post

Michał Kubiak

Great piece, thank you :) I think some of the confusion and side-switching from tech CEOs on AI x-risks might also be explained by another factor: their different market strategies. This comes from a great paper on open source AI titled "OPEN (FOR BUSINESS): BIG TECH, CONCENTRATED POWER, AND THE POLITICAL ECONOMY OF OPEN AI" (highly recommended in its entirety):

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807

From p. 17: "It’s worth noting here that we’ve described two instances of lobbying through two entities tightly tied to Microsoft, seemingly in different directions. GitHub’s argument, outlined in section one [that open source AI is good and should not be regulated], is self interested because they (and Microsoft) rely on open source development, both as a business model for the GitHub platform and as a source of training data for profitable systems like CoPilot. This makes sense for them, while OpenAI is arguing primarily that models “above a certain threshold" should not be open — a threshold that they effectively set due to resource monopolies Microsoft benefits from. So, open source exceptions are good for them. Arguing that open sourcing their powerful models is dangerous also benefits OpenAI — this claim both reasserts the power of their models and allows them to conflate resource concentration with cutting edge scientific development."

Could it be that we've seen similar dynamics of playing to both sides with AI x-risk? Even so, you're right that "If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore."

Metastable

Satya Nadella, in an interview from February 2023 (https://www.youtube.com/watch?v=YXxiCwFT9Ms, at 16:55):

Interviewer: And then I have to ask and I sound a little bit silly. I feel a little bit silly even contemplating it, but some very smart people ranging from Stephen Hawkins [sic!] to Elon Musk to Sam Altman, who I just saw in the hallway here, your partner at OpenAI, have raised the specter of AI somehow going wrong in a way that is lights out for humanity. You're nodding your head. You've heard this too.

Nadella: Yeah.

Interviewer: Is that a real concern? And if it is, what are we doing?

Nadella: Look, I mean, runaway AI, if it happens, it's a real problem. And so the way to sort of deal with that is to make sure it never runs away. And so that's why I look at it and say let's start with-- before we even talk about alignment and safety and all of these things that one should do with AI, let's talk about the context in which AI is used. I think about the first set of categories in which we should use these powerful models are where humans unambiguously, unquestionably are in charge. And so as long as we sort of start there, characterize these models, make these models more safe and, over time, much more explainable, then we can think about other forms of usage, but let's not have it run away.
