I've been appreciating your writing for a while. I have to say, though, that handwaving away AI x-risk as "science fiction fantasy" is a bit disappointing, especially as you quote someone like Hinton explicitly warning about it. Either something more substantial on why x-risk isn't a worry, or something less committal about its possibility, would have felt more on-point.
Hi Pranay,
Good article. However, some thoughts...
The "take out the commercial incentive from the providers of the foundation platform" - won't that just kill innovation? AI is one field where the industry has been streets ahead of academia. Unlike physics or biology, the cost to innovate in AI has been brought down drastically. The good thing about AI, the technology is fast getting open-sourced. In fact, the We Have No Moat memo, purportedly from someone in Google, alluded to the tech giants playing catch up with Opensource.
My belief is that regulating AI has to be based on outcomes. Can regulation be applied equitably across domains? I see that as a challenge. The use cases of financial loss vs. identity loss vs. poor medical care have different outcomes and different impacts, and each has to be dealt with separately.
Also, it would be intriguing to see how states intend to regulate all AI code. Assuming that all tech will use AI in some form or another, that would require both state capacity and compute capacity beyond what is possible now.
Can you please help us make sense of why Manipur is burning?