Have we reached peak AI?

This is maybe the best article I’ve seen on AI

Pretty good critique. I think there are probably some “true believers” here who would take issue with some of the technical details (please comment!). And I don’t think AI will go away, but ten years from now we will probably look back on certain things the way we look back on the pets.com of the dotcom boom.

I feel like the premise of the article is flawed in that the author references only the shortcomings of GPT-3/4, yet fails to explain how these shortcomings can be “blown up” and superimposed onto all of “AI”.

I can see how the argument applies specifically to OpenAI, but I think the author suffers from a fairly limited understanding of how expansive this field is, and has been for years. As an example, for more than five years we have been integrating AI and machine learning to improve QoL features on cell-phone cameras. Contra the author’s anecdotes, I have friends at successful development firms who have suspended hiring for the next fiscal year in favor of Copilot licenses.

I also don’t really understand the criticism of Murati. She is not even a native English speaker. Moreover, I don’t think it is necessarily fair to criticize an executive for not knowing the exact composition of training data that runs into the hundreds of GB.

To me there are a few interesting applications of AI/ML, and I think they are very important and transformative. Most revolve around automating away rote or error-prone work. The power of having something semi-intelligent that can effectively iterate on a model until it finds one that fits your data seems real to me; maybe it even constitutes the original promise of the personal computer. I don’t really see how it could be any less transformative than the shift from the abacus to the calculator. Did the calculator ratchet us up to a higher plateau of human existence and free man forever from the drudgery of work? No, of course not. But it did make light work of previously laborious calculations, thereby (in my opinion) allowing a higher general standard of mathematical understanding among people. In my mind, the promise of AI is to free us from another layer of this very rote cognitive work.

Yeah, those are all good points. To me the best parts of the critique were about the hand-wavy, pie-in-the-sky stuff the execs do to pump up the hype. Well, that is their job and what their bonuses reward them for, so it’s not surprising that they do it. The author is right that interviewers should be asking the hard questions instead of letting them off the hook (and that the media is probably complicit in the whole thing).

There have been recent improvements, but your point that this stuff has been improving incrementally for years is a good one. To some extent this is all just a marketing/hype grift to re-label ML as computer intelligence and drive valuations. Like I said, we’ll still have AI/ML in 10 years and it will be amazing, but we’ll look back on this stock-market bandwagon as silly.

One comparison I haven’t seen yet, but that came to mind, is how AI will relate to the concept of “Bullshit Jobs”. I think many of the jobs AI can easily replace are exactly those, but why those jobs existed in the first place is a whole other discussion.

I agree with much of the article – but not for the reasons the author specifies. Most of the article focuses on the wishy-washy OpenAI PR, which seems obviously aimed at a mix of hyping things up, concealing what they are actually doing, and strange virtue signalling about making ‘humanity better’ or ‘people more creative’.

One of the most interesting technical questions, IMO, is whether the training cycle of generative AIs is stable, in the sense that an ever-growing amount of generated material will leak into the training sets. How will that affect training in a few years – or even already today?
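That feedback loop can be sketched with a toy simulation – emphatically not a claim about how real LLM training behaves, just the simplest possible analogue: fit a Gaussian to some data, “publish” samples from the fitted model, and make those samples the next generation’s training set. Under these assumptions the learned distribution drifts and its spread tends to collapse over generations, which is the worry in miniature:

```python
import random
import statistics

random.seed(0)

def next_generation(samples):
    """Fit a Gaussian to the samples, then draw a fresh 'training set'
    from the fitted model -- mimicking generated material leaking back
    into the next round of training."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

# Generation 0: "real" data from N(0, 1); a small sample on purpose,
# so that estimation noise is visible.
data = [random.gauss(0.0, 1.0) for _ in range(30)]
initial_sd = statistics.stdev(data)

for _ in range(1000):
    data = next_generation(data)

final_sd = statistics.stdev(data)
# Sampling noise compounds each round, so the spread of the learned
# distribution tends to shrink toward zero over many generations.
print(f"std dev: {initial_sd:.3f} -> {final_sd:.6f}")
```

The mechanism is just that each generation re-estimates its parameters from a noisy finite sample of the previous one, so estimation error accumulates instead of averaging out.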

Another is whether we have already reached the ‘final form’ of GPT-style models. As far as I know, there is not much room left to expand the training data: the latest models have already been trained on pretty much the whole public internet, many books, open-access papers, etc.

There are a bunch of rumours about Q*, but nothing is public yet…

Will GPTs ever be able to reason? Is Q* a GPT?