AI bubble: time to panic? Perhaps not yet... maybe now


OMG! :fearful: Are we going to trust Petrostates with AI?!

Artificial intelligence can be bought, but natural stupidity is free.


Isn’t it self-fulfilling in a way? If the would-be AGI builder matches the investors’ intelligence, they will get the money and build it; but if the investors are more intelligent, they won’t fund the builder, because they can see it can’t be done.


I think Cory Doctorow’s article “What kind of bubble is AI?” is more cogent, since it goes into why it may be hard to extract profits from supercomputer-scale “AI” (à la LLMs).


Doctorow ends:

I wonder, what, if anything, is currently in this category (i.e. its disappearance will be acutely felt)?

ChatGPT literally just helped me find the root cause of a bug (conversation: ChatGPT), and then I could fix it and make a PR (Fix `JULIA_CPU_TARGET` being propagated to workers precompiling stdlib pkgimages by KristofferC · Pull Request #54093 · JuliaLang/julia · GitHub).

So I don’t know what the “AI bubble” is. I just use the best tools available to make me as productive as I can. Any other approach seems like a losing battle.

If this thread is about market caps and valuations of companies that develop and sell AI tools then I don’t see what is so interesting to discuss. Speculators gonna speculate :person_shrugging:.


How much did you pay for this service?

Doctorow’s argument (read the article!) is that models like ChatGPT are incredibly expensive to run, and are currently being kept afloat by investor speculation that they will have big returns someday. But he argues that the fault-tolerant services where AI tools actually work don’t pay very much, whereas the high-paying applications (involving replacing expensive humans with computers, not simply augmenting humans with error-checking) are too fault-intolerant to become reality (at least, not quickly enough for the investors). Hence, a bubble.

He doesn’t mention one application that is fault-tolerant and potentially high-paying — oppressive governments with deep pockets who want to flood the web with propaganda (which doesn’t need to be accurate). It’s depressing to think that this might become the main funding source that keeps ChatGPT-scale LLMs afloat.


Which may be why Altman is romancing petrostates in the Mideast!


If it’s sufficiently acutely felt there will be a sustainable market for it, although the product may be more expensive than today.

AGI, GenAI etc… are simply the current names for the game-levels in the game of increasing computation. Moore’s law going back at least 50 years with no pops or dips suggests we can rest assured this ship will continue for another five years (after that I dare not predict).

The issue is not whether you can increase the compute power. The question is whether you can afford to. My understanding is that these companies currently aren’t profitable, and you can’t continue making negative profits indefinitely before investors pull the plug.


True. And some investors (probably the dupes of the last bubble) will lose money. But the personnel and the compute will move on to be funded by the next unlocked level. So the inside of these companies will stay the same; perhaps the logo outside will change (on a five-year horizon). Ilya Sutskever will have a job.

I did, didn’t think much of it though.

That’s more or less every technology ever invented though.

Anyway, if AI in the end turns out to be too expensive to help me find bugs in code I guess that service will go away (or be expensive enough that maybe I don’t consider it worth it). Until then, I’m using the best tool available for the job.


FWIW, this already happens. At a state level, hiring 10,000 people to run a troll farm is trivial. If anything, sophisticated AI “levels the field” and could possibly be used to combat this.


I don’t think Doctorow is referring to accurate numbers. The ChatGPT Plus subscriptions are already unit-profitable,* so in the worst case OpenAI could just end the free version.

*According to a friend of mine at OpenAI, so it’s possible his numbers are the incorrect ones, but I’m going to assign higher confidence to his understanding than to mine or Doctorow’s.


One can only hope. Right now the trend seems to be going in the wrong direction, though — the internet seems more and more flooded with junk, to the point where people talk about adding before:2023 to search queries in order to improve the results, or adding “reddit” to a query to get something (hopefully) written by a human.

(The biggest weakness of Doctorow’s article is that he doesn’t run any numbers on the costs, it’s purely qualitative discussion of potential applications. I found it hard to find profitability estimates for different LLM applications by trustworthy third parties, though. The closest I could find was this anonymous blog post, which is fairly optimistic.)


Fully agree with this, with regard to code fixing / developing!

AI is really an amazing tool as a “second pair of eyes”: it spots the menial, silly bugs, and in some cases the real edge-case ones requiring domain knowledge. I have also found it really amazing for writing small benchmarkable code to test approaches — it generates in a minute what would take me 15 minutes to write, so that is a good speed-up.

I think AI will keep being amazing for coding, but that it will lose its uses in other industries. My gut feeling is that it is overhyped, but that it does have very high peaks, such as image recognition for vehicles etc.

Going to be fun to see what happens!

A friend with a Tesla with the self-driving option, who is on the beta software release cycle, has been able to feel the difference in how his car drives with each release, and it has progressively improved. I am using my language a bit loosely here; he still needs to keep his hands on the wheel. He mentioned that the current software is over 300,000 lines of code, while the new software is an LLM with 3,000 lines of code. Last I checked they have not yet released that software to him. This is at least third-hand information, so take it as such.

I think the bigger problem is that assuming costs are fixed (or increasing) is likely very wrong. The cost of the state of the art is increasing, but answer quality per FLOP is improving quickly due to better architectures — so the cost of delivering a given level of answer quality is falling.