However, as a contrarian thought experiment, here I will posit an argument for why the bubble will not pop. Why will the bubble never pop? A simple argument: AI (specifically LLMs) in their current forms (without need for any future improvements) are the single greatest tool in the history of human civilization for surveillance and control of individuals at a massive society level scale. Thus, the bubble will never pop because governments and big tech will continue to operate AI, even at a seemingly substantial loss, due to the power it provides.
This doesn't rely on any increases in performance, or any changes in how the technology works, and is purely based on the LLMs as they exist today. What makes them the most powerful tool for control? One of their biggest strengths is control of information, especially as enabled by social media. Already, on social media, specialized algorithms are deployed to surveil, manipulate, and control users. LLMs supercharge these capabilities. LLMs can directly interact, engage, and influence users through bot profiles. While existing bot/troll farms have been successful for years ([1], [2], [3]), LLMs enable the large scale production and delivery of propaganda without the engineering requirements of bot farms, and are capable delivering more sophisticated messages. Additionally, as LLMs increasingly become a source of truth, they provide big tech with a means to further centralize narrative control. LLMs can be made to parrot anything, they have no intrinsic proclivity or bias for truthful statements. Thus, once companies with LLMs have monopolized trust (and squashed competing alternatives), they can begin to use this trust to manipulate and influence the public (i.e. the exact same playbook that was run for social media and has influenced politics on a global scale). In fact, even if those who control the LLMs are wholly benevolent, LLMs are still being used as tools of information control (powered by the blind collecting of training data). To be clear, this is not blind speculation, already LLMs are being adopted as tools for surveillance, "suppress[ing] dissenting arguments", and propaganda. The full scale and power of these approaches is certainly underestimated by publicly available information.
Much of the above was also true for existing social media algorithms (albeit not at the same scale), but a key unique property of LLM technology is the individuals' interactions. Many people use these models to serve a variety of needs, such as travel planner, doctor, email writer, therapist, friend, financial planner, etc, etc, etc. This is both a massive amount of information that would otherwise be inaccessible, and a potent tool for influence. Already, LLMs are influencing people in a variety of powerful and unprecedented ways. There is the obvious case of "AI psychosis", in which AI taps into users delusions and causes them to spiral, which has been reported on at length (e.g. [4], [5]). More relevant though, is the recent trend of LLMs influencing particularly vulnerable people. While these current cases perhaps stem more from negligence than malice, they highlight the essential point about the capability of LLMs for influencing mental states and physical choices. Chatbots can be deployed to guide users to any action or any conclusion or any mental state (certainly they are not omnipotent mind control machines, but they wield substantial influence as people grow to depend on them). Imagine a therapist who is available 24/7, 365 days a year, ready to answer and help you through any crises, through any feeling, through any breakup. That is normally where the VC pitch ends, but the key element is that this therapist is not a real therapist, it is an LLM deployed by a company that doesn't share your goals but seeks to profit off of you. Through a chat window, users are (willingly) dumping unbelievable amounts of personal data. Never before have peoples' feelings, desires, plans, travel ideas, emails, health documents, legal documents, and more, been compiled so neatly. But this gold mine doesn't stop there. Not only do companies and governments have data that would otherwise be difficult to come by (regardless of the extent they are currently taking advantage of this data), they can directly use it not just analyze for large scale patterns, but influence every single individual in a unique and personalized manner. No longer does big tech need crude algorithms to take advantage of vulnerable teenage girls, or rely on huge engineering efforts to deploy spyware to surveil users encrypted chats, or exploit adware to collect, log, and sell peoples' personal health information. Now people simply ask the AI to help them when they are feeling vulnerable, they have an AI browser or OS that can simply see any encrypted messages, they upload their doctors notes to chatbots to ask questions about them.
Probably the "best" case is that the result of all this control and manipulation is aggressive and invasive advertising (those who have read more science fiction books can probably imagine more creatively unpleasant use cases). But these ads will likely be exceedingly effective. While ads on most websites are easy to distinguish, the continual stream of text of a chatbot lends itself brilliantly to ads. If you ask an LLM, "what is the best printer?" you expect it to merely tell you an answer from an opaque computing process, and as such being served an ad in this format is all but impossible to determine.
All that being said, the market can remain irrational longer than you can remain solvent, so that's why I only trade 0DTE SPY options.
Here's some more links that I didn't figure out how to include in the text.
How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study
A.I. as normal technology (derogatory)
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
The Artificiality of Alignment
I Am An AI Hater
Changelog
- October 28, 2025: Published initial version