The importance of diminishing returns
In graduate school, I recall a professor suggesting that the rational expectations revolution would eventually lead to much better models of the macroeconomy. I was skeptical, and in my view, that didn’t happen.
This is not because there is anything wrong with the rational expectations approach to macro, which I strongly support. Rather, I believe that the advances coming out of this theoretical innovation occurred very rapidly. For instance, by the time I had this discussion (around 1979), people like John Taylor and Stanley Fischer had already grafted rational expectations onto sticky wage and price models, which contributed to the New Keynesian revolution. Since that time, macro seems stuck in a rut, apart from some later innovations from the Princeton School related to the zero lower bound issue.
In my view, the most useful applications of a new conceptual approach tend to come quickly in highly competitive fields like economics, science and the arts.
In the past few years, I’ve had a number of interesting conversations with younger people who are involved in the field of artificial intelligence. These people know much more about AI than I do, so I would encourage readers to take the following with more than a grain of salt. During the discussions, I sometimes expressed skepticism about the future pace of improvement in large language models such as ChatGPT. My argument was that there were some pretty severe diminishing returns to exposing LLMs to additional data sets.
Think about a person who reads and understands 10 well-selected books on economics, perhaps a macro and micro principles text, as well as some intermediate and advanced textbooks. If they fully absorbed this material, they would actually know quite a bit of economics. Now have them read 100 more well-chosen textbooks. How much more economics would they actually know? Surely not 10 times as much. Indeed, I doubt they would even know twice as much economics. I suspect the same could be said for other fields like biochemistry or accounting.
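To put a rough number on that intuition, here is a minimal sketch in Python, assuming (purely for illustration) that knowledge grows with the logarithm of the material absorbed; the functional form is my own assumption, not anything measured.

```python
import math

def knowledge(books_read: int) -> float:
    # Toy model: knowledge rises with the log of books absorbed.
    # The log form is an illustrative assumption, not an estimate.
    return math.log(1 + books_read)

k_10 = knowledge(10)     # the original 10 well-selected books
k_110 = knowledge(110)   # those 10 plus 100 more textbooks

print(f"Knowledge after 10 books:  {k_10:.2f}")
print(f"Knowledge after 110 books: {k_110:.2f}")
print(f"Ratio: {k_110 / k_10:.2f}x")  # about 2x, despite 11x the reading
```

Under that toy assumption, multiplying the reading by eleven only roughly doubles the knowledge, which is the flavor of diminishing returns I have in mind.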
This Bloomberg article caught my eye:
OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of powerful AI that outperforms humans. But the model, known internally as Orion, didn’t hit the company’s desired performance. Indeed, Orion fell short when trying to answer coding questions that it hadn’t been trained on. And OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products, three of the leading AI companies are now seeing diminishing returns from their hugely expensive efforts to build newer models.
Please don’t take this as meaning I’m an AI skeptic. I believe the recent advances in LLMs are extremely impressive, and that AI will eventually transform the economy in some profound ways. Rather, my point is that the advance toward some sort of general superintelligence may happen more slowly than some of its proponents expect.
Why might I be wrong? I’m told that artificial intelligence can be boosted by methods other than just exposing the models to ever larger data sets, and that the so-called “data wall” may be surmounted by other methods of boosting intelligence. But if Bloomberg is correct, LLM development is in a bit of a lull due to the force of diminishing returns from having more data.
Is this good news or bad news? It depends on how much weight you put on risks associated with the development of ASI (artificial superintelligence).