Emerging techniques in computer science make it possible to "brain scan" large language models (LLMs), identify the plain-English concepts that guide their reasoning, and steer them while holding other factors constant. We show that this approach can map LLM-generated economic forecasts to concepts such as sentiment, technical analysis, and timing, and quantify each concept's relative importance without degrading forecast performance. We also show that models can be steered to be more or less risk-averse, optimistic, or pessimistic, enabling researchers to correct biases or to simulate them deliberately. The method is transparent, lightweight, and replicable for empirical research in the social sciences.
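To make the steering step concrete, here is a minimal sketch of concept steering via activation addition, assuming a PyTorch forward hook on a small open model (GPT-2 as a stand-in). The layer index, steering strength, and randomly initialized concept vector are all placeholders; in the abstract's setting the concept direction would come from an interpretability method such as a linear probe or a sparse-autoencoder feature, not from random noise.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative setup: GPT-2 stands in for the LLM being steered.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6   # which residual stream to perturb (placeholder choice)
alpha = 4.0     # steering strength; its sign flips e.g. optimism/pessimism

# Placeholder concept direction; a real one would be extracted from the
# model (probe weights, SAE feature, difference of concept-pair means).
concept_vector = torch.randn(model.config.hidden_size)
concept_vector = concept_vector / concept_vector.norm()

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple; the first element is the hidden states.
    hidden = output[0] + alpha * concept_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer)

prompt = "The outlook for next quarter's earnings is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unsteered model
```

Because the hook adds a fixed direction while leaving weights and prompt untouched, everything else is held constant, which is what makes before/after comparisons of the steered forecasts interpretable.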
Artificial intelligence is reshaping financial markets, yet the limits of its rationality remain underexplored. This paper documents information overload in large language models (LLMs) applied to financial analysis. Using two tasks, earnings forecasts from corporate earnings calls and market-reaction predictions from news, we show that predictive accuracy follows an inverted U-shaped pattern in context length: accuracy first improves as context grows, then declines as excessive context degrades performance. Larger LLMs mitigate this effect, shifting the optimal context length upward. Our findings underscore a fundamental limitation of AI-driven finance: more data is not always better, and the right amount of context must be tuned empirically for each task.
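The empirical tuning the abstract calls for amounts to a sweep over candidate context lengths. The sketch below illustrates the procedure only; `predict_accuracy` is a hypothetical placeholder (here returning a synthetic inverted-U with noise) that in practice would run the LLM task at the given context length and score it against realized outcomes.

```python
import numpy as np

def predict_accuracy(context_tokens: int) -> float:
    """Placeholder for evaluating the LLM task at a given context length.
    The synthetic inverted-U below merely mimics the qualitative pattern;
    real use would substitute actual out-of-sample accuracy."""
    rng = np.random.default_rng(context_tokens)
    signal = -(((context_tokens - 4000) / 3000) ** 2)  # synthetic peak near 4k
    return 0.7 + 0.1 * signal + 0.005 * rng.standard_normal()

# Sweep candidate context lengths and pick the empirical optimum.
grid = [500, 1000, 2000, 4000, 8000, 16000, 32000]
scores = {n: predict_accuracy(n) for n in grid}
best = max(scores, key=scores.get)
for n in grid:
    print(f"{n:>6} tokens -> accuracy {scores[n]:.3f}")
print(f"optimal context length: {best} tokens")
```

Under the paper's finding that larger models shift the optimum upward, the sweep would be repeated per model (and per task) rather than tuned once globally.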
Showing Emotions in Academia: What is the Cost and Who Can Afford It?