In light of all the breakthroughs in OpenAI's GPT models, a newly realized capability has surfaced: analyzing the Federal Reserve Chair's speeches for tone and phrasing. Why does it matter? The market is often hyper-sensitive to Jerome Powell's post-FOMC statements, gripping onto each word in search of the direction of monetary policy. One recent speech pushed the Dow down 500 points after Powell's and Yellen's time on the podium. These speeches are essential because markets price in their content almost immediately.
A recent study by Hansen &amp; Kazinnik tests GPT on Fed speeches from 2010-2020. Each sentence is first classified by three human annotators into one of five labels.
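A minimal sketch of what that labeling setup could look like in code: each sentence is sent to a model with a fixed prompt asking for one of five tone labels, and the free-form reply is mapped back onto the label set. The label names and prompt wording here are our assumptions for illustration, not taken from the paper.

```python
# Illustrative five-point tone scale (the exact label names used by
# Hansen & Kazinnik are an assumption on our part).
LABELS = ["dovish", "mostly dovish", "neutral", "mostly hawkish", "hawkish"]

def build_prompt(sentence: str) -> str:
    """Compose a classification prompt for a single FOMC sentence."""
    return (
        "Classify the following FOMC sentence by its monetary-policy tone. "
        f"Answer with exactly one of: {', '.join(LABELS)}.\n\n"
        f"Sentence: {sentence}"
    )

def parse_label(response_text: str) -> str:
    """Map a free-form model reply back onto the label set."""
    reply = response_text.strip().lower()
    # Check longer labels first so "mostly hawkish" isn't matched as "hawkish".
    for label in sorted(LABELS, key=len, reverse=True):
        if label in reply:
            return label
    return "neutral"  # fall back if the reply is off-script
```

The actual API call to the model is omitted; the point is that the task reduces to one prompt per sentence plus a small parser for the reply.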
GPT models were then given the same sentences and asked to assign one of the five labels, and the labels they produced were eerily similar to those of their human counterparts. Not only that: when the researchers ran the data back through ChatGPT and asked why, the conversational AI returned its reasoning, walking them through why a given sentence read as "hawkish" or "dovish." Importantly, GPT-4 performed better than GPT-3; the newer model comes with more capability (no s***).
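The human-vs-GPT comparison above boils down to a simple agreement metric: take the majority vote of the three human annotators per sentence, then count how often the model's label matches it. A self-contained sketch with toy data (the label names and example votes are ours, not the paper's):

```python
from collections import Counter

def consensus(annotations):
    """Majority vote across the three human annotators for one sentence."""
    label, _ = Counter(annotations).most_common(1)[0]
    return label

def agreement_rate(human_votes, model_labels):
    """Fraction of sentences where the model matches the human consensus."""
    matches = sum(
        consensus(votes) == pred
        for votes, pred in zip(human_votes, model_labels)
    )
    return matches / len(model_labels)

# Toy data: three human votes per sentence vs. the model's label.
human_votes = [
    ["hawkish", "hawkish", "mostly hawkish"],
    ["neutral", "neutral", "neutral"],
    ["dovish", "mostly dovish", "dovish"],
    ["mostly hawkish", "mostly hawkish", "hawkish"],
]
model_labels = ["hawkish", "neutral", "dovish", "hawkish"]

print(agreement_rate(human_votes, model_labels))  # 3 of 4 match -> 0.75
```

Raw percent agreement is the simplest possible score; the study itself could just as well report a chance-corrected statistic, so treat this only as the shape of the comparison.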
As the robots get stronger, the anxiety grows. For now, GPTs are not infallible, and they do have an error rate. Can they fully replace the average analyst? We like to think not. Should you be worried about the future, say 5 years from now? 10? 20? It wouldn't be unreasonable.