Marriott Marquis, Grand Ballroom 2
Hosted By:
American Economic Association
Natural Language Processing and Its Application to Macroeconomics and Macro-Finance
Paper Session
Saturday, Jan. 4, 2020 2:30 PM - 4:30 PM (PDT)
- Chair: Nellie Liang, Brookings Institution
Financial Stability and the Fed: Evidence from Congressional Hearings
Abstract
This paper retraces how financial stability interacted with U.S. monetary policy before and during the Great Recession. Using text-mining techniques, we construct indicators for financial stability sentiment expressed during testimonies of four Federal Reserve Chairs at Congressional hearings. Including these text-based measures adds explanatory power to Taylor-rule models. In particular, negative financial stability sentiment coincided with a more accommodative monetary policy stance than implied by standard Taylor-rule factors, even in the decades before the Great Recession. These findings are consistent with a preference for monetary policy reacting to financial instability rather than acting pre-emptively against a perceived build-up of risks.
Identifying Financial Crises Using Textual Data
Abstract
We use a variety of machine learning techniques on multiple sources of textual data to identify and predict financial crises. One of the challenges in financial crisis management is determining whether a country is in a crisis, and if so what type of crisis, especially in real time. The onset of a crisis and its severity also have implications for real economic activity and, hence, can be a valuable input into macroprudential, monetary, and fiscal policy. The academic literature and the policy realm rely mostly on expert judgment to determine crises. Consequently, crises and the build-up phases of vulnerabilities are usually identified only with hindsight. Although we can identify and forecast a good portion of crises of various degrees with traditional econometric techniques using readily available market and flow-of-funds data, we find that textual data helps reduce false positives and false negatives in out-of-sample testing of such models, especially for the more severe crises.
Making Text Count: Economic Forecasting Using Newspaper Text
Abstract
We consider how best to extract timely signals from newspaper text and use them to forecast macroeconomic variables, drawing on three popular UK newspapers that span the political spectrum. We find that newspaper text can improve economic forecasts in both absolute and marginal terms. We introduce a powerful new method of making text count in forecasts that combines counts of terms with sophisticated supervised machine learning techniques. This method improves forecasts of macroeconomic variables including GDP, CPI, and unemployment, including relative to existing text-based methods. Forecast improvements occur when it matters most, during periods of stress. While we find that simple metrics go a long way in extracting signal, supervised machine learning methods go further and are likely to be transferable to other text analysis problems.
Discussant(s)
- Martin Cihak, International Monetary Fund
- Jonathan Rose, Federal Reserve Bank of Chicago
- Caspar Siegert, Bank of England
- Leland Crane, Federal Reserve Board
JEL Classifications
- E1 - General Aggregative Models
- C5 - Econometric Modeling