### Recession indicator using unemployment. I take unemployment data
### (UNRATE) from FRED, use an STL decomposition to split it into
### Trend/Seasonality/Remainder, and look at d(Trend)/dt, the
### "velocity" of unemployment.
###
### For this indicator, the important feature *isn't* the peak,
### but rather the point where unemployment velocity turns positive.
### When the velocity passes through 0, that is a fairly reliable sign
### that a recession is coming a few months down the road. In the
### entire UNRATE dataset, there are only three "head fakes", which
### can be mitigated by waiting for d(UNRATE)/dt to rise above a threshold.
###
### I use stl() because I often test this script against county-level
### unemployment figures, which are not seasonally adjusted. For
### national unemployment, it would likely be sufficient to use a
### simple moving average to extract the trend instead.
###
### The signal isn't nearly as clean as the Sahm Rule, nor does it give
### an actual probability. But it is a *leading* indicator rather than
### a lagging one.
###
### written by Mathew Binkley <Mathew.Binkley@Vanderbilt.edu>

### Set your FRED API key here. You may request an API key at:
### https://research.stlouisfed.org/useraccount/apikeys
api_key_fred <- "WHATEVER_YOUR_FRED_API_KEY_IS"
####################################################################
### Load necessary R packages, set the FRED API key, set start/end
### dates for the graph, and add a data frame with recession dates
### so we can insert recession bars in our graph.
####################################################################

### We need the following packages for this example.
packages <- c("fredr", "lubridate", "ggplot2", "forecast",
              "ggthemes", "tsibble", "dplyr", "magrittr", "broom", "scales")

### Install packages if needed, then load them quietly
new_packages <- packages[!(packages %in% installed.packages()[, "Package"])]
if (length(new_packages)) install.packages(new_packages, quiet = TRUE)

invisible(lapply(packages, library,
                 quietly = TRUE,
                 character.only = TRUE,
                 warn.conflicts = FALSE))

fredr_set_key(api_key_fred)
### If you want the full range of the data...
date_start <- as.Date("1948-01-01")
date_end   <- as.Date(now())

### Or you can specify a narrower window of time.
# date_start <- as.Date("1961-01-01")
# date_end   <- as.Date("1969-01-01")
####################################################################
### Fetch unemployment data from FRED
####################################################################
target <- c("UNRATE", "US")
# target <- c("TNUR", "Tennessee")
# target <- c("TNCHEA1URN", "Cheatham County")

series <- target[1]
name   <- target[2]

data <- as_tsibble(fredr(series_id = series, frequency = "m",
                         observation_start = date_start,
                         observation_end = date_end), index = "date")

date   <- data %>% pull("date")
values <- data %>% pull("value")
# Decompose the data into trend/seasonal/remainder
values_ts <- ts(values, frequency = 12)

# Use stl(). This appears to be the method FRED uses to
# deseasonalize their data.
values_decomposed <- stl(values_ts, s.window = "periodic")
seasonal  <- values_decomposed$time.series[, 1]
trend     <- values_decomposed$time.series[, 2]
remainder <- values_decomposed$time.series[, 3]

# Alternatively, use mstl(). It is better at extracting more complex
# seasonal variations and leaving a more accurate trend.
# values_decomposed <- as_tibble(mstl(values_ts))
# seasonal  <- values_decomposed %>% pull("Seasonal12")
# trend     <- values_decomposed %>% pull("Trend")
# remainder <- values_decomposed %>% pull("Remainder")
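# As noted in the header comments, for seasonally adjusted national data
# a simple moving average would likely suffice to extract the trend. A
# minimal sketch using forecast::ma() (the 12-month window below is an
# assumption, not a tuned value; ma() returns NA at the series edges):
# trend <- ma(values_ts, order = 12)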
unemployment <- trend
####################################################################
### Calculations to derive unemployment "velocity"
####################################################################
diff_trend <- diff(unemployment)
diff_date  <- tail(date, n = length(diff_trend))

### Unemployment is reported in units of 0.1%, i.e. unemployment is 3.5%,
### 3.4%, etc. As a consequence, that quantization introduces
### "stairstepping" when taking the derivative. As a first pass, I'm using
### Friedman's SuperSmoother to eliminate it. There's almost certainly a
### more mathematically correct way to do this, but damn it Jim, I'm a
### physicist, not an economist...
###
### Just comment out the two lines below if you want to use raw data
### instead of the smoothed data. It works the same and arrives at similar
### conclusions, but the unfiltered data yields slightly uglier graphs.
tmp <- supsmu(decimal_date(diff_date), diff_trend, span = 3 / length(diff_trend))
diff_trend <- tmp$y

velocity_df <- data.frame(diff_date, diff_trend)
####################################################################
### Find locations where the unemployment velocity line rises
### through zero
####################################################################
axis_crossing_locations <- function(date, value) {
  # Set a threshold to filter out minor incursions above zero.
  # This can improve the accuracy of the indicator at the cost
  # of reducing the time between the predictor triggering and the
  # recession date. As my original reason for writing this was
  # trying to make sure I get my money out of the market before
  # the stock market starts sliding downhill, I personally use
  # a threshold of 0 and take a chance that it could be wrong.
  # Anyone more concerned with accuracy should use a higher
  # threshold.
  # threshold <- 0

  # The threshold necessary to eliminate the three incursions
  # above zero
  threshold <- 0.021

  # This threshold is because of a local minimum in late 2002 that starts
  # triggering the crossing algorithm due to our threshold for eliminating
  # the three incursions... <sigh> Using the smoothed dequantized data
  # eliminates it, but I want a more robust method of handling it and any
  # others that may come along, so I'll write a smarter axis crossing
  # function when I have time.
  # threshold <- 0.036

  # Shift the value by the threshold
  adj_value <- value - threshold

  # See if the sign changes before/after, which indicates the series
  # has passed through the threshold
  adj_sign <- sign(adj_value)

  # Compute the change in sign between successive months
  diff_adj_sign <- diff(adj_sign)

  # A change of +2 picks up *growing* unemployment. To pick up
  # *falling* unemployment, use -2. To find both, use
  # abs(diff_adj_sign). We want to add one month to the
  # date this gives us, thus the "%m+% months(1)" bit.
  locs <- date[diff_adj_sign == 2] %m+% months(1)

  locs
}
axiscrossings <- as.Date(axis_crossing_locations(diff_date, diff_trend))
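# A quick sanity check of the crossing detector on synthetic data
# (hypothetical values, for illustration only): the series rises from
# below to above the 0.021 threshold exactly once, so exactly one
# crossing date should come back.
# demo_dates  <- seq(as.Date("2000-01-01"), by = "month", length.out = 5)
# demo_values <- c(-0.5, -0.1, 0.1, 0.5, 0.3)
# axis_crossing_locations(demo_dates, demo_values)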
####################################################################
### Generate recession bars for our graph
####################################################################

### A data.frame with the start/stop dates of recessions
recessions_df <- read.table(textConnection(
"peak, trough
1948-11-01, 1949-10-01
1953-07-01, 1954-05-01
1957-08-01, 1958-04-01
1960-04-01, 1961-02-01
1969-12-01, 1970-11-01
1973-11-01, 1975-03-01
1980-01-01, 1980-07-01
1981-07-01, 1982-11-01
1990-07-01, 1991-03-01
2001-03-01, 2001-11-01
2007-12-01, 2009-06-01"),
sep = ",", colClasses = c("Date", "Date"), header = TRUE)

recessions_trim <- subset(recessions_df, peak >= min(diff_date) &
                                         trough <= max(diff_date))

### Add recession bars if our start/end dates contain one.
if (length(recessions_trim$peak) > 0) {
  recession_bars <- geom_rect(data = recessions_trim, inherit.aes = FALSE,
                              fill = "darkgray", alpha = 0.25,
                              aes(xmin = peak, xmax = trough,
                                  ymin = -Inf, ymax = +Inf))
} else {
  recession_bars <- geom_blank()
}
####################################################################
### Graph: Trend Unemployment Velocity
####################################################################
c1 <- paste0("U.S. Bureau of Labor Statistics, Unemployment Rate in ",
             name, " [", series, "]\n")
c2 <- "Retrieved from FRED, Federal Reserve Bank of St. Louis;\n"
c3 <- paste0("https://fred.stlouisfed.org/series/", series, "\n")
c4 <- paste("Data retrieved on", format(Sys.time(), "%B %d, %Y at %I:%M %p %Z"))

s1 <- "Indicator shows imminent recession when velocity rises above 0 (note dashed lines)\n"
s2 <- "The farther above/below 0, the faster unemployment is growing/falling\n"
s3 <- "Recessions marked with vertical bars\n"

title    <- paste(name, "Trend Unemployment Rate Velocity")
subtitle <- paste0(s1, s2, s3)
xlab     <- "Year"
ylab     <- "Percent/Month"
caption  <- paste0(c1, c2, c3, c4)

p <- ggplot(velocity_df, aes(x = diff_date, y = diff_trend)) +
  ### Plot unemployment velocity
  geom_line(size = 1.3, color = "red3") +

  ### Add our recession bars (if any)
  recession_bars +

  ### Add a line at velocity = 0. Above this line, unemployment is rising,
  ### and the farther above the line, the faster it rises. Similarly, below
  ### the line unemployment is falling, and the further below the line, the
  ### faster it falls.
  geom_hline(yintercept = 0, size = 1) +

  ### Draw the recession indicators where the unemployment velocity has
  ### grown beyond zero
  geom_vline(xintercept = axiscrossings, linetype = "dashed") +

  ### Misc graph stuff
  theme_economist() +
  scale_x_date(breaks = pretty_breaks(10), limits = c(date_start, date_end)) +
  labs(title = title, subtitle = subtitle, caption = caption,
       x = xlab, y = ylab)

print(p)
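### For comparison, the Sahm Rule mentioned in the header can be sketched
### roughly as below: it triggers when the 3-month moving average of the
### unemployment rate rises 0.50 points or more above its minimum over the
### preceding 12 months. This is a minimal sketch on the raw UNRATE values
### fetched above, not FRED's official real-time Sahm series.
# u3m <- stats::filter(values, rep(1 / 3, 3), sides = 1)  # 3-month average
# low12 <- sapply(seq_along(u3m), function(i)
#   min(u3m[max(1, i - 11):i], na.rm = TRUE))             # trailing 12-month low
# sahm_triggered <- (u3m - low12) >= 0.5                  # TRUE when the rule fires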