binkleym

Binkley Rule for recessions using unemployment velocity

Dec 4th, 2019
### Recession indicator using unemployment.   I take unemployment data
### (UNRATE) from FRED, use an STL decomposition to decompose it into
### Trend/Seasonality/Remainder, and look at d(Trend)/dt, the
### "velocity" of unemployment.
###
### For this indicator, the important feature *isn't* the peak,
### but rather the point where unemployment velocity turns positive.
### When the velocity passes through 0, that is a fairly reliable sign
### that a recession is coming a few months down the road.  In the
### entire UNRATE dataset there are only three "head fakes", which
### can be mitigated by waiting for d(UNRATE)/dt to rise above a threshold.
###
### I use stl() because I often test this script against county-level
### unemployment figures, which are not seasonally adjusted.   For
### national unemployment, it would likely be sufficient to use a
### simple moving average to extract the trend instead.
###
### The signal isn't nearly as clean as the Sahm Rule, nor does it give
### an actual probability.   But it is a *leading* indicator rather than
### a lagging one.
###
### written by Mathew Binkley <Mathew.Binkley@Vanderbilt.edu>

### Set your FRED API key here.  You may request an API key at:
### https://research.stlouisfed.org/useraccount/apikeys
api_key_fred <- "WHATEVER_YOUR_FRED_API_KEY_IS"

####################################################################
### Load necessary R packages, set the FRED API key, set start/end
### dates for the graph, and add a data frame with recession dates
### so we can insert recession bars in our graph.
####################################################################

### We need the following packages for this example.
packages <- c("fredr", "lubridate", "ggplot2", "forecast",
              "ggthemes", "tsibble", "dplyr", "magrittr", "broom", "scales")

### Install packages if needed, then load them quietly
new_packages <- packages[!(packages %in% installed.packages()[, "Package"])]
if (length(new_packages)) install.packages(new_packages, quiet = TRUE)
invisible(lapply(packages, library,
                 quietly = TRUE,
                 character.only = TRUE,
                 warn.conflicts = FALSE))

fredr_set_key(api_key_fred)

### If you want the full range of the data...
date_start <- as.Date("1948-01-01")
date_end   <- as.Date(now())

### Or you can specify a narrower window of time.
#date_start <- as.Date("1961-01-01")
#date_end   <- as.Date("1969-01-01")

####################################################################
### Fetch unemployment data from FRED
####################################################################

### Load unemployment from FRED
target <- c("UNRATE",     "US")
# target <- c("TNUR",       "Tennessee")
# target <- c("TNCHEA1URN", "Cheatham County")

series <- target[1]
name   <- target[2]

data <- as_tsibble(fredr(series_id = series, frequency = "m",
                         observation_start = date_start,
                         observation_end   = date_end), index = "date")

date   <- data %>% pull("date")
values <- data %>% pull("value")

# Decompose the data into trend/seasonal/remainder
values_ts <- ts(values, frequency = 12)

# Use stl().   This appears to be the method FRED uses to
# deseasonalize their data.
values_decomposed <- stl(values_ts, s.window = "periodic")
seasonal  <- values_decomposed$time.series[, 1]
trend     <- values_decomposed$time.series[, 2]
remainder <- values_decomposed$time.series[, 3]

# Use mstl().  This is better at extracting more complex
#              seasonal variations and leaving a more accurate trend.
# values_decomposed <- as_tibble(mstl(values_ts))
# seasonal  <- values_decomposed %>% pull("Seasonal12")
# trend     <- values_decomposed %>% pull("Trend")
# remainder <- values_decomposed %>% pull("Remainder")
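
### A third, hypothetical option (per the note at the top of this script):
### for national, seasonally adjusted data, a simple centered moving
### average from the forecast package would likely suffice to extract
### the trend.
# trend <- forecast::ma(values_ts, order = 12)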

unemployment <- trend

####################################################################
### Calculations to derive unemployment "velocity"
####################################################################
diff_trend <- diff(unemployment)
diff_date  <- tail(date, n = length(diff_trend))

### Unemployment is reported in units of 0.1%: i.e., unemployment is 3.5%, 3.4%,
### etc.   As a consequence, that quantization introduces "stairstepping" when
### taking the derivative.  As a first pass, I'm using Friedman's SuperSmoother
### to eliminate it.  There's almost certainly a more mathematically correct
### way to do this, but damn it Jim, I'm a physicist, not an economist...
### Just comment out the two lines below if you want to use the raw data
### instead.   It works the same and arrives at similar conclusions,
### but the unfiltered data yields slightly uglier graphs.
tmp <- supsmu(decimal_date(diff_date), diff_trend, span = 3 / length(diff_trend))
diff_trend <- tmp$y

velocity_df <- data.frame(diff_date, diff_trend)

####################################################################
### Find locations where the unemployment velocity grows
### through zero
####################################################################
axis_crossing_locations <- function(date, value) {

  # Set a threshold to filter out minor incursions about zero.
  # This can improve the accuracy of the indicator at the cost
  # of reducing the time between the predictor triggering and the
  # recession date.   As my original reason for writing this was
  # trying to make sure I get my money out of the market before
  # the stock market starts sliding downhill, I personally use
  # a threshold of 0 and take a chance that it could be wrong.
  # Anyone more concerned with accuracy should use a higher
  # threshold.
  # threshold <- 0

  # The threshold necessary to eliminate the three incursions
  # above zero
  threshold <- 0.021

  # This threshold is needed because of a local minimum in late 2002 that
  # starts triggering the crossing algorithm once we use the threshold for
  # eliminating the three incursions... <sigh>  Using the smoothed,
  # dequantized data eliminates it, but I want a more robust method of
  # handling it and any others that may come along, so I'll write a smarter
  # axis-crossing function when I have time.
  # threshold <- 0.036

  # Shift the value by the threshold
  adj_value <- value - threshold

  # See if the sign changes before/after, which indicates the series
  # has passed through zero
  value_sign <- sign(value)
  adj_sign   <- sign(adj_value)

  # Compute the time derivative of the sign changes
  diff_sign     <- diff(value_sign)
  diff_adj_sign <- diff(adj_sign)

  # A value of +2 picks up *growing* unemployment.   To pick up
  # *falling* unemployment, use -2.   To find both, use
  # abs(diff_sign).   We want to add one month to the
  # date this gives us, thus the "%m+% months(1)" bit.
  locs <- date[diff_adj_sign == 2] %m+% months(1)
  locs
}

axiscrossings <- as.Date(axis_crossing_locations(diff_date, diff_trend))
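
### Quick sanity check of the crossing logic on toy data.  The values
### below are made up purely for illustration: a velocity series that
### rises through the threshold should yield a single crossing date.
# toy_dates  <- seq(as.Date("2000-01-01"), by = "month", length.out = 6)
# toy_values <- c(-0.10, -0.05, -0.01, 0.05, 0.10, 0.12)
# axis_crossing_locations(toy_dates, toy_values)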

####################################################################
### Generate recession bars for our graph
####################################################################

### A data.frame with the start/stop dates of recessions
recessions_df <- read.table(textConnection(
  "peak, trough
1948-11-01, 1949-10-01
1953-07-01, 1954-05-01
1957-08-01, 1958-04-01
1960-04-01, 1961-02-01
1969-12-01, 1970-11-01
1973-11-01, 1975-03-01
1980-01-01, 1980-07-01
1981-07-01, 1982-11-01
1990-07-01, 1991-03-01
2001-03-01, 2001-11-01
2007-12-01, 2009-06-01"),
  sep = ",", colClasses = c("Date", "Date"), header = TRUE)

recessions_trim <- subset(recessions_df, peak   >= min(diff_date) &
                                         trough <= max(diff_date))

### Add recession bars if our start/end dates contain one.
if (length(recessions_trim$peak) > 0) {
  recession_bars <- geom_rect(data = recessions_trim, inherit.aes = FALSE,
                              fill = "darkgray", alpha = 0.25,
                              aes(xmin = peak, xmax = trough,
                                  ymin = -Inf, ymax = +Inf))
} else {
  recession_bars <- geom_blank()
}

####################################################################
### Graph:  Trend Unemployment Velocity
####################################################################
c1 <- paste0("U.S. Bureau of Labor Statistics, Unemployment Rate in ",
             name, " [", series, "]\n")
c2 <- "Retrieved from FRED, Federal Reserve Bank of St. Louis;\n"
c3 <- paste0("https://fred.stlouisfed.org/series/", series, "\n")
c4 <- paste("Data retrieved on", format(Sys.time(), "%B %d, %Y at %I:%M %p %Z"))

s1 <- "Indicator shows imminent recession when velocity rises above 0 (note dashed lines)\n"
s2 <- "The farther above/below 0, the faster unemployment is growing/falling\n"
s3 <- "Recessions marked with vertical bars\n"

title    <- paste(name, "Trend Unemployment Rate Velocity")
subtitle <- paste(s1, s2, s3, sep = "")
xlab     <- "Year"
ylab     <- "Percent/Month"
caption  <- paste(c1, c2, c3, c4, sep = "")

p <- ggplot(velocity_df, aes(x = diff_date, y = diff_trend)) +

  ### Plot unemployment velocity
  geom_line(size = 1.3, color = "red3") +

  ### Add our recession bars (if any)
  recession_bars +

  ### Add a line at velocity = 0.  Above this line, unemployment is rising, and
  ### the farther above the line, the faster it rises.   Similarly, below the
  ### line unemployment is falling, and the further below the line, the faster
  ### it falls.
  geom_hline(yintercept = 0, size = 1) +

  ### Draw the recession indicators where the unemployment velocity
  ### has grown beyond zero
  geom_vline(xintercept = axiscrossings, linetype = "dashed") +

  ### Misc graph stuff
  theme_economist() +
  scale_x_date(breaks = pretty_breaks(10), limits = c(date_start, date_end)) +
  labs(title = title, subtitle = subtitle, caption = caption,
       x = xlab, y = ylab)

print(p)