Autocorrelation

Written By
Dan Buckley
Dan Buckley is a US-based trader, consultant, and part-time writer with a background in macroeconomics and mathematical finance. He trades and writes about a variety of asset classes, including equities, fixed income, commodities, currencies, and interest rates. As a writer, his goal is to explain trading and finance concepts at levels of detail that appeal to a range of audiences, from novice traders to those with more experienced backgrounds.

Autocorrelation is the degree of similarity between a given time series and a lagged version of itself.

In other words, it quantifies how similar a time series – e.g., interest rates, stock prices – is to itself at different points in time.

A high autocorrelation means that the time series is highly similar to itself at different points in time, while a low autocorrelation means that the time series is not very similar to itself at different points in time.
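For illustration, here is a minimal Python sketch that computes the lag-k sample autocorrelation directly from the definition. The NumPy helper and the simulated AR(1)-style series are assumptions for the example, not data from this article:

```python
import numpy as np

def autocorr(x, lag=1):
    """Sample autocorrelation of a 1-D array at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Covariance between the series and a lagged copy of itself, scaled by the variance
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Example: a persistent (positively autocorrelated) AR(1)-style series vs. pure noise
rng = np.random.default_rng(0)
noise = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + noise[t]

print(autocorr(y, lag=1))      # high: the series resembles its own recent past
print(autocorr(noise, lag=1))  # near zero: little resemblance to its past
```

A value near 1 at lag 1 indicates strong positive autocorrelation; a value near 0 indicates little relationship between the series and its lagged self.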

 

Types of Autocorrelation

There are two main types of autocorrelation: positive and negative.

Positive Autocorrelation

Positive autocorrelation is a statistical relationship in which a variable’s values at successive time periods tend to be similar.

This relationship occurs when above-average values of the variable tend to be followed by above-average values, and below-average values tend to be followed by below-average values.

The benefit of positive autocorrelation is that it can provide information about how a system or process changes over time.

It can also help predict future values of a variable based on past values.

Positive autocorrelation can also be seen in economic data. For example, when GDP growth decreases, inflation usually follows suit, because inflation (spending in excess of the economy’s output of goods and services) and real output tend to trend together.

However, there are some disadvantages to positive autocorrelation as well. These include the potential for spurious correlations and the difficulty in interpreting results.

Negative Autocorrelation

In statistics, negative autocorrelation is when above-average values of a variable tend to be followed by below-average values, and vice versa.

This is the opposite of positive autocorrelation. Negative autocorrelation is often seen in mean-reverting data, such as short-horizon asset returns where an up move tends to be followed by a down move.

Negative autocorrelation is also used in statistical models to help predict future values. If a series shows negative autocorrelation, its past values still carry information about its future values, which a mean-reversion model can exploit.

 

Video: How Autocorrelation Works

 

Applications of Autocorrelation

Autocorrelation is used in a variety of fields, including finance and financial markets, economics, meteorology, and physics.

In finance, trading, and markets, autocorrelation is used to measure the degree of similarity between stock prices and past stock prices.

In economics, autocorrelation is used to measure the degree of similarity between economic indicators and past economic indicators.

In a field like meteorology, autocorrelation is used to measure the degree of similarity between weather patterns and past weather patterns.

In physics, autocorrelation is used to measure the degree of similarity between physical phenomena and past physical phenomena.

Autocorrelation and time series analysis

Autocorrelation is also used in signal processing and time series analysis.

The autocorrelation function (ACF) is a tool that can be used to measure the degree of autocorrelation in a time series.

The ACF is a plot of the autocorrelation of a time series at different lag values.

The partial autocorrelation function (PACF) is another tool that can be used to measure the degree of autocorrelation in a time series.

The PACF is a plot of the partial autocorrelation of a time series at different lag values.
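As a sketch of how the ACF and PACF are typically computed and plotted in practice (this assumes Python with statsmodels and matplotlib, and uses a simulated random walk in place of real data):

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Simulated series standing in for real data (e.g., prices or an economic indicator)
rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=300))  # a random walk: strongly autocorrelated

print(acf(y, nlags=10))   # autocorrelation at lags 0..10
print(pacf(y, nlags=10))  # partial autocorrelation at lags 0..10

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])   # ACF plot: slow decay suggests strong persistence
plot_pacf(y, lags=20, ax=axes[1])  # PACF plot: large spike at lag 1 for a random walk
plt.show()
```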

 

Video: Time Series Talk: Autocorrelation and Partial Autocorrelation

 

Autocorrelation and Stationarity

A stationary time series is one whose statistical properties (mean, variance, autocorrelation, etc.) do not change over time. Many time series encountered in practice, such as price levels, are not stationary in their raw form, though transformations like differencing can often make them stationary. A time series that is not stationary is simply called non-stationary.

Augmented Dickey-Fuller test (ADF test)

The stationarity of a time series can be tested using a statistical test called the Augmented Dickey-Fuller test (ADF test).

The null hypothesis of the ADF test is that the time series has a unit root, i.e., that it is non-stationary. If the p-value of the ADF test is less than 0.05, then we can reject the null hypothesis and conclude that there is evidence the time series is stationary.

The test statistic is calculated as:

adf = (estimated coefficient on the lagged level of the series) / (standard error of that coefficient)

where:

  • estimated coefficient on the lagged level = the coefficient on the previous period’s value in a regression of the change in the series on that lagged value (plus lagged changes to absorb higher-order autocorrelation)
  • standard error = the standard error of that coefficient estimate

If the test statistic is more negative than the critical value, then the null hypothesis of a unit root is rejected and there is evidence of stationarity in the time series.

The p-value is the probability of obtaining a test statistic at least as extreme as the one observed if the null hypothesis were true. If the p-value is less than 0.05, then the null hypothesis is rejected and there is evidence of stationarity in the time series.
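A minimal sketch of running the ADF test, assuming Python with statsmodels and a simulated random-walk "price" series rather than real market data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=500)) + 100   # random walk: non-stationary
returns = np.diff(prices)                        # first differences: stationary

for name, series in [("prices", prices), ("returns", returns)]:
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
    # p-value < 0.05 -> reject the unit-root null -> evidence of stationarity
```

The price level typically fails to reject the unit-root null, while its first differences usually do.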

Durbin Watson Statistic

Testing for autocorrelation is commonly done with the Durbin-Watson statistic. The Durbin-Watson test examines the null hypothesis that there is no first-order autocorrelation in the residuals of a regression.

The test statistic is calculated as:

dw = (sum of squared differences between successive residuals) / (sum of squared residuals)

where:

  • residuals = the differences between the actual values and the values predicted by the regression
  • successive residuals = each residual compared with the residual from the previous period

The statistic ranges from 0 to 4. If the test statistic is close to 2, then there is no evidence of autocorrelation in the residuals. If the test statistic is well below 2, then there is evidence of positive autocorrelation. If the test statistic is well above 2, then there is evidence of negative autocorrelation.

The p-value is the probability of obtaining a statistic at least as extreme as the one observed if the null hypothesis of no autocorrelation were true. If the p-value is less than 0.05, then the null hypothesis is rejected and there is evidence of autocorrelation in the time series.
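A minimal sketch of the Durbin-Watson statistic applied to regression residuals, assuming Python with statsmodels and simulated data whose errors are positively autocorrelated:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Simulated regression whose errors follow an AR(1) process (positively autocorrelated)
rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

model = sm.OLS(y, sm.add_constant(x)).fit()
dw = durbin_watson(model.resid)
print(dw)  # well below 2 -> evidence of positive autocorrelation in the residuals
```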

Autocorrelation is a measure of how closely related a time series is to itself. A time series is said to be autocorrelated if it is correlated with itself at different time periods.

ACF

The autocorrelation of a time series can be measured using the autocorrelation function (ACF). The ACF measures the linear relationship between a time series and its lagged values.

If the ACF of a time series is close to zero at all lags (beyond lag zero), then the time series is said to be uncorrelated.

If the ACF is positive, then the time series is said to be positively autocorrelated.

If the ACF is negative, then the time series is said to be negatively autocorrelated.

Positively autocorrelated time series are often found in economic data, such as stock prices and GDP. Negatively autocorrelated series are less common and tend to show up in mean-reverting data, such as the period-to-period changes of series that bounce back after a move.

The stationarity of a time series is closely related to its autocorrelation. A series whose autocorrelations are large and decay very slowly (as with a random walk) is often non-stationary, while a series whose autocorrelations die out quickly is more likely to be stationary.

Autocorrelation and the predictability of a time series

Autocorrelation can also be used to gauge the predictability of a time series. If a time series is highly autocorrelated, it is relatively predictable from its own past. If it shows little autocorrelation, its past values say little about its future values.

The predictability of a time series can be measured using the coefficient of determination (R-squared). The R-squared measures the percent of the variance in a time series that is explained by its lagged values.

If the R-squared is close to 1, then the time series is said to be highly predictable. If the R-squared is close to 0, then the time series is said to be unpredictable.
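As a rough sketch of this idea (assuming Python with statsmodels and a simulated persistent series), one can regress a series on its own first lag and read off the R-squared:

```python
import numpy as np
import statsmodels.api as sm

# Simulated persistent series; a real application would use returns, GDP growth, etc.
rng = np.random.default_rng(3)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()

# Regress the series on its own first lag and read off R-squared
lagged = y[:-1]
current = y[1:]
model = sm.OLS(current, sm.add_constant(lagged)).fit()
print(model.rsquared)  # roughly 0.36 here (0.6 squared): moderately predictable from its past
```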

Stationarity and seasonality

The stationarity of a time series can also be affected by seasonality. A time series that is seasonal is more likely to be non-stationary than a time series that is not seasonal. Seasonality is often found in economic data, such as retail sales and housing starts.

The seasonality of a time series can be measured using seasonal indices. A seasonal index compares the typical value in a given period (e.g., a particular month or quarter) with the average across all periods.

If the seasonal indices are far from 1 (for example, well above 1 in some months and well below 1 in others), then the time series is highly seasonal. If they are all close to 1, then the time series is not very seasonal.
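A minimal sketch of computing seasonal indices, assuming Python with pandas and a simulated monthly series with a December bump (a stand-in for something like retail sales):

```python
import numpy as np
import pandas as pd

# Simulated 8 years of monthly data with a December bump
rng = np.random.default_rng(4)
months = np.tile(np.arange(1, 13), 8)            # month-of-year labels, 1..12
values = 100 + rng.normal(scale=5, size=96)
values = values * np.where(months == 12, 1.3, 1.0)
sales = pd.DataFrame({"month": months, "value": values})

# Seasonal index: average value in each calendar month divided by the overall average
seasonal_index = sales.groupby("month")["value"].mean() / sales["value"].mean()
print(seasonal_index.round(2))  # December well above 1, other months close to 1
```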

Autocorrelation and stationarity are important concepts in time series analysis. Autocorrelation can be used to gauge the predictability of a time series, while stationarity tests indicate whether a series needs to be transformed (for example, differenced or deseasonalized) before modeling.

 

Autocorrelation in Portfolio Optimization

Portfolio optimization is about choosing the right mix of assets to maximize returns for a given level of risk. 

Here’s where autocorrelation comes in:

Predicting Future Returns

If returns are autocorrelated, knowing past returns can help predict future returns.

This prediction is important for deciding which assets to include in a portfolio.
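As a quick screen, the lag-1 autocorrelation of daily returns can be computed directly with pandas. The two simulated return series below (one persistent, one noise-like) are assumptions for the example:

```python
import numpy as np
import pandas as pd

# Simulated daily returns for two hypothetical assets (real use would pull price data)
rng = np.random.default_rng(5)
trendy = pd.Series(np.convolve(rng.normal(size=1000), np.ones(3) / 3, mode="same"))
choppy = pd.Series(rng.normal(size=1000))

# Lag-1 autocorrelation of returns: a quick screen for persistence vs. noise
print(trendy.autocorr(lag=1))  # positive: yesterday's return carries some information
print(choppy.autocorr(lag=1))  # near zero: past returns say little about tomorrow
```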

Risk Assessment

Autocorrelation affects how asset prices move together.

If two assets have similar autocorrelation patterns, their return streams may behave similarly over time (e.g., both trending or both mean-reverting).

Understanding this helps in assessing the overall risk of the portfolio.

Diversification Strategy

The goal of diversification is to reduce risk by combining assets that don’t move in the same direction.

If all assets in a portfolio are positively autocorrelated, the portfolio might face higher risk.

Recognizing autocorrelation patterns aids in selecting a diverse set of assets, reducing the risk.

This is generally done by including other asset classes or returns streams in a portfolio.

Timing Decisions

For traders/investors who adjust their portfolios frequently, understanding autocorrelation can help guide the timing of entries and exits.

If an asset’s returns are positively autocorrelated, it might be beneficial to hold onto it after a high return day, expecting the trend to continue.

Nonetheless, it’s important not to assume that past returns will approximate future returns, especially in systems – like financial markets – where the future can be different from the past.

 

Autocorrelation in Technical Analysis

Autocorrelation is popular in technical analysis and is often used to measure the predictability of price movements.

One commonly used, related measure is the linear regression slope. The linear regression slope measures the angle of the trendline fitted to a price chart.

If the linear regression slope is positive, then prices are said to be in an uptrend. If the linear regression slope is negative, then prices are said to be in a downtrend.

The linear regression slope can also be used to measure the strength of a trend. The stronger the trend, the steeper the angle of the trendline.

Another related indicator is the moving average convergence divergence (MACD). The MACD measures the difference between two moving averages.

If the MACD is positive, then prices are said to be in an uptrend. If the MACD is negative, then prices are said to be in a downtrend.

The MACD can also be used to measure the strength of a trend. The stronger the trend, the greater the difference between the two moving averages.
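A minimal sketch of both measures, assuming Python with pandas/NumPy and a simulated upward-drifting price series; the 12/26/9 MACD parameters are the conventional defaults:

```python
import numpy as np
import pandas as pd

# Simulated upward-drifting price series (stand-in for a real price history)
rng = np.random.default_rng(6)
price = pd.Series(100 + np.cumsum(0.1 + rng.normal(scale=1.0, size=300)))

# Linear regression slope of price against time: positive -> uptrend
t = np.arange(len(price))
slope, intercept = np.polyfit(t, price.values, 1)
print(slope)

# MACD: difference between a 12-period and a 26-period exponential moving average,
# with a 9-period EMA of the MACD line used as the signal line
ema12 = price.ewm(span=12, adjust=False).mean()
ema26 = price.ewm(span=26, adjust=False).mean()
macd = ema12 - ema26
signal = macd.ewm(span=9, adjust=False).mean()
print(macd.iloc[-1], signal.iloc[-1])  # positive MACD suggests an uptrend
```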

 

FAQs – Autocorrelation

What is autocorrelation?

Autocorrelation is the degree of correlation between a time series and its lagged values.

Autocorrelation in finance refers to the relationship between an asset’s past and present returns.

When we say an asset’s returns are autocorrelated, it means the returns are linked over time.

For example, if a stock has a high return today, autocorrelation might suggest it could have a similarly high or low return tomorrow, based on its past behavior.

What is the Durbin Watson statistic?

The Durbin Watson statistic tests for autocorrelation in a time series.

The test statistic is calculated as the sum of squared differences between successive residuals divided by the sum of squared residuals.

What is stationarity?

Stationarity is a statistical property of a time series that indicates that the mean, variance, and autocovariance are constant over time.

What is the augmented Dickey Fuller test?

The augmented Dickey Fuller test tests for stationarity in a time series.

The test statistic is the t-ratio of the coefficient on the lagged level of the series in a regression of the series’ changes on that lagged level (and lagged changes).

What is the p-value?

The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true.

If the p-value is less than 0.05, then the null hypothesis is rejected and there is evidence of autocorrelation in the time series.

Why is autocorrelation important in technical analysis?

Technical analysis is a method of predicting price movements through the study of past price data.

Autocorrelation is important in technical analysis because it measures the predictability of those price movements.

What is the linear regression slope?

The linear regression slope is a trend measure, related to autocorrelation, that captures the angle of the trendline fitted to a price chart.

If the linear regression slope is positive, then prices are said to be in an uptrend. If the linear regression slope is negative, then prices are said to be in a downtrend.

 

Summary – Autocorrelation

Autocorrelation and stationarity are important concepts in time series analysis.

Autocorrelation can be used to gauge the predictability of a time series, and stationarity tests indicate whether a series needs to be transformed (for example, differenced or deseasonalized) before modeling.

Autocorrelation and stationarity are commonly tested with the Durbin-Watson statistic and the augmented Dickey-Fuller test, respectively.

Autocorrelation is also popular in technical analysis and is often used to measure the strength of a trend.