The ASTA team
alpha = 0.9; gamma = 1; n = 100                      # AR coefficient, regression coefficient, series length
x = as.ts(5*sin(1:n/5))                              # deterministic regressor: a slow sine curve
eps = arima.sim(model = list(ar = alpha), n = n)     # AR(1) noise
y = gamma*x + eps                                    # response: regression on x plus AR(1) noise
ts.plot(x, y, col = 1:2)
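The model is fitted with arima(), passing x through the xreg argument; the call below reproduces the output that follows (storing the result as fit is our own choice, reused in the forecast sketch further down):
fit = arima(y, order = c(1, 0, 0), xreg = x)   # AR(1) errors with x as external regressor
fit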
##
## Call:
## arima(x = y, order = c(1, 0, 0), xreg = x)
##
## Coefficients:
##          ar1  intercept       x
##       0.8069     0.0679  1.0551
## s.e.  0.0569     0.4795  0.1018
##
## sigma^2 estimated as 0.923: log likelihood = -138.41, aic = 284.82
##
## Call:
## arima(x = price, order = c(1, 0, 0), xreg = forecast)
##
## Coefficients:
##          ar1  intercept  forecast
##       0.3886  1715.8412   -0.3053
## s.e.  0.0680    73.2894    0.0271
##
## sigma^2 estimated as 117486: log likelihood = -1364.2, aic = 2736.41
nnew = 20                                                # number of future time points
xnew = lag(as.ts(5*sin(((n+1):(n+nnew))/5)), -n)         # regressor values for the forecast period
ts.plot(x, y, xnew, col = c(1, 2, 1), lty = c(1, 1, 2))
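To forecast y over the new time points, the fitted model can be combined with the new regressor values through predict(). This is only a sketch, assuming the fit from above was stored as fit:
pred = predict(fit, n.ahead = nnew, newxreg = xnew)                # forecasts and standard errors
ts.plot(y, pred$pred, pred$pred + 1.96*pred$se, pred$pred - 1.96*pred$se,
        col = c(2, 4, 4, 4), lty = c(1, 1, 2, 2))                  # forecast with approximate 95% prediction limits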
The cross-correlation function is used for examining the relation between two time series at different time points: \[ \rho_{xy}(t+k,t) = \text{Cor}(X_{t+k},Y_{t}). \]
Values close to \(1\) or \(-1\) indicate that the two time series are strongly (positively or negatively) related when \(X_t\) is shifted by \(k\) time steps relative to \(Y_t\).
Cross-correlation function for the simulated data
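In R the cross-correlation function is estimated with ccf(). The chunk below is a sketch of how the estimated lag printed next might have been obtained; the names dat_lag and estlag are taken from the code further down, but the exact computation is our reconstruction:
cc = ccf(dat_lag[,1], dat_lag[,2])                 # sample cross-correlations at each lag
estlag = cc$lag[which.max(abs(cc$acf))]            # lag with the largest absolute cross-correlation
estlag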
## [1] -5
dat_shifted = ts.intersect(lag(as.ts(dat_lag[,1]), estlag), dat_lag[,2])   # shift series 1 by the estimated lag and keep the overlapping part
ts.plot(dat_shifted[,1], dat_shifted[,2], col = 1:2)
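The aligned series are then analysed with the same regression-with-AR(1)-errors model as before; the call shown in the output below is:
arima(dat_shifted[,2], order = c(1, 0, 0), xreg = dat_shifted[,1])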
##
## Call:
## arima(x = dat_shifted[, 2], order = c(1, 0, 0), xreg = dat_shifted[, 1])
##
## Coefficients:
##          ar1  intercept  dat_shifted[, 1]
##       0.5938    -0.2047            1.0526
## s.e.  0.0820     0.2347            0.0615
##
## sigma^2 estimated as 0.8884: log likelihood = -129.39, aic = 266.79
There are two fundamentally different model classes for time series data: discrete time models and continuous time models.
So far we have only looked at the discrete time case. We will finish today's lecture by looking a bit at the continuous time case, just to give you an idea of this topic.
In this setup we see the underlying \(X_t\) as a continuous function of \(t\) for \(t\) in some interval \([0,T]\).
In principle we imagine that there are infinitely many data points, simply because there are infinitely many time points between 0 and \(T\).
In practice we will always only have finitely many data points.
But we can imagine that the real data actually contain all of these points; we are just not able to measure them (and to store them in a computer).
With a model for all data points we are, through simulation, able to describe the behaviour of the data, also between the observations.
A key example of a process in continuous time is the so-called Wiener process, or Brownian motion.
Here are three simulated realizations (black, blue and red) of this process:
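Such paths can be simulated with the Sim.DiffProc package (its start-up message is shown below); a minimal sketch, where the seed and the number of time steps are our own choices:
library(Sim.DiffProc)
set.seed(4)
B = BM(N = 1000, M = 3, t0 = 0, T = 1)                          # three Brownian motion paths on [0,1]
plot(B, plot.type = "single", col = c("black", "blue", "red"))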
## Package 'Sim.DiffProc', version 4.9
## browseVignettes('Sim.DiffProc') for more informations.
A common way to define a continuous time stochastic process model is through a stochastic differential equation (SDE) which we will turn to shortly, but before doing so we will recall some basic things about ordinary differential equations.
Example: Suppose \(f\) is an unknown differentiable function satisfying the differential equation \[\frac{df(t)}{dt}=-4f(t)\] with initial condition \(f(0)=1\). This equation has the solution \[f(t)=\exp(-4t)\]
With a slightly unusual notation we can rewrite this as \[df(t)=-4\cdot f(t)dt\]
This equation has the following (hopefully intuitive) interpretation: when the time \(t\) is increased by a small amount \(dt\), the value of \(f\) changes by \(-4\cdot f(t)\,dt\).
So when \(t\) is increased, \(f\) is decreased, and the decrease is proportional to the current value of \(f(t)\). That is why \(f\) decreases more and more slowly as \(t\) increases.
We say that the function has a drift towards zero, and this drift is determined by the value of the function.
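To make this concrete, the equation can be stepped forward with a small time step (Euler's method) and compared with the exact solution \(\exp(-4t)\); a minimal sketch, where the step size is our own choice:
h = 0.01                                                        # step size
t = seq(0, 1, by = h)
f_num = numeric(length(t)); f_num[1] = 1                        # initial condition f(0) = 1
for (i in 2:length(t)) f_num[i] = f_num[i-1] - 4*f_num[i-1]*h   # f(t+h) is approximately f(t) - 4*f(t)*h
ts.plot(ts(f_num, start = 0, deltat = h), ts(exp(-4*t), start = 0, deltat = h), col = 1:2)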
It will probably never be true that data behaves exactly like the exponentially decreasing curve on the previous slide.
Instead we will consider a model where some random noise from a Wiener process has been added to the growth rate. Two different simulated realizations (black/blue) can be seen below.
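Such realizations can be generated with snssde1d() from the Sim.DiffProc package loaded above; a minimal sketch, where the initial value \(X_0 = 1\) and the number of time steps are our own assumptions:
mod = snssde1d(drift = expression(-4*x), diffusion = expression(0.1),
               x0 = 1, M = 2, N = 1000, T = 1)                 # two paths of dX_t = -4 X_t dt + 0.1 dB_t
plot(mod, plot.type = "single", col = c("black", "blue"))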
The type of process that is simulated above is described formally by the equation \[dX_t = -4X_t\,dt + 0.1\,dB_t\]
This is called a Stochastic Differential Equation (SDE), and the processes simulated above are called solutions of the stochastic differential equation.
The SDE \(dX_t = -4X_t\,dt + 0.1\,dB_t\) has two terms: the drift term \(-4X_t\,dt\), which pulls the process towards zero, and the noise term \(0.1\,dB_t\), which is driven by the Wiener process.
The intuition behind this notation is very similar to the intuition in the equation \(df(t)=-4\cdot f(t)\,dt\) for an ordinary differential equation. When the time is increased by the small amount \(dt\), the process \(X_t\) is changed by \(-4X_t\,dt\) and, in addition, by how much the process \(0.1B_t\) has increased on the time interval \([t,t+dt]\).
So this process has a drift towards zero, but it is also pushed in a random direction (either up or down) by the Wiener process (more precisely, by the process \(0.1B_t\)).