# A geometric interpretation of the covariance matrix

In this article, we provide an intuitive, geometric interpretation of the covariance matrix by exploring the relation between linear transformations and the resulting data covariance.

Most textbooks explain the shape of data based on the concept of covariance matrices.

But since the data is not axis-aligned, these values are no longer the same, as shown in figure 5.

By comparing figure 5 with figure 4, it becomes clear that the eigenvalues represent the variance of the data along the eigenvector directions, whereas the variance components of the covariance matrix represent the spread along the axes.
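This relationship can be checked numerically. The sketch below uses a synthetic correlated Gaussian data set (the mean, covariance, and sample size are illustrative assumptions, not values from the article): the eigenvalues of the sample covariance matrix match the variance of the data projected onto the corresponding eigenvectors, while the diagonal entries give the spread along the axes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2D data set: correlated Gaussian samples (assumed for this sketch).
X = rng.multivariate_normal(mean=[0, 0], cov=[[3.0, 1.5], [1.5, 2.0]], size=50_000)

# Sample covariance matrix; its diagonal holds the variances along the x- and y-axes.
C = np.cov(X, rowvar=False)

# Eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(C)

# Variance of the data projected onto each eigenvector (same ddof as np.cov).
proj_var = np.var(X @ eigvecs, axis=0, ddof=1)

# The eigenvalues equal the variances along the eigenvector directions.
print(np.allclose(eigvals, proj_var))
```

The equality is exact up to floating-point error, since the variance of the data projected onto a unit vector v is vᵀCv, which for an eigenvector is its eigenvalue.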

However, the horizontal and vertical spread of the data do not explain the clear diagonal correlation.

Figure 2 clearly shows that, on average, if the x-value of a data point increases, the y-value also increases, resulting in a positive correlation.
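This positive correlation shows up as a positive off-diagonal entry in the covariance matrix. A minimal sketch (the slope and noise level are assumptions made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data where y tends to increase with x (illustrative assumption).
x = rng.normal(size=10_000)
y = 0.8 * x + rng.normal(scale=0.5, size=10_000)

cov_xy = np.cov(x, y)[0, 1]        # off-diagonal entry of the 2x2 covariance matrix
corr_xy = np.corrcoef(x, y)[0, 1]  # covariance normalised to the range [-1, 1]

# Both are positive for this data, capturing the diagonal trend.
print(cov_xy > 0, corr_xy > 0)
```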

Similarly, a 3x3 covariance matrix is used to capture the spread of three-dimensional data, and an NxN covariance matrix captures the spread of N-dimensional data.
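For example, with three-dimensional data the sample covariance matrix is 3x3 and symmetric (the data set below is an arbitrary toy example):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy three-dimensional data set (assumed for illustration).
X3 = rng.normal(size=(1000, 3))

# One variance per dimension on the diagonal, one covariance per pair off it.
C3 = np.cov(X3, rowvar=False)

print(C3.shape)  # (3, 3)
```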

Figure 3 illustrates how the overall shape of the data defines the covariance matrix. In the next section, we will discuss how the covariance matrix can be interpreted as a linear operator that transforms white data into the data we observed.
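The idea can be previewed numerically. In the sketch below (the particular rotation angle and scaling factors are assumptions chosen for illustration), white data with identity covariance is scaled and rotated by a matrix T, and the sample covariance of the transformed data approximates TTᵀ:

```python
import numpy as np

rng = np.random.default_rng(2)

# White data: zero mean, unit variance, no correlation (identity covariance).
white = rng.normal(size=(2, 100_000))

# An arbitrary linear transformation: scale, then rotate (illustrative choice).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 0.5])
T = R @ S

observed = T @ white

# The sample covariance of the transformed data approximates T @ T.T.
C = np.cov(observed)
print(np.round(C, 2))
print(np.round(T @ T.T, 2))
```

The two printed matrices agree up to sampling noise, which is the sense in which the covariance matrix encodes the linear transformation that generated the observed data from white data.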

Now let's forget about covariance matrices for a moment. Equation (13) holds for each eigenvector-eigenvalue pair of the matrix $\Sigma$. In the 2D case, we obtain two eigenvectors and two eigenvalues. So, if we would like to represent the covariance matrix with a vector and its magnitude, we should simply try to find the vector that points in the direction of the largest spread of the data, and whose magnitude equals the spread (variance) in that direction.

If we define this vector as $\vec{v}$, then the projection of our data $D$ onto this vector is obtained as $\vec{v}^{\intercal} D$, and the variance of the projected data is $\vec{v}^{\intercal} \Sigma \vec{v}$. Since we are looking for the direction that maximizes this variance independently of the length of $\vec{v}$, we can maximize the Rayleigh quotient

$$
R(\vec{v}) = \frac{\vec{v}^{\intercal} \Sigma \vec{v}}{\vec{v}^{\intercal} \vec{v}}.
$$

The maximum of such a Rayleigh quotient is obtained by setting $\vec{v}$ equal to the largest eigenvector of the matrix $\Sigma$.
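This claim is easy to verify numerically. A minimal sketch (the covariance matrix Sigma below is an assumed example, and the random directions are just a sanity check, not a proof): the Rayleigh quotient evaluated at the largest eigenvector equals the largest eigenvalue, and no other direction exceeds it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed example covariance matrix (symmetric positive definite).
Sigma = np.array([[3.0, 1.2],
                  [1.2, 1.0]])

def rayleigh(v, M):
    """Rayleigh quotient v^T M v / v^T v."""
    return (v @ M @ v) / (v @ v)

eigvals, eigvecs = np.linalg.eigh(Sigma)
v_max = eigvecs[:, -1]  # eigenvector belonging to the largest eigenvalue

# At the largest eigenvector, the quotient equals the largest eigenvalue...
print(np.isclose(rayleigh(v_max, Sigma), eigvals[-1]))  # True

# ...and random directions never exceed it.
others = [rayleigh(rng.normal(size=2), Sigma) for _ in range(1000)]
print(max(others) <= eigvals[-1] + 1e-9)  # True
```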