Answer:
The regression model assumes the errors are normally distributed.
Step-by-step explanation:
Assume that we have n observations of a dependent variable Y, given by [tex] Y_1, Y_2,...,Y_n[/tex]
And for each observation of Y we have an independent variable X, given by [tex] X_1, X_2,...,X_n[/tex]
We can write a linear model this way:
[tex] Y = X \beta +\epsilon [/tex]
Where [tex]\epsilon_{n\times 1}[/tex] is an n×1 vector of error random variables, and for this case we can find the error term like this:
[tex] \epsilon = Y -X\beta[/tex]
The expected value is [tex] E(\epsilon) = 0[/tex], a vector of zeros, and the covariance matrix is given by:
[tex] Cov (\epsilon) = \sigma^2 I[/tex]
So we can see that the error terms do not have a variance of 0, and we can't say the errors have an increasing mean (their mean is the zero vector). Another property is that the errors are assumed to be independent and to follow a normal distribution, so the best option for this case would be:
The regression model assumes the errors are normally distributed.
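As a minimal sketch of these assumptions (the variable names, sample size, and parameter values below are illustrative, not from the question), we can simulate Y = Xβ + ε with normally distributed errors, fit the model by least squares, and check that the residuals have mean near 0 and variance near σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Design matrix with an intercept column and one predictor (assumed setup)
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta = np.array([2.0, 0.5])            # true coefficients (chosen for the sketch)
sigma = 1.5
eps = rng.normal(0, sigma, n)          # errors: independent, N(0, sigma^2)
Y = X @ beta + eps

# Least-squares estimate of beta, then the residuals Y - X*beta_hat
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ beta_hat

print(residuals.mean())                # effectively 0 (intercept absorbs the mean)
print(residuals.var(ddof=2))           # close to sigma^2 = 2.25
```

The residual mean is zero up to machine precision because the model includes an intercept, matching E(ε) = 0, while the residual variance estimates the single σ² on the diagonal of Cov(ε) = σ²I.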