Factor analysis

SPSS research methods include both factor analysis and principal component analysis. What is the difference between them? And, if possible, please give a more extensive definition of factor analysis.

Don't hesitate to turn to our statisticians. We are proud to offer quality statistical services backed by a 100% money-back guarantee. Regardless of the test and software you choose, you can expect from us:
  • comprehensive task evaluation and quick quote
  • custom approach to your statistical project
  • accurate data analysis and detailed interpretation
  • constant project progress updates
  • free adjustments and timely delivery
Let us take care of your data!

1 comment to Factor analysis

  • emmanuel

    Factor analysis and principal component analysis differ in a number of ways. Some are listed below, with a short code sketch after the list:
    1. In Minitab, you can only input raw data when using Principal Components Analysis. However, you can input raw data, a correlation or covariance matrix, or the loadings from a previous analysis when using Factor Analysis.

    2. In Principal Components Analysis, the components are calculated as linear combinations of the original variables. In Factor Analysis, the original variables are defined as linear combinations of the factors.

    3. In Principal Components Analysis, the goal is to account for as much of the total variance in the variables as possible. The objective in Factor Analysis is to explain the covariances or correlations among the variables.

    4. Use Principal Components Analysis to reduce the data into a smaller number of components. Use Factor Analysis to understand what constructs underlie the data.
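    As a quick illustration of points 2 through 4, here is a minimal Python sketch using scikit-learn's PCA and FactorAnalysis; the synthetic data, seed, and component count are assumptions chosen purely for illustration, not part of the original comment.

        # Contrast PCA and factor analysis on the same synthetic data.
        import numpy as np
        from sklearn.decomposition import PCA, FactorAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))      # 200 observations of 6 variables

        pca = PCA(n_components=2).fit(X)
        fa = FactorAnalysis(n_components=2).fit(X)

        # PCA: components are linear combinations of the variables,
        # chosen to account for as much total variance as possible.
        print(pca.explained_variance_ratio_)

        # FA: the variables are modelled as linear combinations of the
        # factors; the aim is to reproduce the covariances among them.
        print(fa.components_)              # estimated loadings
        print(fa.noise_variance_)          # estimated specific variances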

    The mathematical definition of factor analysis is as follows.

    Suppose we have a set of $p$ observable random variables, $x_1, \dots, x_p$, with means $\mu_1, \dots, \mu_p$.

    Suppose for some unknown constants $l_{ij}$ and $k$ unobserved random variables $F_j$, where $i \in \{1, \dots, p\}$ and $j \in \{1, \dots, k\}$, where $k < p$, we have

    $x_i - \mu_i = l_{i1} F_1 + \cdots + l_{ik} F_k + \varepsilon_i.$

    Here, the $\varepsilon_i$ are independently distributed error terms with zero mean and finite variance, which may not be the same for all $i$. Let $\mathrm{Var}(\varepsilon_i) = \psi_i$, so that we have

    $\mathrm{Cov}(\varepsilon) = \mathrm{Diag}(\psi_1, \dots, \psi_p) = \Psi$ and $\mathrm{E}(\varepsilon) = 0.$

    In matrix terms, we have

    $x - \mu = LF + \varepsilon.$

    If we have $n$ observations, then we will have the dimensions $x_{p \times n}$, $L_{p \times k}$, and $F_{k \times n}$. Each column of $x$ and $F$ denotes the values for one particular observation, and the matrix $L$ does not vary across observations.
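    A minimal numpy sketch of this matrix form (all values assumed, purely illustrative):

        # Build x = mu + L F + epsilon with the dimensions stated above.
        import numpy as np

        p, k, n = 5, 2, 100                # variables, factors, observations
        rng = np.random.default_rng(1)

        L = rng.normal(size=(p, k))        # loadings, fixed across observations
        F = rng.normal(size=(k, n))        # one column of factor scores per observation
        eps = rng.normal(size=(p, n))      # error terms
        mu = rng.normal(size=(p, 1))       # variable means

        x = mu + L @ F + eps               # each column of x is one observation
        print(x.shape)                     # (5, 100), i.e. p x n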

    Also, we impose the following assumptions on $F$:

    1. $F$ and $\varepsilon$ are independent;
    2. $\mathrm{E}(F) = 0$;
    3. $\mathrm{Cov}(F) = I$.

    Any solution of the above set of equations satisfying the constraints on $F$ is defined as the factors, and $L$ as the loading matrix.

    Suppose $\mathrm{Cov}(x - \mu) = \Sigma$. Then note that from the conditions just imposed on $F$, we have

    $\mathrm{Cov}(x - \mu) = \mathrm{Cov}(LF + \varepsilon),$

    or

    $\Sigma = L \, \mathrm{Cov}(F) \, L^T + \mathrm{Cov}(\varepsilon),$

    or

    $\Sigma = L L^T + \Psi.$
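    This decomposition can be checked numerically. A sketch with assumed values: simulate the model with $\mathrm{Cov}(F) = I$ and diagonal $\Psi$, then compare the sample covariance of $x$ against $L L^T + \Psi$.

        # Verify that the model assumptions imply Sigma = L L^T + Psi.
        import numpy as np

        p, k, n = 4, 2, 200_000            # large n so the sample covariance converges
        rng = np.random.default_rng(2)

        L = rng.normal(size=(p, k))
        psi = rng.uniform(0.5, 1.5, size=p)                    # specific variances psi_i

        F = rng.normal(size=(k, n))                            # E(F) = 0, Cov(F) = I
        eps = rng.normal(size=(p, n)) * np.sqrt(psi)[:, None]  # Cov(eps) = Diag(psi)

        x = L @ F + eps                    # take mu = 0 for simplicity
        diff = np.cov(x) - (L @ L.T + np.diag(psi))
        print(np.max(np.abs(diff)))        # small; shrinks as n grows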

    Note that for any orthogonal matrix $Q$, if we set $L' = LQ$ and $F' = Q^T F$, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation.
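    A small numpy sketch of this indeterminacy (matrices assumed for illustration): rotating $L$ and $F$ by any orthogonal $Q$ leaves both the product $LF$ and the implied $\Sigma$ unchanged.

        # Replacing L by LQ and F by Q^T F changes nothing observable.
        import numpy as np

        rng = np.random.default_rng(3)
        p, k, n = 5, 3, 10

        L = rng.normal(size=(p, k))
        F = rng.normal(size=(k, n))
        Q, _ = np.linalg.qr(rng.normal(size=(k, k)))   # random orthogonal k x k matrix

        L2, F2 = L @ Q, Q.T @ F
        print(np.allclose(L @ F, L2 @ F2))             # same L F product
        print(np.allclose(L @ L.T, L2 @ L2.T))         # same implied Sigma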
