2016, Time Series Analysis and Forecasting
Based on the theory of multiple statistical hypothesis testing, we elaborate simultaneous statistical inference methods in dynamic factor models. In particular, we employ structural properties of multivariate chi-squared distributions to construct critical regions for vectors of likelihood ratio statistics in such models. In doing so, we make use of the asymptotic distribution of the vector of test statistics for large sample sizes, assuming that the model is identified and the model restrictions are testable. Examples of important multiple test problems in dynamic factor models demonstrate the relevance of the proposed methods for practical applications.
REVISTA BRASILEIRA DE BIOMETRIA
Multivariate t models are symmetric, have heavier tails than the normal distribution, and yield robust inference procedures in applications. In this paper, Bayesian estimation of a dynamic factor model is presented in which the factors follow a multivariate autoregressive model, using the multivariate t distribution. Since the multivariate t distribution is analytically complex, it is represented in this work as a mixture of the multivariate normal distribution and the square root of a chi-square distribution. This representation allows all the posterior distributions to be fully specified. Inference on the parameters is carried out by sampling from the posterior distribution with a Gibbs sampler. Convergence is verified through graphical analysis and the convergence diagnostics of Geweke (1992) and Raftery and Lewis (1992).
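The scale-mixture representation used here is straightforward to simulate: a multivariate t draw is a zero-mean multivariate normal draw divided by the square root of an independent chi-square variable over its degrees of freedom. A minimal sketch in Python (the function name and interface are ours, not the paper's):

```python
import numpy as np

def rmvt(mu, sigma, nu, size, seed=None):
    # Multivariate t via the normal/chi-square mixture: divide a zero-mean
    # multivariate normal by the square root of chi2(nu)/nu, then shift by mu.
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(mu)), sigma, size=size)
    w = rng.chisquare(nu, size=size) / nu
    return np.asarray(mu) + z / np.sqrt(w)[:, None]

x = rmvt(mu=[0.0, 0.0], sigma=np.eye(2), nu=5, size=100_000, seed=0)
print(np.cov(x.T))
```

For nu > 2 the covariance of the mixture is (nu/(nu-2))·Sigma, so the sample covariance above should be close to (5/3)·I, illustrating the heavier-than-normal tails.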
The Annals of Statistics, 2012
This paper considers the maximum likelihood estimation of factor models of high dimension, where the number of variables (N) is comparable with or even greater than the number of observations (T). An inferential theory is developed. We establish not only consistency but also the rate of convergence and the limiting distributions. Five different sets of identification conditions are considered. We show that the distributions of the MLE estimators depend on the identification restrictions. Unlike the principal components approach, the maximum likelihood estimator explicitly allows heteroskedasticities, which are jointly estimated with other parameters. Efficiency of MLE relative to the principal components method is also considered.
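As a point of comparison for the MLE, the principal components estimator the abstract benchmarks against can be sketched in a few lines, under one common normalization (F'F/T = I_r). This is an illustrative sketch of the PC approach, not the paper's maximum likelihood procedure:

```python
import numpy as np

def pc_estimate(X, r):
    # Principal components estimator of X = F @ L.T + e (T x N data),
    # normalized so that F'F/T = I_r.
    T, N = X.shape
    vals, vecs = np.linalg.eigh(X @ X.T / (T * N))   # eigh: ascending order
    order = np.argsort(vals)[::-1][:r]               # keep the r largest
    F = np.sqrt(T) * vecs[:, order]                  # estimated factors (T x r)
    L = X.T @ F / T                                  # estimated loadings (N x r)
    return F, L

rng = np.random.default_rng(0)
T, N, r = 200, 100, 2
F0 = rng.standard_normal((T, r))
L0 = rng.standard_normal((N, r))
X = F0 @ L0.T + 0.5 * rng.standard_normal((T, N))
F, L = pc_estimate(X, r)
common = F @ L.T       # estimated common component, T x N
```

The estimated common component F·L' is the projection of X onto the top-r left singular subspace, and should track the true common component F0·L0' closely at these dimensions.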
2008
We present new results for the likelihood-based analysis of the dynamic factor model, possibly including intercepts and explanatory variables. The latent factors are modelled by stochastic processes. The idiosyncratic disturbances are specified as autoregressive processes with mutually correlated innovations. The new results lead to computationally efficient procedures for the estimation of the factors and for maximum likelihood estimation of the parameters.
2013
We derive computationally simple and intuitive expressions for score tests of neglected serial correlation in common and idiosyncratic factors in dynamic factor models using frequency domain techniques. The implied time domain orthogonality conditions are analogous to the conditions obtained by treating the smoothed estimators of the innovations in the latent factors as if they were observed, but they account for their final estimation errors. Monte Carlo exercises confirm the finite sample reliability and power of our proposed tests. Finally, we illustrate their empirical usefulness in an application that constructs a monthly coincident indicator for the US from four macro series.
Journal of Econometrics, 2004
A factor model generalizing several earlier proposals was introduced in Forni et al., where consistent (as the number n of series and the number T of observations both tend to infinity along appropriate paths (n, T(n))) estimation methods for the common component are proposed. Rates of convergence associated with these methods are obtained here as functions of the paths (n, T(n)) along which n and T go to infinity. These results show that, under suitable assumptions, consistency requires T(n) to be at least of the same order as n, whereas an optimal rate of √n is reached for T(n) of the order of n². If convergence to the space of common components is considered, consistency holds irrespective of the path (T(n) can thus be arbitrarily slow); the optimal rate is still √n, but it only requires T(n) to be of the order of n. * Research supported by an A.R.C. contract of the Communauté française de Belgique, the Fonds d'Encouragement à la Recherche de l'Université Libre de Bruxelles, and the European Commission under the Training and Mobility of Researchers Programme (Contract ERBFMRX-CT98-0213). JEL subject classification: C13, C33, C43.
SSRN Electronic Journal, 2019
This paper considers multiple changes in the factor loadings of a high-dimensional factor model occurring at dates that are unknown but common to all subjects. Since the factors are unobservable, the problem is converted to estimating and testing structural changes in the second moments of the pseudo factors. We consider both joint and sequential estimation of the change points and show that the distance between the estimated and the true change points is O_p(1). We find that the estimation error contained in the estimated pseudo factors has no effect on the asymptotic properties of the estimated change points as the cross-sectional dimension N and the time dimension T go to infinity jointly. No N-T ratio condition is needed. We also propose (i) tests for the null of no change versus the alternative of l changes and (ii) tests for the null of l changes versus the alternative of l + 1 changes, and show that using estimated factors asymptotically has no effect on their limit distributions if √T/N → 0. These tests allow us to make inference on the presence and number of structural changes. Simulation results show good performance of the proposed procedure. In an application to US quarterly macroeconomic data we detect two possible breaks.
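The idea of locating a break through second moments can be illustrated with a deliberately simplified, single-series, single-break least-squares sketch (ours, not the paper's estimator): pick the split date that minimizes the pooled sum of squared deviations of the squared series around its within-regime means.

```python
import numpy as np

def break_in_second_moment(f):
    # Least-squares estimate of a single change point in the second moment of
    # a (pseudo-)factor series: minimize the pooled SSR of f**2 over splits.
    T = len(f)
    s = np.asarray(f) ** 2
    best_k, best_ssr = None, np.inf
    for k in range(10, T - 10):            # trim the sample ends
        ssr = (np.sum((s[:k] - s[:k].mean()) ** 2)
               + np.sum((s[k:] - s[k:].mean()) ** 2))
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(1)
# variance jumps from 1 to 16 at t = 150 (illustrative DGP, ours)
f = np.concatenate([rng.standard_normal(150), 4.0 * rng.standard_normal(150)])
k_hat = break_in_second_moment(f)
```

Consistent with the O_p(1) result above, the estimated split date stays within a bounded neighbourhood of the true break even as the sample grows.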
Journal of Econometrics, 2001
We investigate several important inference issues for factor models with dynamic heteroskedasticity in the common factors. First, we show that such models are identified if we take into account the time-variation in the variances of the factors. Our results also apply to dynamic versions of the APT, dynamic factor models, and vector autoregressions. Secondly, we propose a consistent two-step estimation procedure which does not rely on knowledge of any factor estimates, and explain how to compute correct standard errors. Thirdly, we develop a simple preliminary LM test for the presence of ARCH effects in the common factors. Finally, we conduct a Monte Carlo analysis of the finite sample properties of the proposed estimators and hypothesis tests.
2009
From time to time, economies undergo far-reaching structural changes. In this paper we investigate the consequences of structural breaks in the factor loadings for the specification and estimation of factor models based on principal components, and we suggest test procedures for structural breaks. It is shown that structural breaks severely inflate the number of factors identified by the usual information criteria. Based on the strict factor model, the hypothesis of a structural break is tested using Likelihood-Ratio, Lagrange-Multiplier, and Wald statistics. The LM test, which is shown to perform best in our Monte Carlo simulations, is generalized to factor models where the common factors and idiosyncratic components are serially correlated. We also apply the suggested test procedure to a US dataset used in Stock and Watson (2005) and a euro-area dataset described in Altissimo et al. (2007). We find evidence that the beginning of the so-called Great Moderation in the US, as well as the Maastricht treaty and the handover of monetary policy from the European national central banks to the ECB, coincide with structural breaks in the factor loadings. Ignoring these breaks may yield misleading results if the empirical analysis focuses on the interpretation of common factors or on the transmission of common shocks to the variables of interest.
Review of Economics and …, 2000
The Generalized Dynamic Factor Model:
Allgemeines Statistisches Archiv, 2006
Factor models can cope with many variables without running into scarce degrees of freedom problems often faced in a regression-based analysis. In this article we review recent work on dynamic factor models that have become popular in macroeconomic policy analysis and forecasting. By means of an empirical application we demonstrate that these models turn out to be useful in investigating macroeconomic problems.
Journal of Econometrics, 2017
Conventional factor models assume that factor loadings are fixed over a long horizon of time, which appears overly restrictive and unrealistic in applications. In this paper, we introduce a time-varying factor model where factor loadings are allowed to change smoothly over time. We propose a local version of the principal component method to estimate the latent factors and time-varying factor loadings simultaneously. We establish the limiting distributions and uniform convergence of the estimated factors and factor loadings in the standard large N and large T framework. We also propose a BIC-type information criterion to determine the number of factors, which can be used in models with either time-varying or time-invariant factor loadings. Based on the comparison between the estimates of the common components under the null hypothesis of no structural changes and those under the alternative, we propose a consistent test for structural changes in factor loadings. We establish the null distribution, the asymptotic local power property, and the consistency of our test. Simulations are conducted to evaluate both our nonparametric estimates and test statistic. We also apply our test to Stock and Watson's (2009) U.S. macroeconomic data set and find strong evidence of structural changes in the factor loadings. * The authors thank the co-editor Oliver Linton, an associate editor, and three anonymous referees for their constructive comments and suggestions. They also express their sincere appreciation to Serena Ng for discussions on the subject matter.
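A local principal components step of the kind described can be sketched as follows, weighting observations with an Epanechnikov kernel in rescaled time before extracting the leading eigenvectors. The data-generating process and all names are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def local_loadings(X, r, t0, h):
    # Local PCA at time t0: kernel-weight the T x N data in rescaled time,
    # then take the top-r eigenvectors of the weighted second-moment matrix.
    T, N = X.shape
    u = (np.arange(T) - t0) / (h * T)
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov
    Xw = X * np.sqrt(w)[:, None]
    vals, vecs = np.linalg.eigh(Xw.T @ Xw)
    return vecs[:, np.argsort(vals)[::-1][:r]]  # N x r; sign/scale not pinned down

# one factor with loadings drifting linearly in t/T (illustrative DGP, ours)
rng = np.random.default_rng(4)
T, N = 400, 60
a, b = rng.standard_normal(N), rng.standard_normal(N)
f = rng.standard_normal(T)
X = np.empty((T, N))
for t in range(T):
    X[t] = (a + b * t / T) * f[t] + 0.5 * rng.standard_normal(N)

lam_hat = local_loadings(X, r=1, t0=200, h=0.2)[:, 0]
lam_true = a + b * 0.5          # true loadings at t0/T = 0.5
```

Because the loadings move slowly relative to the bandwidth, the locally estimated loading vector lines up with the true loadings at t0 up to sign and scale.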
Stochastic Processes and their Applications
We develop an on-line monitoring procedure to detect a change in a large approximate factor model. Our statistics are based on a well-known property of the (r + 1)-th eigenvalue of the sample covariance matrix of the data (having defined r as the number of common factors): whilst under the null the (r + 1)-th eigenvalue is bounded, under the alternative of a change (either in the loadings, or in the number of factors itself) it becomes spiked. Given that the sample eigenvalue cannot be estimated consistently under the null, we regularise the problem by randomising the test statistic in conjunction with sample conditioning, obtaining a sequence of i.i.d., asymptotically chi-square statistics which are then employed to build the monitoring scheme. Numerical evidence shows that our procedure works very well in finite samples, with a very small probability of false detections and tight detection times in the presence of a genuine change-point.
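The eigenvalue property underlying the monitoring scheme is easy to see in simulation: with r = 2 factors the 3rd sample eigenvalue stays near the bulk of the spectrum, but once a 3rd factor appears it becomes spiked. This is a hedged illustration of the property only, not the paper's randomised statistic:

```python
import numpy as np

def kth_eigenvalue(X, k):
    # k-th largest eigenvalue of the sample covariance of T x N data.
    vals = np.linalg.eigvalsh(np.cov(X.T))
    return np.sort(vals)[::-1][k - 1]

rng = np.random.default_rng(2)
T, N, r = 500, 100, 2
e = rng.standard_normal((T, N))
F = rng.standard_normal((T, r))
L = rng.standard_normal((N, r))
X0 = F @ L.T + e                       # null: r factors
F2 = rng.standard_normal((T, r + 1))
L2 = rng.standard_normal((N, r + 1))
X1 = F2 @ L2.T + e                     # alternative: an extra factor

ev0 = kth_eigenvalue(X0, r + 1)        # bounded under the null
ev1 = kth_eigenvalue(X1, r + 1)        # spiked under the alternative
```

In this setup ev0 sits near the noise bulk while ev1 grows with the cross-sectional dimension N, which is exactly the separation the monitoring statistic exploits.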
Journal of Time Series Analysis, 2009
The estimation of dynamic factor models for large sets of variables has attracted considerable attention recently, due to the increased availability of large datasets. In this paper we propose a new parametric methodology for estimating factors from large datasets based on state space models, discuss its theoretical properties and compare its performance with that of two alternative non-parametric estimation approaches based, respectively, on static and dynamic principal components. The new method appears to perform best in recovering the factors in a set of simulation experiments, with static principal components a close second best. Dynamic principal components appear to yield the best fit, but sometimes there are leakages across the common and idiosyncratic components of the series. A similar pattern emerges in an empirical application with a large dataset of US macroeconomic time series.
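As a sketch of the parametric, state-space route, a Kalman filter for a deliberately simplified one-factor model (known parameters, scalar AR(1) factor, spherical idiosyncratic noise; not the paper's full methodology) can be written as:

```python
import numpy as np

def kalman_filter_factor(X, lam, phi, sig_e, sig_u):
    # Filtered estimates of a scalar AR(1) factor f_t = phi * f_{t-1} + u_t,
    # observed as X_t = lam * f_t + e_t with e_t ~ N(0, sig_e^2 * I_N).
    T, N = X.shape
    f, P = 0.0, sig_u ** 2 / (1.0 - phi ** 2)   # stationary initialization
    out = np.empty(T)
    for t in range(T):
        f_pred = phi * f                        # state prediction
        P_pred = phi ** 2 * P + sig_u ** 2
        # Gaussian update in information form (scalar state, N observations)
        P = 1.0 / (1.0 / P_pred + lam @ lam / sig_e ** 2)
        f = P * (f_pred / P_pred + lam @ X[t] / sig_e ** 2)
        out[t] = f
    return out

rng = np.random.default_rng(5)
T, N, phi = 300, 20, 0.8
lam = rng.standard_normal(N)
f_true = np.zeros(T)
for t in range(1, T):
    f_true[t] = phi * f_true[t - 1] + rng.standard_normal()
X = f_true[:, None] * lam[None, :] + rng.standard_normal((T, N))
f_hat = kalman_filter_factor(X, lam, phi, sig_e=1.0, sig_u=1.0)
```

With N = 20 series loading on the factor, the filtered path tracks the true factor closely; the full parametric approach additionally estimates the loadings and dynamics rather than taking them as given.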
TEST, 2011
The aim of this paper is to show that, while the exact distributions of the most common likelihood ratio test (l.r.t.) statistics (namely, those used to test the independence of several sets of variables, the equality of several variance-covariance matrices, sphericity, and the equality of several mean vectors) may be expressed as the distribution of a product of independent Beta random variables, possibly multiplied by a number of independent random variables whose logarithms have Gamma distributions, near-exact distributions for their logarithms may all be expressed as Generalized Near-Integer Gamma distributions or mixtures of these distributions. For samples of size n, the rate parameters associated with the integer shape parameters all have the form (n − j)/n for j = 2, . . . , p, where for the first three statistics p is the number of variables involved, while for the fourth it is the sum of the number of variables and the number of mean vectors being tested. Interestingly, the similarities exhibited by these statistics are even more striking in terms of near-exact distributions than in terms of exact distributions. Moreover, all the l.r.t. statistics that may be built as products of these basic statistics inherit a similar structure for their near-exact distributions. To illustrate this fact, an application is made to the l.r.t. statistic for testing the equality of several multivariate Normal distributions.
2004
We derive indirect estimators of multivariate conditionally heteroskedastic factor models in which the volatilities of the latent factors depend on their past values. Specifically, we calibrate the analytical score of a Kalman-filter approximation, taking into account the inequality constraints on the auxiliary model parameters. We also study the determinants of the biases in the parameters of this approximation, and its quality. Moreover, we propose sequential indirect estimators that can handle models with large cross-sectional dimensions. Finally, we analyse the small sample behaviour of our indirect estimators and the approximate maximum likelihood procedures through an extensive Monte Carlo experiment. helpful comments and suggestions. Of course, the usual caveat applies. Financial support from MIUR through the project "Specification, estimation and testing of latent variable models. Applications to the analysis and forecasting of economic and financial time series" is gratefully acknowledged. Thanks are also due to Javier Mencía for his help in producing .
Social Science Research Network, 2023
In economics, Principal Components, its generalized version that takes into account heteroscedasticity, and Kalman filter and smoothing procedures are among the most popular procedures for factor extraction in the context of Dynamic Factor Models. This paper analyses the consequences for point and interval factor estimation of using these procedures when the idiosyncratic components are wrongly assumed to be cross-sectionally uncorrelated. We show that not taking into account the presence of cross-sectional dependence increases the uncertainty of point estimates of the factors. Furthermore, the Mean Square Errors computed using the usual expressions based on asymptotic approximations are underestimated and may lead to prediction intervals with extremely low coverage.
Journal of the American Statistical Association, 2009
The impact of dependence between individual test statistics is currently among the most discussed topics in the multiple testing literature for high-dimensional data, especially since the introduction of the false discovery rate (FDR). Many papers have first focused on the impact of dependence on the control of the FDR. Some more recent works have investigated approaches that account for common information shared by all the variables to stabilize the distribution of the error rates. In the same spirit, we propose to model this sharing of information by a factor analysis structure for the conditional variance of the test statistics. It is shown that the variance of the number of false discoveries increases along with the fraction of common variance. Test statistics for general linear contrasts are deduced, taking advantage of the common factor structure to reduce the variance of the error rates. A conditional FDR estimate is proposed, and the overall performance of the multiple testing procedure is shown to be markedly improved, regarding the nondiscovery rate, with respect to classical procedures. The methodology is also assessed by comparison with leading multiple testing methods.
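For context, the baseline FDR procedure that such factor-adjusted methods build upon is the Benjamini-Hochberg step-up rule. A minimal sketch under independence (names and the simulated example are ours):

```python
import math
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # Step-up FDR procedure: find the largest k with p_(k) <= k*q/m and
    # reject the k smallest p-values.
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# 900 true nulls and 100 shifted alternatives, one-sided z-tests
rng = np.random.default_rng(6)
z = np.concatenate([rng.standard_normal(900), 4.0 + rng.standard_normal(100)])
pvals = np.array([0.5 * math.erfc(v / math.sqrt(2.0)) for v in z])
rej = benjamini_hochberg(pvals, q=0.05)
```

Under strong dependence among the statistics, the number of false discoveries of this baseline becomes much more variable, which is precisely the phenomenon the factor-analytic adjustment above targets.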
SERIEs, 2021
Dynamic factor models (DFMs), which assume the existence of a small number of unobserved underlying factors common to a large number of variables, are very popular among empirical macroeconomists. Factors can be extracted using either nonparametric principal components or parametric Kalman filter and smoothing procedures, with the former being computationally simpler and robust against misspecification, and the latter coping in a natural way with missing and mixed-frequency data, time-varying parameters, nonlinearities and non-stationarity, among many other stylized facts often observed in real systems of economic variables. This paper analyses the empirical consequences for factor estimation, in-sample predictions and out-of-sample forecasting of using alternative estimators of the DFM under various sources of potential misspecification. In particular, we consider factor extraction assuming different numbers of factors and different factor dynamics. The factors are extracted from...
Biometrika, 2004
We specify some conditions for the identification of a multi-factor model with correlated residuals, uncorrelated factors and zero restrictions in the factor loadings. These conditions are derived from the results of Stanghellini (1997) and Vicard (2000) which deal with single-factor models with zero restrictions in the concentration matrix. Like these authors, we make use of the complementary graph of residuals and the conditions build on the role of odd cycles in this graph. However, in contrast to these authors, we consider the case where the conditional dependencies of the residuals are expressed in terms of a covariance matrix rather than its inverse, the concentration matrix. We first derive the corresponding condition for identification of single-factor models with structural zeros in the covariance matrix of the residuals. This is extended to the case where some factor loadings are constrained to be zero. We use these conditions to obtain a sufficient and a necessary condition for identification of multi-factor models.
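The role of odd cycles can be checked algorithmically: a graph contains no odd cycle if and only if it is bipartite (2-colourable). A small utility in this spirit, applicable to a complementary graph of residuals given as an edge list (illustrative only, not the paper's identification procedure):

```python
from collections import deque

def has_odd_cycle(n, edges):
    # BFS 2-colouring of an undirected graph on nodes 0..n-1: a colouring
    # conflict along an edge exposes an odd cycle; no conflict => bipartite.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = [-1] * n
    for s in range(n):
        if colour[s] != -1:
            continue
        colour[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if colour[v] == -1:
                    colour[v] = 1 - colour[u]
                    q.append(v)
                elif colour[v] == colour[u]:
                    return True        # odd cycle found
    return False

print(has_odd_cycle(3, [(0, 1), (1, 2), (2, 0)]))          # True  (triangle)
print(has_odd_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False (4-cycle)
```

Running the check on the complementary graph of the residuals would flag the odd cycles on which the identification conditions in the paper are built.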