Estimation of Graphical Lasso using the L1,2 Norm

Summary

Gaussian graphical models have recently been used in economics to obtain networks of dependence among agents. A widely used estimator is the Graphical Lasso (GLASSO), which amounts to maximum likelihood estimation regularized with the L1,1 matrix norm on the precision matrix Ω. The L1,1 norm is a lasso penalty that controls sparsity, i.e. the number of zeros in Ω. We propose a new estimator, the Structured Graphical Lasso (SGLASSO), that uses the L1,2 mixed norm. The L1,2 penalty controls the structure of the sparsity in Ω. We show that when the network size is fixed, SGLASSO is asymptotically equivalent to an infeasible GLASSO problem that prioritizes recovering the sparsity pattern of high-degree nodes. Monte Carlo simulations show that SGLASSO outperforms GLASSO both in estimating the overall precision matrix and in estimating the structure of the graphical model. In an empirical illustration using a classic dataset of firms' investment, we obtain a network of dependence among firms that exhibits a core-periphery structure, with General Motors, General Electric and U.S. Steel forming the core group of firms.
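For concreteness, the two penalized objectives can be sketched as follows, using the sample covariance matrix S and the standard L_{p,q} mixed-norm convention; the paper's exact formulation (e.g. whether diagonal entries are penalized) may differ slightly. GLASSO solves

\[ \hat{\Omega}_{\mathrm{GLASSO}} = \arg\min_{\Omega \succ 0} \; \operatorname{tr}(S\Omega) - \log\det\Omega + \lambda \,\|\Omega\|_{1,1}, \qquad \|\Omega\|_{1,1} = \sum_{i,j} |\omega_{ij}|, \]

while SGLASSO replaces the penalty with the L1,2 mixed norm,

\[ \|\Omega\|_{1,2} = \Bigl( \sum_{i} \bigl( \sum_{j} |\omega_{ij}| \bigr)^{2} \Bigr)^{1/2}, \]

i.e. the Euclidean norm of the vector of row-wise ℓ1 norms, so rows corresponding to high-degree nodes receive relatively heavier shrinkage.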
