Graphical Lasso (GL) is a popular method for learning the structure of an undirected graphical model, based on an $l_1$ regularization technique. The first goal of this work is to study the behavior of the optimal solution of GL as a function of its regularization coefficient. We show that if the number of samples is not too small compared to the number of parameters, the sparsity pattern of the optimal solution of GL changes gradually as the regularization coefficient increases from 0 to infinity. The second objective of this paper is to compare the computationally heavy GL technique with a numerically cheap heuristic for learning graphical models that simply thresholds the sample correlation matrix. To this end, the two notions of sign-consistent and inverse-consistent matrices are developed, and it is then shown that the thresholding and GL methods are equivalent if: (i) the thresholded sample correlation matrix is both sign-consistent and inverse-consistent, and (ii) the gap between the largest thresholded and the smallest un-thresholded entries of the sample correlation matrix is not too small. Building upon this result, it is proved that the GL method, viewed as a conic optimization problem, has an explicit closed-form solution if the thresholded sample correlation matrix has an acyclic structure. This result is then generalized to arbitrary sparse support graphs, where a formula is derived for obtaining an approximate solution of GL. The closed-form solution approximately satisfies the KKT conditions of the GL problem and, more importantly, the approximation error decreases exponentially fast with respect to the length of the minimum-length cycle of the sparsity graph. The developed results are demonstrated on synthetic data, electrical circuits, functional MRI data, and traffic flows for transportation networks.