Abstract:
With the continuous scaling of semiconductor technologies, new challenges are
inflicted on the circuit design yield and reliability. With millions of devices on
chip, very tight fail probability requirements are necessary to guarantee reliable
designs, and traditional statistical estimation methodologies such as standard
Monte Carlo are no longer practical for the statistical estimation of such rare fail
events. This is especially true for memory designs where few SRAM cell fails lead
to the overall chip fail. Hence, it is necessary to have fast statistical methods to
be able to efficiently predict rare fail probabilities. Fast variance reduction methods,
such as importance sampling, have been proposed to speed up the statistical
simulations. In this work, we propose a regularized logistic regression based fast
statistical sampling methodology to speed up the importance sampling methodology
flow. We explore several approaches, including a proposed greedy heuristic-based
approach and an improved iterative reweighted least squares-based method
for regularized logistic regression. We thus propose a modified Group LARS approach as an efficient methodology that tracks the evolution of the Newton-step
direction solution from one iteration to the next, in order to efficiently solve for
the underlying L1-constrained iterative least squares problem. Data balancing
proves critical and must be employed alongside regularization
to ensure classifier generalization capability and proper separation between
classes. We employ data handling techniques in the context of a logistic regression
based importance sampling methodology for accurate statistical modeling of
rare fail events in memory designs. The Synthetic Minority Oversampling Technique
(SMOTE) is shown to outperform other data handling methods for
purposes of memory yield analysis. Support Vector Machine (SVM) is a popular
supervised machine learning methodology that is well known for its efficiency,
prediction accuracy, and robustness. Classifiers typically treat all data points
equally; however, in certain scenarios this is not optimal, and applying appropriate
weights to the data points can improve the accuracy of the model. We
extend these findings to propose a Best Balance Ratio Ordered Feature Selection
methodology for enhanced classifier accuracy in memory design yield modeling.
It mimics L0-norm regularization through a dual optimization framework, and
we compare it to other implicit data balancing approaches. We critically evaluate
the proposed methodologies and prove the efficiency of the proposed approaches
by analyzing state-of-the-art FinFET SRAM designs. We demonstrate excellent
classifier accuracy, and show that data balancing combined with regularization
results in high-fidelity models with excellent yield modeling capabilities.
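To ground the terminology above: the regularized logistic regression at the core of the proposed flow is fitted by iteratively reweighted least squares (IRLS), i.e., repeated Newton steps on a weighted least-squares subproblem. The following is a minimal illustrative sketch only, not the thesis's method: it uses an L2 penalty where the thesis solves the harder L1-constrained problem via a modified Group LARS tracker, and optional per-sample weights stand in for the data-balancing step; all function and parameter names are hypothetical.

```python
import numpy as np

def irls_logistic(X, y, lam=1.0, sample_w=None, n_iter=20):
    """Ridge-regularized logistic regression via IRLS (Newton's method).

    Hypothetical sketch: lam is an L2 penalty (the thesis targets an
    L1 constraint instead), and sample_w lets the caller upweight the
    rare fail class as a crude stand-in for data balancing/SMOTE.
    """
    n, d = X.shape
    sw = np.ones(n) if sample_w is None else np.asarray(sample_w, float)
    w = np.zeros(d)
    for _ in range(n_iter):
        z = np.clip(X @ w, -30.0, 30.0)        # avoid overflow in exp
        p = 1.0 / (1.0 + np.exp(-z))           # predicted fail probability
        r = sw * p * (1.0 - p)                 # IRLS working weights
        # Newton step: (X^T R X + lam I) dw = X^T diag(sw) (y - p) - lam w
        H = X.T @ (X * r[:, None]) + lam * np.eye(d)
        g = X.T @ (sw * (y - p)) - lam * w
        w = w + np.linalg.solve(H, g)
    return w
```

In an imbalanced yield-analysis setting, one would pass `sample_w` proportional to inverse class frequency so that the few fail samples are not overwhelmed by the passing majority; the thesis's explicit balancing (e.g., SMOTE) plays an analogous role at the data level.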