AUB ScholarWorks

Regularized Logistic Regression for Fast Importance Sampling Based Yield Analysis Methodology


dc.contributor.advisor Kanj, Rouwaida
dc.contributor.author Shaer, Lama
dc.date.accessioned 2021-09-17T09:43:39Z
dc.date.available 2021-09-17T09:43:39Z
dc.date.issued 2021-09-17
dc.date.submitted 2021-09-15
dc.identifier.uri http://hdl.handle.net/10938/23041
dc.description.abstract With the continuous scaling of semiconductor technologies, new challenges are inflicted on circuit design yield and reliability. With millions of devices on chip, very tight fail probability requirements are necessary to guarantee reliable designs, and traditional statistical estimation methodologies such as standard Monte Carlo are no longer practical for estimating such rare fail events. This is especially true for memory designs, where a few SRAM cell fails lead to overall chip fail. Hence, fast statistical methods are needed to efficiently predict rare fail probabilities. Fast variance reduction methods, such as importance sampling, have been proposed to speed up statistical simulations. In this work, we propose a regularized logistic regression based fast statistical sampling methodology to speed up the importance sampling flow. We explore several approaches, including a proposed greedy heuristic based approach and an improved iterative reweighted least squares based method for regularized logistic regression. We further propose a modified Group LARS approach as an efficient methodology that tracks the evolution of the Newton step direction from one iteration to the next in order to efficiently solve the underlying L1-constrained iterative least squares problem. Data balancing proves to be critical and must be employed alongside regularization to ensure classifier generalization capability and proper separation between classes. We employ data handling techniques in the context of a logistic regression based importance sampling methodology for accurate statistical modeling of rare fail events in memory designs. The Synthetic Minority Oversampling Technique (SMOTE) is shown to outperform other data handling methods for purposes of memory yield analysis. The Support Vector Machine (SVM) is a popular supervised machine learning methodology well known for its efficiency, prediction accuracy, and robustness. Classifiers typically treat data points equally; however, in certain scenarios this may not be optimal, and applying weights to the data points can improve model accuracy. Building on these findings, we propose a Best Balance Ratio Ordered Feature Selection methodology for enhanced classifier accuracy in memory design yield modeling. It mimics L0-norm regularization through a dual optimization framework, and we compare it to other implicit data balancing approaches. We critically evaluate the proposed methodologies and demonstrate their efficiency by analyzing state-of-the-art FinFET SRAM designs. We demonstrate excellent classifier accuracy, and data balancing together with regularization is shown to yield high fidelity models with excellent yield modeling capabilities. (An illustrative sketch of regularized classification with data balancing is given after this record.)
dc.language.iso en
dc.subject Yield Analysis, Logistic Regression, Machine Learning
dc.title Regularized Logistic Regression for Fast Importance Sampling Based Yield Analysis Methodology
dc.type Dissertation
dc.contributor.department Electrical and Computer Engineering
dc.contributor.faculty Engineering and Architecture
dc.contributor.commembers Chehab, Ali
dc.contributor.commembers Mansour, Mohammad
dc.contributor.commembers Mohanna, Yasser
dc.contributor.commembers Abou Ghaida, Rani
dc.contributor.degree PhD
dc.contributor.AUBidnumber 200903202
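The abstract above combines an L1-regularized logistic regression classifier with SMOTE data balancing inside an importance sampling yield flow. The following is a minimal sketch of those two ingredients using scikit-learn on synthetic data; the data set, dimensionality, and hyperparameters (C, k) are illustrative assumptions, and the dissertation's own modified Group LARS solver and FinFET SRAM data are not reproduced here.

```python
# Minimal sketch: L1-regularized logistic regression with SMOTE-style
# oversampling of the rare "fail" class. Generic illustration only; not the
# dissertation's Group LARS / importance-sampling implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-in for circuit variation data: 2000 "pass" samples and
# only 40 rare "fail" samples in a 6-dimensional variation space.
X_pass = rng.normal(0.0, 1.0, size=(2000, 6))
X_fail = rng.normal(3.0, 0.5, size=(40, 6))
X = np.vstack([X_pass, X_fail])
y = np.concatenate([np.zeros(2000), np.ones(40)])

def smote_oversample(X_min, n_new, k=5, rng=rng):
    """SMOTE-style interpolation: synthesize minority points between each
    sample and one of its k nearest minority-class neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i][rng.integers(1, k + 1)]   # skip the point itself
        lam = rng.random()
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

# Balance the classes before fitting, as the abstract argues is critical.
X_syn = smote_oversample(X_fail, n_new=len(X_pass) - len(X_fail))
X_bal = np.vstack([X, X_syn])
y_bal = np.concatenate([y, np.ones(len(X_syn))])

# L1 (lasso) penalty gives a sparse, regularized classifier; the dissertation
# instead solves the equivalent L1-constrained problem with a modified Group
# LARS scheme that tracks the Newton step direction across iterations.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X_bal, y_bal)
print("selected features:", np.flatnonzero(clf.coef_))
print("fail-region accuracy:", clf.score(X_fail, np.ones(len(X_fail))))
```

In an importance sampling flow, such a classifier would be used to screen or bias samples toward the estimated fail region before running full circuit simulations; that integration is specific to the dissertation and is not shown here.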

