This book by Professor Chih-Ling Tsai and co-author Allan D. R. McQuarrie from North Dakota State University describes procedures for selecting a model from a large set of competing statistical models.
Research Expertise: Regression analysis, model selection, high-dimensional data, time series, biostatistics, application of statistics in business.
Teaching Field: Statistics
In partially linear single-index models, Professor Chih-Ling Tsai and co-authors Hua Liang and Xiang Liu from the University of Rochester and Runze Li from Pennsylvania State University obtain the semiparametrically efficient profile least-squares estimators of regression coefficients. The authors also employ the smoothly clipped absolute deviation (SCAD) penalty approach to simultaneously select variables and estimate regression coefficients. The study shows that the resulting SCAD estimators are consistent and possess the oracle property.
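The SCAD penalty referred to above has a simple closed form. The sketch below is illustrative only (function and variable names are ours); the tuning constant a = 3.7 is the value conventionally recommended in the SCAD literature:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty evaluated elementwise at |beta|.

    Lasso-like (linear) near zero, quadratic in a transition
    region, and constant beyond a*lam, so large coefficients
    receive no extra shrinkage.
    """
    b = np.abs(beta)
    small = b <= lam                    # linear region
    mid = (b > lam) & (b <= a * lam)    # quadratic transition region
    out = np.empty_like(b, dtype=float)
    out[small] = lam * b[small]
    out[mid] = (2 * a * lam * b[mid] - b[mid] ** 2 - lam ** 2) / (2 * (a - 1))
    out[~small & ~mid] = lam ** 2 * (a + 1) / 2   # flat tail
    return out
```

The flat tail is what yields the oracle property: unlike the lasso, large true coefficients are left essentially unpenalized.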
Regularization Parameter Selections via Generalized Information Criterion
Journal of the American Statistical Association, 2010
In this study, Professor Chih-Ling Tsai and co-authors Yiyun Zhang and Runze Li apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity.
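As a rough illustration of how an information criterion trades off fit against complexity when tuning the regularization parameter, the sketch below scores a path of fitted models. The functional form shown is a generic AIC/BIC-type criterion for illustration only, not necessarily the exact GIC derived in the paper:

```python
import numpy as np

def gic_select(rss, df, n, kappa):
    """Return the index of the tuning parameter whose fitted model
    minimizes log(RSS/n) + kappa * df / n over a path of fits.

    kappa = 2 gives an AIC-type criterion; kappa = log(n) a
    BIC-type one (illustrative form only).
    """
    crit = np.log(np.asarray(rss) / n) + kappa * np.asarray(df) / n
    return int(np.argmin(crit))
```

The choice of kappa matters: a larger penalty weight favors sparser models, which is exactly why the selection of the regularization parameter controls model complexity.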
Prior Consequences and Subsequent Risk Taking: New Field Evidence from the Taiwan Futures Exchange
Management Science, 2010
In this study, Professor Chih-Ling Tsai and co-authors Ning Zhu from the Shanghai Advanced Institute of Finance and Ming-Chun Wang from National Chengchi University use a data set from market participants in the Taiwan Stock Exchange Capitalization Weighted Stock Index options markets to demonstrate a strong positive relationship between prior trading outcomes and subsequent risk taking. In particular, investors in this market take above-average risks in afternoon trading after morning gains.
Extracting Forward-Looking Information from Security Prices: A New Approach
The Accounting Review, 2008
This paper by Professors Prasad Naik, Chih-Ling Tsai and co-author Dan Weiss from Tel Aviv University proposes a new index to extract forward-looking information from security prices and infer market participants’ expectations of future earnings. The index, called market-adapted earnings (MAE), utilizes stock returns and fundamental accounting signals to estimate market expectations of future earnings at the firm level. MAE outperforms time-series models (e.g., random walk) in predicting future earnings. Results demonstrate the usefulness of MAE for firms that have no analyst following.
Extending the Akaike Information Criterion for Mixture Regression Models
Journal of the American Statistical Association, 2007
In this paper, Professors Prasad Naik and Chih-Ling Tsai, with co-author Peide Shi from Nuclear Safety Solutions Ltd., examine the problem of jointly selecting the number of components and variables in finite mixture regression models.
In Markov-switching regression models, Professors Prasad Naik, Chih-Ling Tsai and co-author Aaron Smith from the UC Davis Department of Agricultural and Resource Economics use Kullback–Leibler (KL) divergence between the true and candidate models to select the number of states and variables simultaneously.
Constrained Inverse Regression for Incorporating Prior Information
Journal of the American Statistical Association, 2005
Inverse regression methods facilitate dimension-reduction analyses of high-dimensional data by extracting a small number of factors that are linear combinations of the original predictor variables. But the estimated factors may not lend themselves readily to interpretation consistent with prior information.
Isotonic Single-Index Model for High-Dimensional Database Marketing
Computational Statistics and Data Analysis, 2004
While database marketers collect vast amounts of customer transaction data, using these data to improve marketing decisions is difficult. Marketers seek to extract relevant information from large databases by identifying significant variables and prospective customers. In small databases, they could calibrate logistic regression models via maximum-likelihood methods to determine significant variables and assess a customer’s response probability.
In this paper, Professors Prasad Naik and Chih-Ling Tsai derive a new model selection criterion for single-index models, AIC, by minimizing the expected Kullback-Leibler distance between the true and candidate models.
The proposed criterion selects not only relevant variables but also the smoothing parameter for an unknown link function. Thus, it is a general selection criterion that provides a unified approach to model selection across both parametric and nonparametric functions. Monte Carlo studies demonstrate that AIC performs satisfactorily in most situations.
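To illustrate the general idea, the sketch below computes a Gaussian AIC of the kind obtained by minimizing expected Kullback-Leibler distance. For nonparametric link functions, the parameter count k is typically replaced by an effective degrees of freedom such as the trace of the smoother matrix, which is what lets one criterion also tune the smoothing parameter (this is a generic sketch, not the paper's exact formula):

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for a Gaussian model fitted by least squares:
    n * log(RSS / n) + 2 * k.

    k is the (effective) number of parameters; for a smoother,
    substitute its trace to penalize flexible link functions.
    """
    return n * np.log(rss / n) + 2 * k
```

Candidate models, whether parametric or nonparametric, are then compared on the same scale and the minimizer is selected.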
Partial Least Squares Estimator for Single-index Models
Journal of the Royal Statistical Society, 2000
The partial least squares (PLS) approach first constructs new explanatory variables, known as factors (or components), which are linear combinations of available predictor variables. A small subset of these factors is then chosen and retained for prediction.
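The factor-construction step described above can be sketched with a NIPALS-style deflation loop (a minimal illustration under standard PLS assumptions; function and variable names are ours):

```python
import numpy as np

def pls_components(X, y, n_comp):
    """Extract the first n_comp PLS factors.

    Each weight vector points in the direction of maximal
    covariance between X and y; X is deflated before the next
    factor is extracted, so factor scores are orthogonal.
    """
    X = X - X.mean(axis=0)
    y = y - y.mean()
    scores = []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)        # weight vector
        t = X @ w                     # factor (component) scores
        p = X.T @ t / (t @ t)         # loadings used for deflation
        X = X - np.outer(t, p)        # deflate X
        scores.append(t)
    return np.column_stack(scores)
```

A small subset of these factors, rather than the original predictors, is then retained as regressors for prediction.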
A New Dimension Reduction Approach for Data-Rich Marketing Environments: Sliced Inverse Regression
Journal of Marketing Research, 2000
In data-rich marketing environments (e.g., direct marketing or new product design), managers face an ever-growing need to reduce the number of variables effectively. To accomplish this goal, Professors Prasad Naik and Chih-Ling Tsai and co-author Michael Hagerty introduce a new method called sliced inverse regression (SIR), which finds factors by taking into account the information contained in both the dependent and independent variables.
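A minimal sketch of the basic SIR algorithm (standardize the predictors, slice on the sorted response, eigen-decompose the weighted covariance of slice means, and map the leading eigenvectors back to the original scale) is shown below; the slice count and names are illustrative, and the paper should be consulted for the exact procedure:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Estimate the leading SIR direction(s).

    Unlike principal components, the factors use information in
    the dependent variable y, not just the predictors X.
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ inv_sqrt             # standardized predictors
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)         # slice mean of Z
        M += len(idx) / n * np.outer(m, m)
    w, v = np.linalg.eigh(M)
    # leading eigenvectors, mapped back to the original scale
    return inv_sqrt @ v[:, ::-1][:, :n_dirs]
```

The returned directions define a small number of factors onto which the high-dimensional predictors can be projected before further modeling.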
Controlling Measurement Errors in Models of Advertising Competition
Journal of Marketing Research, 2000
Commercial market research firms provide information on advertising variables of interest, such as brand awareness or gross rating points, that are likely to contain measurement errors. This unreliability of measured variables induces bias in the estimated parameters of dynamic models of advertising. Consequently, advertisers either under- or overspend on advertising to maintain a desired level of brand awareness.