Distinguished Professor of Management
Research Expertise: Regression analysis, model selection, high-dimensional data, time series, biostatistics, application of statistics in business.
Teaching Field: Statistics
Professor Chih-Ling Tsai is a recognized expert in the practical application of statistics in business, including regression analysis, model selection, high-dimensional data, time series and biostatistics. He has published more than 100 research papers in academic journals spanning statistics, marketing, finance and biostatistics. He teaches courses on forecasting and managerial research methods, and on time series analysis and forecasting.
Tsai has developed new statistical models based on regression and time series analysis, and has co-authored a book on the subject. He has collaborated with other Graduate School of Management faculty to develop a new method for analyzing advertising effectiveness, which helps companies and media buyers determine an unbiased optimal advertising budget. He has also worked on projects to develop statistical methods for analyzing long-term investment strategies. In addition, he has applied sliced inverse regression, one of the newer techniques in statistics, to predict individual customers' sales response based on factors such as where they live, their age, family size, income and zip code.
In this paper, we propose two important measures, quantile correlation (QCOR) and quantile partial correlation (QPCOR). We then apply them to quantile autoregressive (QAR) models, and introduce two valuable quantities, the quantile autocorrelation function (QACF) and the quantile partial autocorrelation function (QPACF). This allows us to extend the Box-Jenkins three-stage procedure (model identification, model parameter estimation, and model diagnostic checking) from classical autoregressive models to quantile autoregressive models.
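For orientation, a standard form of the quantile correlation at level τ is sketched below; the notation is illustrative and may differ slightly from the exact definitions used in the paper.

```latex
% Sketch of the quantile correlation at level \tau (notation may differ from the paper).
% Q_{\tau,Y} is the \tau-th quantile of Y and \psi_\tau(w) = \tau - I(w < 0).
\[
\mathrm{qcov}_\tau\{Y, X\} \;=\; \mathrm{cov}\bigl\{ I(Y - Q_{\tau,Y} > 0),\, X \bigr\},
\qquad
\mathrm{qcor}_\tau\{Y, X\} \;=\;
\frac{\mathrm{qcov}_\tau\{Y, X\}}{\sqrt{(\tau - \tau^{2})\,\mathrm{var}(X)}} .
\]
% The QACF at lag k of a series y_t is then qcor_\tau(y_t, y_{t-k}),
% mirroring how the ordinary ACF is built from the Pearson correlation.
```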
In multivariate analysis, the covariance matrix associated with a set of variables of interest (namely response variables) commonly contains valuable information about the dataset. When the dimension of the response variables is considerably larger than the sample size, it is a non-trivial task to assess whether there are linear relationships among the variables. It is even more challenging to determine whether a set of explanatory variables can explain those relationships.
Professor Chih-Ling Tsai
Co-Authors: Jeng-Min Chiou, Academia Sinica; Yanyuan Ma, Texas A&M
Recognizing his internationally renowned research contributions and teaching excellence, UC Davis recently honored Professor Chih-Ling Tsai with the title of Distinguished Professor. The designation is the highest campus-level professional faculty title.
This book by Professor Chih-Ling Tsai and co-author Allan D. R. McQuarrie from North Dakota State University describes procedures for selecting a model from a large set of competing statistical models.
- Teacher of the Year, UC Davis GSM, 1991, 1995, 1997-99, 2001-04, 2006-07, 2009-10.
- Who’s Who in America and Who’s Who in the World, Marquis Who’s Who, 2000.
- 2000 Outstanding Intellectuals of the 20th Century, International Biographical Centre, 2000.
- Who’s Who in the West and Who’s Who in Science and Engineering, Marquis Who’s Who, 1998.
- Fellow, American Statistical Association, 1996.
- 100 most prolific authors published in statistical journals (1988-93), Natural Sciences and Engineering Research Council of Canada.
In partially linear single-index models, Professor Chih-Ling Tsai and co-authors Hua Liang and Xiang Liu from the University of Rochester and Runze Li from Pennsylvania State University obtain the semiparametrically efficient profile least-squares estimators of the regression coefficients. The authors also employ the smoothly clipped absolute deviation (SCAD) penalty to simultaneously select variables and estimate regression coefficients. The study shows that the resulting SCAD estimators are consistent and possess the oracle property.
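For reference, SCAD refers to the penalty introduced by Fan and Li (2001), which is usually specified through its first derivative:

```latex
% SCAD penalty of Fan and Li (2001), written through its first derivative;
% a > 2 is a tuning constant (a = 3.7 is the common default) and \lambda > 0.
\[
p'_\lambda(\theta) \;=\;
\lambda \left\{ I(\theta \le \lambda)
  + \frac{(a\lambda - \theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\},
\qquad \theta > 0 .
\]
```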
Regularization Parameter Selections via Generalized Information Criterion
Journal of the American Statistical Association, 2010
In this study, Professor Chih-Ling Tsai and co-authors Yiyun Zhang and Runze Li apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity.
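As a rough illustration of how a regularization parameter can be chosen from a grid by an information-type criterion, the sketch below uses the lasso as a stand-in for a nonconcave penalty and a BIC-type score; it is not the paper's generalized information criterion.

```python
# Illustrative sketch only: pick the regularization parameter that minimizes a
# BIC-type information criterion over a grid. The lasso is used as a stand-in
# for a nonconcave penalty; this is not the exact GIC derived in the paper.
import numpy as np
from sklearn.linear_model import Lasso

def select_lambda(X, y, lambdas):
    n = len(y)
    scores = []
    for lam in lambdas:
        fit = Lasso(alpha=lam).fit(X, y)
        rss = np.sum((y - fit.predict(X)) ** 2)
        df = np.count_nonzero(fit.coef_)              # nonzero coefficients as model size
        scores.append(n * np.log(rss / n) + np.log(n) * df)
    return lambdas[int(np.argmin(scores))]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)
print("selected lambda:", select_lambda(X, y, np.logspace(-3, 0, 30)))
```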
Prior Consequences and Subsequent Risk Taking: New Field Evidence from the Taiwan Futures Exchange
Management Science, 2010
In this study, Professor Chih-Ling Tsai and co-authors Ning Zhu from the Shanghai Advanced Institute of Finance and Ming-Chun Wang from National Chengchi University use a data set of market participants in the Taiwan Stock Exchange Capitalization Weighted Stock Index options market to demonstrate a strong positive relationship between prior trading outcomes and subsequent risk taking. In particular, investors in this market take above-average risks in afternoon trading after morning gains.
Extracting Forward-Looking Information from Security Prices: A New Approach
The Accounting Review, 2008
This paper by Professors Prasad Naik, Chih-Ling Tsai and co-author Dan Weiss from Tel Aviv University proposes a new index to extract forward-looking information from security prices and infer market participants’ expectations of future earnings. The index, called market-adapted earnings (MAE), utilizes stock returns and fundamental accounting signals to estimate market expectations of future earnings at the firm level. MAE outperforms time-series models (e.g., random-walk) in predicting future earnings. Results demonstrate the usefulness of MAE for firms that have no analyst following.
Extending the Akaike Information Criterion for Mixture Regression Models
Journal of the American Statistical Association, 2007
In this paper, Professors Prasad Naik and Chih-Ling Tsai, with co-author Peide Shi from Nuclear Safety Solutions Ltd., examine the problem of jointly selecting the number of components and variables in finite mixture regression models.
In Markov-switching regression models, Professors Prasad Naik, Chih-Ling Tsai and co-author Aaron Smith from the UC Davis Department of Agricultural and Resource Economics use Kullback–Leibler (KL) divergence between the true and candidate models to select the number of states and variables simultaneously.
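For completeness, the Kullback–Leibler divergence between a true density f and a candidate density g is the standard quantity shown below; the selection criterion in the paper is built from an estimate of a divergence of this kind.

```latex
% Standard definition of the Kullback--Leibler divergence between the true
% density f and a candidate density g.
\[
\mathrm{KL}(f \,\|\, g) \;=\; \int f(y)\,\log\frac{f(y)}{g(y)}\, dy
\;=\; E_f\!\left[\log\frac{f(Y)}{g(Y)}\right].
\]
```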
Constrained Inverse Regression for Incorporating Prior Information
Journal of the American Statistical Association, 2005
Inverse regression methods facilitate dimension-reduction analyses of high-dimensional data by extracting a small number of factors that are linear combinations of the original predictor variables. But the estimated factors may not lend themselves readily to interpretation consistent with prior information.
Isotonic Single-Index Model for High-Dimensional Database Marketing
Computational Statistics and Data Analysis, 2004
While database marketers collect vast amounts of customer transaction data, using those data to improve marketing decisions presents problems. Marketers seek to extract relevant information from large databases by identifying significant variables and prospective customers. In small databases, they could calibrate logistic regression models via maximum likelihood to determine significant variables and assess each customer's response probability.
In this paper, Professors Prasad Naik and Chih-Ling Tsai derive a new model selection criterion for single-index models, AIC, by minimizing the expected Kullback-Leibler distance between the true and candidate models.
The proposed criterion selects not only relevant variables but also the smoothing parameter for an unknown link function. It is thus a general selection criterion that provides a unified approach to model selection across both parametric and nonparametric functions. Monte Carlo studies demonstrate that AIC performs satisfactorily in most situations.
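The display below gives the generic shape of an AIC-type criterion when a smoother is involved; it is a sketch for intuition only, not the exact criterion derived in the paper. The effective number of parameters, tr(H), is what allows a single criterion to cover both parametric and nonparametric model components.

```latex
% Generic AIC-type criterion with a smoother ("hat") matrix H, so that tr(H)
% plays the role of the effective number of parameters. The paper's criterion
% may differ in its exact form.
\[
\mathrm{AIC} \;=\; \log\hat{\sigma}^{2} \;+\; \frac{2\,\mathrm{tr}(H)}{n},
\qquad
\hat{\sigma}^{2} \;=\; \frac{1}{n}\sum_{i=1}^{n} \{y_i - \hat{y}_i\}^{2} .
\]
```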
Partial Least Squares Estimator for Single-Index Models
Journal of the Royal Statistical Society, 2000
The partial least squares (PLS) approach first constructs new explanatory variables, known as factors (or components), which are linear combinations of available predictor variables. A small subset of these factors is then chosen and retained for prediction.
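The snippet below is a minimal illustration of this standard PLS idea (factors as linear combinations of the predictors, with a small number retained for prediction); it uses ordinary PLS regression and is not the single-index estimator developed in the paper.

```python
# Minimal illustration of the standard PLS idea described above: construct a
# small number of factors (linear combinations of the predictors) and use them
# for prediction. This is ordinary PLS regression, not the paper's estimator.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))                    # 20 predictor variables
y = X[:, :3] @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=200)

pls = PLSRegression(n_components=2).fit(X, y)     # keep only 2 factors
factors = pls.transform(X)                        # the retained linear combinations
y_hat = pls.predict(X).ravel()
print(factors.shape, np.corrcoef(y, y_hat)[0, 1])
```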
A New Dimension Reduction Approach for Data-Rich Marketing Environments: Sliced Inverse Regression
Journal of Marketing Research, 2000
In data-rich marketing environments (e.g., direct marketing or new product design), managers face an ever-growing need to reduce the number of variables effectively. To accomplish this goal, Professors Prasad Naik and Chih-Ling Tsai and co-author Michael Hagerty introduce a new method called sliced inverse regression (SIR), which finds factors by taking into account the information contained in both the dependent and independent variables.
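The sketch below outlines the textbook version of the SIR algorithm (standardize the predictors, slice on the response, average within slices, and eigen-decompose the covariance of the slice means); it is illustrative only and not the authors' exact implementation.

```python
# Compact sketch of the basic SIR algorithm; illustrative only.
import numpy as np

def sliced_inverse_regression(X, y, n_slices=10, n_factors=2):
    n, p = X.shape
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # Standardize the predictors: Z = (X - mean) Sigma^{-1/2}.
    eigval, eigvec = np.linalg.eigh(cov)
    inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    Z = (X - mean) @ inv_sqrt

    # Slice the observations by the order of y and average Z within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)

    # Leading eigenvectors of M give the dimension-reduction directions,
    # mapped back to the original X scale.
    vals, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, ::-1][:, :n_factors]

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 2 * X[:, 1]) ** 3 + 0.5 * rng.normal(size=500)
print(sliced_inverse_regression(X, y).shape)      # (8, 2): two estimated factors
```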
Controlling Measurement Errors in Models of Advertising Competition
Journal of Marketing Research, 2000
Commercial market research firms provide information on advertising variables of interest, such as brand awareness or gross rating points, that are likely to contain measurement errors. This unreliability of measured variables induces bias in the estimated parameters of dynamic models of advertising. Consequently, advertisers either under- or overspend on advertising to maintain a desired level of brand awareness.