This book by Professor Chih-Ling Tsai and co-author Allan D. R. McQuarrie from North Dakota State University describes procedures for selecting a model from a large set of competing statistical models.
In partially linear single-index models, Professor Chih-Ling Tsai and co-authors Hua Liang and Xiang Liu from the University of Rochester and Runze Li from Pennsylvania State University obtain the semiparametrically efficient profile least-squares estimators of regression coefficients. The authors also employ the smoothly clipped absolute deviation (SCAD) penalty approach to simultaneously select variables and estimate regression coefficients. The study shows that the resulting SCAD estimators are consistent and possess the oracle property.
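The SCAD penalty itself is a simple piecewise function. Below is a minimal numpy sketch of the penalty alone (the function name and the conventional default a = 3.7 are illustrative; this is not the authors' code, and the full estimator also requires a penalized optimization step):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty.

    Acts like the lasso penalty lam*|theta| near zero, then levels off
    so that large coefficients are not over-shrunk -- the mechanism
    behind the oracle property mentioned above.
    """
    t = np.abs(np.asarray(theta, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    big = t > a * lam
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = -(t[mid] ** 2 - 2 * a * lam * t[mid] + lam ** 2) / (2 * (a - 1))
    out[big] = (a + 1) * lam ** 2 / 2
    return out
```

For lam = 1 the penalty grows linearly up to |theta| = 1, bends between 1 and a = 3.7, and is flat (at 2.35) beyond, so very large coefficients incur no extra shrinkage.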
In this study, Professor Chih-Ling Tsai and co-authors Yiyun Zhang and Runze Li apply the nonconcave penalized likelihood approach to perform variable selection and obtain shrinkage estimators. This approach relies heavily on the choice of the regularization parameter, which controls model complexity.
In this study, Professor Chih-Ling Tsai and co-authors Ning Zhu from the Shanghai Advanced Institute of Finance and Ming-Chun Wang from National Chengchi University use a data set from market participants in the Taiwan Stock Exchange Capitalization Weighted Stock Index options markets to demonstrate a strong positive relationship between prior trading outcomes and subsequent risk taking. In particular, investors in this market take above-average risks in afternoon trading after morning gains.
This paper by Professors Prasad Naik, Chih-Ling Tsai and co-author Dan Weiss from Tel Aviv University proposes a new index to extract forward-looking information from security prices and infer market participants’ expectations of future earnings. The index, called market-adapted earnings (MAE), utilizes stock returns and fundamental accounting signals to estimate market expectations of future earnings at the firm level. MAE outperforms time-series models (e.g., random-walk) in predicting future earnings. Results demonstrate the usefulness of MAE for firms that have no analyst following.
In this paper, Professors Prasad Naik and Chih-Ling Tsai, with co-author Peide Shi from Nuclear Safety Solutions Ltd., examine the problem of jointly selecting the number of components and variables in finite mixture regression models.
In Markov-switching regression models, Professors Prasad Naik, Chih-Ling Tsai and co-author Aaron Smith from the UC Davis Department of Agricultural and Resource Economics use Kullback–Leibler (KL) divergence between the true and candidate models to select the number of states and variables simultaneously.
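The divergence at the heart of this approach can be illustrated in its simplest closed form, for two univariate normal densities. This is only the generic KL divergence, not the paper's Markov-switching criterion:

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form KL(p || q) for p = N(mu1, s1^2) and q = N(mu2, s2^2)."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5
```

The divergence is zero exactly when the two densities coincide and positive otherwise, which is what makes it a usable distance-like target for comparing a candidate model against the truth.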
Inverse regression methods facilitate dimension-reduction analyses of high-dimensional data by extracting a small number of factors that are linear combinations of the original predictor variables. But the estimated factors may not lend themselves readily to interpretation consistent with prior information.
While database marketers collect vast amounts of customer transaction data, putting those data to use to improve marketing decisions presents problems. Marketers seek to extract relevant information from large databases by identifying significant variables and prospective customers. In small databases, they could calibrate logistic regression models via maximum-likelihood methods to determine significant variables and assess a customer's response probability.
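The maximum-likelihood calibration in the small-database case is a standard Newton-Raphson (iteratively reweighted least squares) routine; a generic textbook sketch in numpy, with simulated illustrative data rather than real customer records:

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson (IRLS).

    X: (n, p) design matrix (include an intercept column).
    y: (n,) array of 0/1 responses.  Returns the MLE of beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted response probabilities
        grad = X.T @ (y - p)                  # score vector
        W = p * (1.0 - p)                     # IRLS weights
        hess = X.T @ (X * W[:, None])         # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Illustrative data: response probability driven by one customer variable.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = logit_mle(X, y)
```

With enough observations the fitted coefficients recover the true ones closely; the point of the paper is that this routine becomes impractical or unstable as the database and variable count grow.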
In this paper, Professors Prasad Naik and Chih-Ling Tsai derive a new model selection criterion for single-index models, AIC, by minimizing the expected Kullback–Leibler distance between the true and candidate models.
The proposed criterion selects not only relevant variables but also the smoothing parameter for an unknown link function. Thus, it is a general selection criterion that provides a unified approach to model selection across both parametric and nonparametric functions. Monte Carlo studies demonstrate that AIC performs satisfactorily in most situations.
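In the classical parametric setting, the KL-minimization idea reduces to the familiar AIC. The following numpy sketch shows that parametric special case only (the paper's criterion for single-index models additionally handles the smoothing parameter of the unknown link, which this sketch omits):

```python
import numpy as np

def aic_ols(X, y):
    """AIC, up to an additive constant, for a Gaussian OLS fit:
    n*log(RSS/n) + 2k, where k is the number of coefficients."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

# Illustrative data: y depends on x1 only.
rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)
aic_true = aic_ols(np.column_stack([np.ones(n), x1]), y)
aic_null = aic_ols(np.ones((n, 1)), y)   # omits the relevant predictor
```

The underfitted intercept-only model pays a large lack-of-fit term in n*log(RSS/n), so aic_true comes out far below aic_null and the criterion correctly favors the model containing x1.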
The partial least squares (PLS) approach first constructs new explanatory variables, known as factors (or components), which are linear combinations of available predictor variables. A small subset of these factors is then chosen and retained for prediction.
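The factor-construction step described above can be sketched with a generic NIPALS-style deflation scheme in numpy (a textbook illustration, not the authors' implementation):

```python
import numpy as np

def pls_factors(X, y, n_factors=2):
    """Extract PLS factors (scores) by NIPALS-style deflation.

    Each weight vector points along the covariance between the
    predictors and the response; X and y are deflated after each
    factor, so successive score vectors are orthogonal.
    Returns (T, W): scores (n, n_factors) and weights (p, n_factors).
    """
    Xc = X - X.mean(axis=0)                  # center predictors
    yc = y - y.mean()
    T = np.empty((X.shape[0], n_factors))
    W = np.empty((X.shape[1], n_factors))
    for j in range(n_factors):
        w = Xc.T @ yc                        # covariance direction
        w = w / np.linalg.norm(w)
        t = Xc @ w                           # factor scores
        p_load = Xc.T @ t / (t @ t)          # X loadings
        T[:, j], W[:, j] = t, w
        Xc = Xc - np.outer(t, p_load)        # deflate X
        yc = yc - t * (yc @ t) / (t @ t)     # deflate y
    return T, W
```

A small subset of the resulting score columns can then serve as the retained factors in a low-dimensional prediction equation.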
In data-rich marketing environments (e.g., direct marketing or new product design), managers face an ever-growing need to reduce the number of variables effectively. To accomplish this goal, Professors Prasad Naik and Chih-Ling Tsai and co-author Michael Hagerty introduce a new method called sliced inverse regression (SIR), which finds factors by taking into account the information contained in both the dependent and independent variables.
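The core SIR computation is short. Here is a generic numpy sketch of the standard algorithm (Li, 1991) with simulated illustrative data; it is not the authors' code, and real applications would also choose the number of slices and directions carefully:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: estimate effective dimension-reduction
    directions.

    Whiten X, slice the observations by the sorted response, average
    the whitened predictors within each slice, then eigen-decompose
    the weighted covariance of those slice means.
    """
    n, p = X.shape
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # cov^(-1/2)
    Z = (X - X.mean(axis=0)) @ inv_sqrt                # whitened predictors
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)                # slice mean of whitened X
        M += (len(s) / n) * np.outer(m, m)   # weighted outer product
    w, v = np.linalg.eigh(M)
    # Leading eigenvectors, mapped back to the original predictor scale.
    return inv_sqrt @ v[:, ::-1][:, :n_dirs]

# Illustrative data: y depends on a single linear combination of X.
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = X[:, 0] + 0.1 * rng.normal(size=2000)
b = sir_directions(X, y)[:, 0]
```

Because slicing uses the response to weight the predictor averages, the recovered direction b aligns with the true combination driving y, unlike purely unsupervised reductions such as principal components.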
Commercial market research firms provide information on advertising variables of interest, such as brand awareness or gross rating points, that are likely to contain measurement errors. This unreliability of measured variables induces bias in the estimated parameters of dynamic models of advertising. Consequently, advertisers either under- or overspend on advertising to maintain a desired level of brand awareness.
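The bias mechanism at work here is classical errors-in-variables attenuation: noise in the measured variable shrinks the estimated slope toward zero by the reliability ratio. A quick simulation with made-up, purely illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
x = rng.normal(size=n)                # true (unobserved) awareness driver
y = 2.0 * x + rng.normal(size=n)      # true slope is 2.0
x_obs = x + rng.normal(size=n)        # measured with unit error variance
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
# Reliability = var(x) / (var(x) + var(e)) = 0.5, so the estimated
# slope shrinks from 2.0 toward roughly 1.0.
```

A manager planning from the attenuated slope would conclude advertising is less effective than it is, which is one route to the under- or overspending described above.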
UPDATE: Andrew Barkett is leaving his post as senior engineer at Facebook to become the first-ever chief technology officer for the Republican National Committee, bringing a decade of Silicon Valley experience to the role. The June 4 announcement has stirred a whirlwind of media coverage, including the Huffington Post and Washington Post.
Agilent Technologies' Electronic Measurement Group is a $3.6 billion business that over the past decade has seen a dramatic shift in its customer base from U.S. and Western European customers to predominantly Asia-based customers. Today, the majority of the division's revenues are generated outside of the U.S., with an increasing concentration in China.
(Davis, CA) — The UC Davis Graduate School of Management’s full-time MBA program has been ranked among the top six percent of AACSB International-accredited programs nationwide, according to U.S. News & World Report’s latest graduate business school rankings released today.