Past MS Business Analytics Capstone Projects

Spring 2018

Sayali Dasharath Wavhal, Employee Attrition Prediction on Class-Imbalanced Data using Cost-Sensitive Classification, April 2018, (Yichen Qin, Dungang Liu)
Human resources are an organization's most valuable asset, and every organization aims to retain its valuable workforce. The main goal of an HR analytics department is to identify the employees who are likely to leave the organization in the future and to take action to retain them before they leave. This paper aims to identify the factors that lead to employee attrition and to build a classifier to predict it. The analysis addresses the class-imbalance classification problem by exploring the performance of several machine learning models: Logistic Regression, Classification Trees using Recursive Partitioning, Generalized Additive Models, and Gradient Boosting Machines. Because this is a highly imbalanced problem, with only 15% positives, "Accuracy" is not a suitable indicator of model performance. Thus, to avoid biasing the classifier toward the majority class, cost-sensitive classification was adopted to address misclassification of the minority class, with False Negatives given a higher penalty than False Positives. Model performance was evaluated on Sensitivity (Recall), Specificity, Precision, Misclassification Cost, and Area under the ROC Curve. The analysis suggests that although the recursive-partitioning and ensemble decision-tree techniques have good predictive power for the minority class, more stable prediction performance is observed with the Logistic Regression and Generalized Additive Models.
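
A minimal sketch (not the author's code) of the cost-sensitive step described above: a logistic attrition model whose classification cutoff is chosen to minimize an asymmetric misclassification cost. The data frame hr, its 0/1 Attrition column, and the 5:1 cost ratio are illustrative assumptions.

  library(pROC)

  fit  <- glm(Attrition ~ ., data = hr, family = binomial)  # Attrition: 1 = left, 0 = stayed
  prob <- predict(fit, type = "response")

  # False negatives (missed leavers) carry a higher penalty than false positives
  cost <- function(cutoff, y, p, c_fn = 5, c_fp = 1) {
    pred <- as.integer(p > cutoff)
    c_fn * sum(y == 1 & pred == 0) + c_fp * sum(y == 0 & pred == 1)
  }

  cutoffs <- seq(0.05, 0.95, by = 0.01)
  costs   <- sapply(cutoffs, cost, y = hr$Attrition, p = prob)
  best    <- cutoffs[which.min(costs)]  # cost-minimizing classification threshold
  auc(roc(hr$Attrition, prob))          # area under the ROC curve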

Yong Han, Whose Votes Changed the Presidential Elections?, April 2018, (Dungang Liu, Liwei Chen)
The unique aspect of the YouGov / CCAP data was that it contained information on the 2008 to 2016 elections from the same group of 8,000 voters, which may provide insight into voting patterns between elections.

The goals of this study were to answer the following: Was any predictor significant for the 2012 and 2016 presidential votes? Was it consistent between elections? Was any predictor significant for the change of vote between two elections? Was it consistent? Based on exploratory data analysis, 70% of voters never changed their votes and 20% changed at least once in the last three elections; was any predictor significantly associated with this behavior?

Using the VGLM method, this study found the following. In single elections, some common predictors were significant, such as Gender, Child, Education, Age, Race, and Marital status; at the same time, different elections had different significant predictors. For vote change between two elections, the significant predictors differed between election pairs: between the 2012 and 2016 elections the model suggested that Education, Income, and Race were significant to vote change, while between 2008 and 2012 the model suggested that Child and Employment status were significant. With the 2016 election data, the never-change-vote model found that Income, Age, Ideology, News, and Marital status were significant to never changing one's vote. Individual election models could predict about 60% of votes in the testing samples; utilizing a previous vote as a predictor, models could predict about 89% of votes. The never-change-vote model predicted well on the 70% of voters who never changed their vote, but missed almost all of the 20% who did change their vote.
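
For illustration only, a sketch of fitting a single-election vector generalized linear model with the VGAM package; the data frame ccap, the outcome vote2016, and the predictor names are hypothetical stand-ins for the CCAP variables.

  library(VGAM)

  fit <- vglm(vote2016 ~ gender + age + education + race + income + marital,
              family = multinomial(refLevel = 1), data = ccap)
  summary(fit)                                # significance of each predictor

  probs <- predict(fit, newdata = ccap, type = "response")
  pred  <- colnames(probs)[max.col(probs)]    # most probable candidate per voter
  mean(pred == ccap$vote2016)                 # share of votes predicted correctly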

Yanhui Chen, Binning on Continuous Variables and Comparison of Different Credit Scoring Techniques, April 2018, (Peng Wang, Yichen Qin)
Binning is a widely used method for grouping a continuous variable into categories. In this project, I binned the continuous variables amount, duration, and age in the German credit data and compared five models: a logistic model using the binned variables, a logistic model on the raw variables, a logistic additive model on the raw variables, a random forest, and gradient boosting. I found that the logistic model with binned variables performed the weakest of the five fitted models. I also showed that variable importance varied across models: the variable checkingstatus was selected as one of the most important variables in most of the models, the binned duration and amount were important in the logistic model with binning, and random forest was the only model that selected the variable history as important.
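
A rough sketch of the binning step and the binned-versus-raw logistic comparison, assuming the German credit data sit in a data frame german with a 0/1 response default; the break points shown are placeholders, not the cut points used in the project.

  german$duration_bin <- cut(german$duration, breaks = c(0, 12, 24, 36, Inf))
  german$amount_bin   <- cut(german$amount,
                             breaks = quantile(german$amount, probs = seq(0, 1, 0.25)),
                             include.lowest = TRUE)
  german$age_bin      <- cut(german$age, breaks = c(18, 25, 35, 50, Inf))

  # Logistic model on binned predictors versus the raw continuous versions
  fit_bin <- glm(default ~ duration_bin + amount_bin + age_bin + checkingstatus,
                 data = german, family = binomial)
  fit_raw <- glm(default ~ duration + amount + age + checkingstatus,
                 data = german, family = binomial)
  AIC(fit_bin, fit_raw)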

Jamie H. Wilson, Fine Tuning Neural Networks in R, April 2018, (Yan Yu, Edward Winkofsky)
As artificial neural networks grow in popularity, it is important to understand how they work and the layers of options that go into building one. The fundamental components of a neural network are the activation function, the error measure, and the method of backpropagation. These components make neural networks good at finding complex nonlinear relationships between predictor and response variables, as well as interactions among predictors. However, neural networks are difficult to explain, can be computationally expensive, and tend to overfit the data. There are two primary R packages for neural networks: nnet and neuralnet. The nnet package has fewer tuning options but can handle both unstandardized and standardized data; the neuralnet package has a myriad of options but handles only standardized data. When building a predictive model using the Boston Housing data, both packages are capable of producing effective models, and tuning the models is important for valid and robust results. Given the larger number of tuning parameters in neuralnet, its models perform better than the models built with nnet.
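
A minimal sketch of fitting one network with each package on the Boston housing data (from the MASS package); the hidden-layer sizes, decay, and iteration counts are placeholders rather than the tuned settings from the paper.

  library(MASS)       # Boston housing data
  library(nnet)
  library(neuralnet)

  # nnet accepts the data as-is (scaling is still advisable)
  fit_nnet <- nnet(medv ~ ., data = Boston, size = 5, decay = 0.01,
                   linout = TRUE, maxit = 500)

  # neuralnet requires numeric, standardized inputs
  sc  <- as.data.frame(scale(Boston))
  fml <- as.formula(paste("medv ~", paste(setdiff(names(sc), "medv"), collapse = " + ")))
  fit_nn <- neuralnet(fml, data = sc, hidden = c(5, 3),
                      act.fct = "logistic", linear.output = TRUE)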

Kenton Asbrock, The Price to Process: A Study of Recent Trends in Consumer-Based Processing Power and Pricing, April 2018, (Uday Rao, Jordan Crabbe)
This analysis investigates the effects of the deceleration of the observational Moore's Law on consumer-based central processing units. Moore's Law states that the number of transistors in a densely integrated circuit approximately doubles every two years. The study involved a dataset containing information about 2,241 processors released by Intel between 2001 and 2017, the approximate time frame associated with the decline of Moore's Law. Data wrangling and pre-processing were performed in R to clean the data and convert it to a state ready for analysis. Data were then aggregated by platform to study the evolution of processing across desktops, servers, embedded devices, and mobile devices. Formal time series procedures were then applied to the entire dataset to study how processing speed and price have changed recently and how future forecasts are expected to behave. It was determined that while processing speeds are in a period of stagnation, the price paid for computational power has been decreasing and is expected to continue decreasing. While the decline of Moore's Law may affect a small fraction of the market through speed stagnation, the overall price decrease of processing performance will benefit the average consumer.
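
A sketch of the kind of time series step described above, assuming the cleaned processor data sit in a data frame cpu with hypothetical year and price columns.

  library(forecast)

  price_by_year <- aggregate(price ~ year, data = cpu, FUN = median)
  price_ts <- ts(price_by_year$price, start = 2001, frequency = 1)

  fit <- auto.arima(price_ts)     # let auto.arima choose the ARIMA order
  plot(forecast(fit, h = 5))      # five-year-ahead forecast of median price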

Hongyan Ma, A Return Analysis for S&P 500, April 2018, (Yan Yu, Liwei Chen)
Time series analysis is commonly used to analyze and forecast economic data. It helps to identify patterns, to understand and model the data, and to predict short-term trends. The primary purpose of this paper is to study moving-window analysis and GARCH models of the monthly returns of the S&P 500 over the most recent 50 years, from January 1968 to December 2017.

In this paper, we first studied the raw data to check its patterns and distributions, and then analyzed the monthly returns in different time windows, namely 10-year, 20-year, 30-year, and 40-year windows, using moving-window analysis. We found that over a long horizon the S&P 500 has produced significant returns for investors who stayed invested. However, over a given 10-year period the return can even be negative. Finally, we fitted several forms of GARCH models with normal as well as Student-t distributions and found the GARCH(1,1) Student-t model to be the best in terms of Akaike's Information Criterion and log-likelihood.
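
A minimal sketch of the selected specification, a GARCH(1,1) with Student-t innovations, using the rugarch package; ret is assumed to hold the monthly S&P 500 returns.

  library(rugarch)

  spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                     mean.model     = list(armaOrder = c(0, 0)),
                     distribution.model = "std")   # Student-t innovations
  fit <- ugarchfit(spec, data = ret)

  infocriteria(fit)   # Akaike and other information criteria for model comparison
  likelihood(fit)     # log-likelihood used in model selection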

Justin Jodrey, Predictive Models for United States County Poverty Rates and Presidential Candidate Winners, April 2018, (Yan Yu, Bradley Boehmke)
The U.S. Census Bureau administers the American Community Survey (ACS), an annual survey that collects data on various demographic factors. Using a Kaggle dataset that aggregates data at the United States county level and joining other ACS tables to it from the U.S. FactFinder website, this paper analyzes two types of predictive models: regression models to predict a county’s poverty rate and classification models to predict a county’s 2016 general election presidential candidate winner. In both the regression and classification settings, a generalized additive model best predicted county poverty rates and county presidential winners.
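
An illustrative sketch of generalized additive models for the two prediction tasks using the mgcv package; the county-level predictor names below are hypothetical, not the ACS variables actually used in the paper.

  library(mgcv)

  # Regression: county poverty rate
  gam_pov <- gam(poverty_rate ~ s(median_income) + s(unemployment) + s(pct_bachelors),
                 data = counties)

  # Classification: 2016 county winner (1 = Republican, 0 = Democrat)
  gam_win <- gam(gop_win ~ s(median_income) + s(pct_white) + s(pct_bachelors),
                 data = counties, family = binomial)

  summary(gam_pov); summary(gam_win)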

Trent Thompson, Cincinnati Reds – Concessions and Merchandise Analysis, April 2018, (Yan Yu, Chris Calo)
Concession and merchandise sales account for a substantial percentage of revenue for the Cincinnati Reds. Thoroughly analyzing the data captured from these sales can help the Reds with pricing, inventory management, planning, and product bundling. The scope of this analysis includes general exploratory data analysis, identifying key trends in sales, and analyzing common order patterns. One major outcome of this analysis was the calculation of 95% confidence intervals for concession and merchandise sales, which supports more efficient inventory management. Another finding is that fans generally buy their main food items (hot dog, burger, pizza) before the game and then buy beverages, desserts, and snacks during the game. Finally, strong order associations exist between koozies and light beer, and between bratwursts, beverages, and peanuts. I recommend displaying the koozies over the refrigerator with the light beers and bundling bratwursts in a similar manner to the current hot dog bundle, with the aim of driving a lift in sales.
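
Order associations of this kind are typically mined with market-basket analysis; a hedged sketch using the arules package follows, assuming orders is a list of item vectors (one per transaction) and using arbitrary support and confidence thresholds.

  library(arules)

  trans <- as(orders, "transactions")
  rules <- apriori(trans, parameter = list(supp = 0.005, conf = 0.4, minlen = 2))
  inspect(head(sort(rules, by = "lift"), 10))   # e.g., {koozie} => {light beer}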

Xi Chen, Decomposing Residential Monthly Electric Utility into Cooling Energy Use by Different Machine Learning Techniques, April 2018, (Peng Wang, Yan Yu)
Today the residential sector consumes about 38% of the energy produced, of which nearly half is consumed by HVAC systems. One of the main energy-related problems is that most households do not operate in an energy-efficient manner, for example by utilizing natural ventilation or adjusting the thermostat to weather conditions, and thus use more energy than necessary. It has been reported that energy-saving behaviors may lead to a 25% reduction in energy use simply by giving consumers a more detailed electricity bill, holding the building settings constant. Therefore, the scope of this project is to construct a monthly HVAC energy-use predictive model with simple and accessible predictors for the home. The dataset used in this project includes weather, metadata, and hourly electricity-usage data downloaded from the Pecan Street data port; the final dataset contains 3,698 observations and 11 variables. Multiple linear regression, regression trees, random forest, and gradient boosting are the four machine learning techniques applied to predict monthly HVAC cooling use. Root Mean Squared Error (RMSE) and adjusted R2 are the two criteria adopted to evaluate model fit. All models are highly predictive, with R2 ranging from 0.823 to 0.885. The gradient boosting model has the best overall predictive quality, with an out-of-sample RMSE of 0.57.
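
A minimal gbm sketch for the monthly cooling-use model; the data frame names, response column, and tuning values are illustrative assumptions, not those from the project.

  library(gbm)

  fit <- gbm(cooling_kwh ~ ., data = hvac_train, distribution = "gaussian",
             n.trees = 2000, interaction.depth = 3, shrinkage = 0.01, cv.folds = 5)
  best_iter <- gbm.perf(fit, method = "cv")               # trees chosen by cross-validation

  pred <- predict(fit, newdata = hvac_test, n.trees = best_iter)
  sqrt(mean((hvac_test$cooling_kwh - pred)^2))            # out-of-sample RMSE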

Fan Yang, Breast Cancer Diagnosis Analysis, April 2018, (Yichen Qin, Dungang Liu)
The dataset studied in this paper characterizes breast cancer tissue, which is either benign or malignant, through measurements of its dimensions summarized as the mean, standard error, and worst value. Our target is to recognize malignant tissue from these measurements. This paper includes a feature-selection step based on correlation analysis and data visualization. After eliminating correlated and visually uninformative features, logistic regression, random forest, and XGBoost models are fit and evaluated on the training and validation data. Ten-fold cross-validation is used to estimate the performance of all models; the prediction accuracy of the different models is compared, and the area under the ROC curve is used to evaluate model performance on the validation data.
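
A sketch of a 10-fold cross-validated comparison of the three classifiers with the caret package; wdbc is assumed to hold the diagnosis factor (levels "B" and "M") plus the selected numeric features.

  library(caret)

  ctrl <- trainControl(method = "cv", number = 10,
                       classProbs = TRUE, summaryFunction = twoClassSummary)

  fit_glm <- train(diagnosis ~ ., data = wdbc, method = "glm",     metric = "ROC", trControl = ctrl)
  fit_rf  <- train(diagnosis ~ ., data = wdbc, method = "rf",      metric = "ROC", trControl = ctrl)
  fit_xgb <- train(diagnosis ~ ., data = wdbc, method = "xgbTree", metric = "ROC", trControl = ctrl)

  summary(resamples(list(logistic = fit_glm, rf = fit_rf, xgb = fit_xgb)))   # cross-validated AUC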

Sinduja Parthasarathy, Income Level Prediction using Machine Learning Techniques, April 2018, (Yichen Qin, Dungang Liu)
Income is an essential component in determining the economic status and standard of living of an individual. An individual's income also contributes to the nation's GDP and financial growth, and knowing one's income can assist with financial budgeting and tax return calculations. Given the importance of knowing an individual's income, the US Census data from the UCI Machine Learning Repository was explored in detail to identify the factors that contribute to a person's income level. Furthermore, machine learning techniques such as Logistic regression, Classification tree, Random forests, and Support Vector Machine were used to predict the income level and subsequently identify the model that most accurately predicted the income level of an individual.

Relationship status, Capital gain and loss, Hours worked per week and Race of an individual were found to be the most important factors in predicting the income level of an individual. Of the different classification techniques that were built and tested for performance, the logistic regression model was found to be the best performing, with the highest accuracy of 84.63% in predicting the income level of an individual.
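
A hedged sketch of the model comparison on the UCI Adult (census income) data; the train/test data frames and the two-level income factor are assumptions, and the settings shown are illustrative defaults rather than the tuned models from the paper.

  library(rpart); library(randomForest); library(e1071)

  fit_log  <- glm(income ~ ., data = train, family = binomial)
  fit_tree <- rpart(income ~ ., data = train, method = "class")
  fit_rf   <- randomForest(income ~ ., data = train)
  fit_svm  <- svm(income ~ ., data = train, kernel = "radial")

  # Accuracy on the held-out set (glm models the probability of the second factor level)
  pred_log <- ifelse(predict(fit_log, test, type = "response") > 0.5,
                     levels(train$income)[2], levels(train$income)[1])
  mean(pred_log == test$income)
  mean(predict(fit_rf, test) == test$income)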

Jessica Blanchard, Predictive Analysis of Residential Building Heating and Cooling Loads for Energy Efficiency, March 2018, (Peng Wang, Dungang Liu)
This study's focus is to predict the required heating load and cooling load of a residential building through multiple regression techniques. Prediction accuracy is tested with in-sample, out-of-sample, and cross-validation procedures. A dataset of 768 observations, eight potential predictor variables, and two dependent variables (heating and cooling load) is explored to help architects and contractors predict the necessary air supply demand and thus design more energy-efficient homes. Exploratory data analysis uncovered not only relationships between the explanatory and dependent variables, but relationships among the explanatory variables as well. To create a model with accurate predictions, the following regression techniques were examined and compared: Multiple Linear Regression, Stepwise, LASSO, Ridge, Elastic-Net, and Gradient Boosting. While each method has its advantages and disadvantages, the models created using LASSO regression to predict heating and cooling load balance simplicity and accuracy relatively well. However, when compared against Gradient Boosting, the LASSO models produced a greater root mean squared error. Overall, the regression trees created with Gradient Boosting, with parameter tuning to control overfitting, yielded the best predictive results. These models meet the purpose of this study: to provide residential architects and contractors a straightforward model with greater accuracy than the current "rules of thumb" practice.
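
A minimal glmnet sketch of the LASSO fit for one response (heating load); X, y, and the test objects are assumed to be the numeric predictor matrix and load vectors split as in the study.

  library(glmnet)

  cv_lasso <- cv.glmnet(X, y, alpha = 1)        # alpha = 1 selects the LASSO penalty
  coef(cv_lasso, s = "lambda.min")              # coefficients retained at the best lambda

  pred <- predict(cv_lasso, newx = X_test, s = "lambda.min")
  sqrt(mean((y_test - pred)^2))                 # RMSE to compare against gradient boosting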