Past MS Business Analytics Capstone Projects

 

Summer 2017

Anjali Chappidi, Un-Crewed Aircraft Analysis & Maintenance Report Analysis, August 2017, (Michael Fry, Jayvanth Ishwaran)
This internship comprised two projects: analysis of crew data using SAS and analysis of aircraft maintenance reports using text mining in R. The first project identifies and analyzes how different factors affected the crew ratio on different fleets. The goal of the second project is to study the maintenance logs, which consist of the work order descriptions and work order actions for aircraft reported to undergo maintenance.

Vijay Katta, A Study of Convolutional Neural Networks, August 2017, (Yan Yu, Edward Winkofsky)
The advent of Convolutional Neural Networks has drastically improved the accuracy of image processing. Convolutional Neural Networks (CNNs) are presently the crux of deep learning applications in computer vision. The purpose of this capstone is to investigate the basic concepts of Convolutional Neural Networks in a stepwise manner and to build a simple CNN model to classify images. The study involves understanding the concepts behind the different layers in a CNN, studying different CNN architectures, understanding the training algorithms of CNNs, studying the applications of CNNs, and applying a CNN to image classification. A simple image classification model was designed on a dataset containing 70,000 images of handwritten digits. The accuracy of the best model was 98.74%. From the study, it is concluded that a highly accurate image classification model is achievable in a few minutes, provided the dataset has fewer than 0.1 million observations.
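As a rough illustration of the kind of digit classifier described above, the sketch below uses the keras package for R (assuming a TensorFlow backend is installed). The layer sizes, epochs, and use of the MNIST loader are illustrative assumptions, not the exact architecture from the study.

```r
library(keras)

# Load a 70,000-image handwritten-digit dataset (60,000 train / 10,000 test)
mnist <- dataset_mnist()
x_train <- array_reshape(mnist$train$x / 255, c(60000, 28, 28, 1))
y_train <- to_categorical(mnist$train$y, 10)
x_test  <- array_reshape(mnist$test$x / 255, c(10000, 28, 28, 1))
y_test  <- to_categorical(mnist$test$y, 10)

# A small convolution -> pooling -> dense stack
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(loss = "categorical_crossentropy",
                  optimizer = "adam", metrics = "accuracy")

model %>% fit(x_train, y_train, epochs = 5, batch_size = 128,
              validation_split = 0.1)
model %>% evaluate(x_test, y_test)   # held-out test accuracy
```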

Yan Jiang, Selection of Genetic Markers to Predict Survival Time of Glioblastoma Patients, August 2017, (Peng Wang, Liwei Chen)
Glioblastoma multiforme (GBM) is the most aggressive primary brain tumor, with survival times of less than 3 months in more than 50% of patients. Gene analysis is considered a feasible approach for the prediction of a patient's survival time. Advanced gene sequencing techniques normally produce large amounts of genetic data that contain important information for the prognosis of GBM. An efficient method is urgently needed to extract key information from these data for clinical decision making. The purpose of this study is to develop a new statistical approach to select genetic markers for the prediction of GBM patients' survival time. The new method, named the Cluster-LASSO linear regression model, was developed by combining nonparametric clustering and LASSO linear regression. Compared to the original LASSO model, the new Cluster-LASSO model simplifies the model by 67.8%, selecting 19 predictor variables after clustering instead of the 59 predictor variables in the LASSO model. The predictor genes selected for the Cluster-LASSO model are ZNF208, GPRASP1, CHI3L1, RPL36A, GAP43, CLCN2, SERPINA3, SNX10, REEP2, GUCA1B, PPCS, HCRTR2, BCL2A1, MAGEC1, SIRT3, GPC1, RNASE2, LSR and ZNF135. In addition, the Cluster-LASSO model surpasses the out-of-sample performance of the LASSO model by 1.89%. Among the 19 genes selected in the Cluster-LASSO model, the positively associated HCRTR2 gene and the negatively associated GAP43 are especially interesting and worthy of further study. A further study to confirm their relationship to the survival time of GBM and the possible mechanism would contribute tremendously to the understanding of GBM.
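The cluster-then-select idea can be illustrated with a generic two-step sketch in R: group correlated genes, form one predictor per cluster, then fit a cross-validated LASSO with glmnet. The cluster count, the use of cluster means, and the object names are illustrative assumptions, not the authors' exact Cluster-LASSO procedure.

```r
library(glmnet)

# X: matrix of gene expression (patients x genes); y: survival time
# Step 1: group highly correlated genes with hierarchical clustering
gene_dist <- as.dist(1 - abs(cor(X)))
clusters  <- cutree(hclust(gene_dist, method = "average"), k = 50)  # k is illustrative

# Use the per-cluster mean expression as the new predictors
X_clustered <- sapply(sort(unique(clusters)), function(cl)
  rowMeans(X[, clusters == cl, drop = FALSE]))

# Step 2: cross-validated LASSO on the cluster-level predictors
cv_fit <- cv.glmnet(X_clustered, y, alpha = 1)
coef(cv_fit, s = "lambda.min")   # nonzero rows are the selected clusters
```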

Jing Gao, Patient Satisfaction Rating Prediction Based on Multiple Models, August 2017, (Peng Wang, Liwei Chen)
With the development of the economy and technology, online health consultation provides a convenient platform that enables patients to seek advice and treatment quickly and efficiently, especially in China. Due to the large population density, physicians may need to see hundreds of patients every day at the hospital, which is very time-consuming for patients, so it is no wonder that online health consultation has grown so rapidly in recent years. Because healthcare services relate to mortality and quality of life, online healthcare services and patient satisfaction are essential to keeping this industry running safely and efficiently. In this project, we focus on patient satisfaction. We integrate three levels of data (physician level, hospital level and patient level) into one dataset and build multiple predictive models in order to learn which independent variables have significant effects on the patient satisfaction rate, as well as to check the precision of the models by comparison. This paper verifies that physicians' degrees of participation in the online healthcare consultation system, as well as the hospital's support, significantly affect patient satisfaction, especially interactive activity such as total web visits, thank-you letters, etc.

Jasmine Ding, Comparison Study of Common Methods in Credit Data Analysis, August 2017, (Peng Wang, Dungang Liu)
Default risk is an integral part of risk management at financial institutions. Banks allocate a significant amount of resources to developing and maintaining credit risk models. Binning is a method commonly used in banking to analyze consumer data to determine whether a borrower would qualify for a bankcard or a loan. The practice requires that numeric variables be categorized into discrete bins for further analysis based on certain cutoff values. The approach for grouping observations can vary from equal bin size to equal range depending on the situation. Binning is popular because of its ability to identify outliers and handle missing values. This project explores the basic methods that are commonly used for credit risk modelling, including simple logistic regression, logistic regression with binned variable transformation, and generalized additive models. After developing each model, a misclassification rate is calculated to compare model performance. In this study, the credit model based on binned variables did not produce the best results; both the generalized additive model and the random forest performed better. In addition, the project proposes other methods that can be used to improve credit model performance when working with similar datasets.

Sneha Peddireddy, Opportunity Sizing of Final Value Fee Credits, August 2017, (Michael Fry, Varun Vashishtha)
The e-commerce company allows customers to “commit to purchase” an item and charges the seller a fee (commission for the sale) when this happens. If the actual purchase does not happen for any reason, the seller has to be refunded the fee amount as a credit. There are multiple reasons why a transaction might not be completed after “commit to purchase”. There are also cases where a transaction is taken off the website because of a mutual agreement between buyer and seller, which results in a loss of revenue for the company. The current project involves identifying the key reasons for incomplete transactions and sizing the opportunity to minimize credit payments for off-platform transactions.

Krishna Teja Jagarlapudi, Solar Cell Power Prediction, August 2017, (Michael Fry, Augusto Sellhorn)
The rated power output of a solar cell is estimated through experimental measurements and theoretical calculations. However, it is difficult to obtain reliable predictions of the power output under varying weather conditions. With the advent of the Internet of Things, it is possible to record the exact power output from a solar cell over time. These data, along with weather information, can be used to build predictive models. In this project, a neural network model and a random forest model are built. The performance of the two models is compared using 10-fold cross-validation, based on mean absolute error and adjusted R-squared. The random forest is seen to perform better than the neural network.
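A hedged sketch of that comparison is given below, assuming a data frame named solar with a numeric power column and weather predictors; the fold construction, model settings, and column names are illustrative assumptions rather than the project's actual code.

```r
library(randomForest)
library(nnet)

set.seed(42)
folds <- sample(rep(1:10, length.out = nrow(solar)))   # 10-fold assignment
mae <- function(actual, pred) mean(abs(actual - pred))

rf_mae <- nn_mae <- numeric(10)
for (k in 1:10) {
  train <- solar[folds != k, ]
  test  <- solar[folds == k, ]

  rf <- randomForest(power ~ ., data = train, ntree = 500)
  nn <- nnet(power ~ ., data = train, size = 10, linout = TRUE,
             decay = 0.01, maxit = 500, trace = FALSE)

  rf_mae[k] <- mae(test$power, predict(rf, test))
  nn_mae[k] <- mae(test$power, predict(nn, test))
}
c(random_forest = mean(rf_mae), neural_net = mean(nn_mae))   # cross-validated MAE
```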

Mansi Verma, 84.51° Capstone Project, August 2017, (Michael Fry, Mayuresh Gaikwad)
84.51° is the analytics wing of Kroger, which aims to make people's lives easier by achieving real customer understanding. It brings together customer data, analytics, and business and marketing strategies for more than 15 million loyal Kroger customers. It also collaborates with 300 CPG (consumer packaged goods) clients by driving awareness, trial, sales uplift, earned media impressions and, ultimately, customer loyalty. Using the latest tools, technology and statistical techniques, 84.51° works to produce insight on customer behavior from their spend data at the stores to inform business decisions. All of the company's goals center on the customer, not only on profit. Targeting the right customers is not an easy job. The objective of customer targeting is to reach the right customer base and to know when to target them and with what. The right kind of targeting not only drives sales but also saves business resources and maximizes profit. Kroger provides coupons through many channels: tills at the time of billing, emails, the website, the mobile app, and direct mails sent to the best customers. This project discusses a model for best-customer targeting for a direct mail campaign for a beauty CPG client's newly launched product.

Shengfang Sun, Human Activity Classification Using Machine Learning Techniques, August 2017, (Yichen Qin, Liwei Chen)
In this work, machine-learning algorithms are developed to classify human activities from wearable sensor data. The sensor data were collected from 10 subjects with diverse profiles while performing a predefined set of physical activities. Three activity classifiers using the sensor metrics were trained and tested: random forest, naïve Bayes and neural network. The performance of these classifiers was scored by leave-one-subject-out cross-validation. The results show that the neural network performs best, with an accuracy rate of 85%. A closer look at the aggregated confusion matrix suggests that most activities of a new subject can be predicted well by the pre-trained neural network classifier, although some activities appear to be very subject-sensitive and may require subject-specific training.

Sakshi Lohana, Market Basket Analysis of Instacart Buyers, August 2017, (Peng Wang, Uday Rao)
Market Basket Analysis is a modelling technique used to determine the unique buying behavior of customers. It can be used to formulate strategies to increase sales by suggesting to customers what to buy next and providing promotions on relevant products of their choice. Through this project, Market Basket Analysis and association rules are explored using a dataset available on Kaggle.com. This dataset contains transactions by various users of an e-commerce website known as Instacart. After careful analysis, it is found that items of daily use such as fruits, milk, and sparkling water are ordered the most. The proportion of reordered products is as high as 46%, and hence customers can be encouraged to buy the same product again if they are satisfied with the buying experience the first time. There are high levels of association among different yogurts, pet foods and organic items. A person buying organic cilantro is most likely to also buy organic limes.
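A minimal sketch of this kind of association rule mining in R uses the arules package; the file name, column names, and support/confidence thresholds below are illustrative assumptions, not the project's actual settings.

```r
library(arules)

# Read order-product pairs and build a transactions object
# (assumes a CSV with columns order_id and product_name)
orders <- read.csv("order_products.csv")
trans  <- as(split(orders$product_name, orders$order_id), "transactions")

# Mine association rules; thresholds are illustrative
rules <- apriori(trans,
                 parameter = list(supp = 0.001, conf = 0.3, minlen = 2))

# Inspect the strongest associations, e.g. organic cilantro -> organic limes
inspect(head(sort(rules, by = "lift"), 10))
```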

Sahil Thapar, Predicting House Sale Price, August 2017, (Dungang Liu, Liwei Chen)
Over recent years we have seen that house prices can be an important indicator of the state of a country's economy. In this project, we employ machine learning techniques to predict the final sale price of a house based on a range of its features. A house can be the single biggest investment an individual makes in a lifetime, and a sound statistical model can help the customer get a fair valuation of the house, both at the time of purchase and at the time of sale. The final house prices are a continuous variable and are predicted using linear regression. As part of this project, regularization was performed to achieve simpler predictive models.

Pradeep Mathiyazagan, Website Duration Model, August 2017, (Yichen Qin, Yan Yu)
This capstone project is a natural extension of the Graduate Case Study that I worked on in the Spring Semester of 2017 as part of the Business Analytics program at the University of Cincinnati. It explores a bag-of-words model with user browsing data from the website of a local TV news station in Las Vegas owned by EW Scripps. The original Graduate Case Study did not afford us the time to explore a bag-of-words model, as it involved a fairly large amount of web scraping. Another worthwhile piece of information I hope to include in this model is the number of media elements present on a webpage in the form of tweets, pictures and videos, in order to analyze their impact on user engagement. Through this, we hope to identify pertinent information that results in better user engagement, which would ultimately result in increased advertising revenue.

Rajul Chhajer, Forecasting Stock Reorder Point for Smart Bins, July 2017, (Michael Fry, John E. Laws)
Forecasting the reorder point plays an important role in efficiently managing inventories. The reorder point is essentially the right time to order stock, considering the lead time to get the stock from the supplier and the safety stock available. It is difficult to determine the replenishment point if the sales information and lead time are unknown. In this study, historical reorder trends were observed at the product level for forecasting. Apex Actylus™ smart bins have the ability to reorder stock automatically based on the inventory level, and they store information on all past orders. The past reorders helped in understanding the velocity of a product present in a bin, and a moving average technique was then used at the product level to predict the next replenishment. The reorder point prediction would reduce the frequency of ordering and would help floor managers make better reorder plans.

Wei Yue, Analysis of Students’ Dining Survey, July 2017, (Peng Wang, Yinghao Zhang)
The goal of this project is to explore the factors that influence the customer experience the most under the designed circumstance. To achieve this objective, regression models were built to represent the relationship between customer experience and customers' basic information. The results of model building showed that the customer experience is not directly related to all the information provided by the survey. The survey responses were supplied by randomly selected students from a university in Guangdong Province, China. The purpose of the survey is to help restaurant management better understand which dishes are more popular among students and, more importantly, whether there are connections between dish ordering patterns and different students.

First, the students' basic information was collected and categorized, such as gender, major, and frequency of dining out. Participants were then asked to pick 5 dishes from eight categories of dishes on the menu, with two in each category (16 in total), as if they were dining in; one of the five dishes was then randomly selected to be out of stock. Under this circumstance, participants needed to pick another dish to replace it. The customer experience was then surveyed for analysis. The total number of participants was 98.

Akash Gupta, Customer Segmentation and Post Campaign Analysis, July 2017, (Michael Fry, Naga Ramachandran)
A marketing campaign is a focused, tactical initiative to achieve a specific marketing goal.

Marketing activities require careful planning so that every step of the process is understood before launch. Because a marketing campaign is tactical and project based, the process needs to be mapped out from the initial promotional intent to the ultimate outcome. Based on that purpose, specific goals and metrics, or key performance indicators (KPIs), need to be set to determine how the campaign is performing against the goal; these are also helpful when creating or refining marketing strategies. It is important to trace marketing activities to results. Results are determined by the goals set for the campaign, but in most cases they are measured in terms of sales or qualified leads and, eventually, applications.

Palash Siddamsettiwar, Internship at Tredence Analytics, July 2017, (Michael Fry, Sumit Mehra)
During my internship at Tredence Analytics, I worked as an analytics consultant to one of the biggest plumbing, HVAC&R and fire protection distributors in the United States, with more than $13 billion of yearly revenue. I was involved in building analytics capabilities in various divisions, including supply chain, operations and products. My primary project involved working with warehouse managers and the head of data to understand how to cut down shipping costs to customers by optimizing modes of shipment and timing of delivery, thus cutting down fixed and variable costs. By providing cost estimates for the available options, sales representatives and dispatchers would be able to make data-driven decisions rather than instinct-based ones.

My secondary project involved working with the products team and the e-commerce team to help them categorize their products using machine learning techniques. With more than 3 million SKUs involved, and more than 2 million of these still unclassified, the current pace and accuracy of classifying these products was not sustainable. Using machine learning would help these two teams significantly reduce the effort, time and money needed to classify the products and check the classifications. Both projects involve creating a long-term, automated and real-time solution that will be integrated into the company's IT systems to help people make quicker and more efficient decisions.

Jordan Adams, Forecasting Process for the U.S. Medical Device Markets, July 2017, (Yan Yu, Chris Dickerson)
The goal of this capstone is to build a forecasting process and model for Company X to forecast US medical device market sales and share for Company X and all competitors. The forecasting process will be built using two data analytics tools to handle data management, data modeling, data visualization, and statistical analysis. The forecast process for the medical device market will involve conducting a baseline forecast using an array of time series forecasting methodologies and adjusting the forecasts based on economic trends, competitive intelligence, market insights, and organizational strategies. The forecaster will have the flexibility to choose among many differing forecasts to select the model that they feel has the best predictive power, and the ability to cleanly visualize and explore each forecast in depth.

Aditya Singh, Churn Model, July 2017, (Michael Fry, Evan Cox)
The client is a cosmetics company based in New York City. The company has close to 9,000 members globally, both men and women, from over 2,250 companies in beauty-related industries. The primary reasons for becoming a member are as follows:

  1. Networking with other people in the beauty industry
  2. Finding a career in the beauty industry
  3. Learning more about the latest trends in the beauty industry
  4. Getting a product/company recognized at an awards event hosted by the company

A large percentage of members churn after just one year of subscription. The goal is to identify patterns among the members who are likely to churn and eventually predict when a member is going to churn. A significant amount of time was spent setting up the dataset before the modeling process. After data cleaning and manipulation, I built a logistic regression model that predicts whether a member is going to churn.

Catherine Cronk, A Simulation Study of the City of Cincinnati’s Emergency Call-Center Data: Reducing Emergency Call Wait Times, July 2017, (David Kelton, Jennifer Bohl)
Emergency-response call centers are arguably one of the most important services a city can provide for its constituents. When a person calls 911, there is an expectation that the call will be answered and dispatched to the nearby emergency response department within seconds. In recent years, the total number of calls to 911 has increased, causing wait times of up to 30 minutes for people contacting emergency services. The purpose of this simulation study is to analyze the current emergency call-center system and data for the City of Cincinnati and to simulate alternative systems. The goal is to identify a better system that can achieve the City Administration's goal of call takers' answering 90% of 911 phone calls in under 10 seconds.

Michael Ponti-Zins, Inpatient Readmissions Reduction and MicroStrategy Dashboard Implementation, July 2017, (Michael Fry, Denise White)
Inpatient hospital readmission rates have been considered a major indicator of quality of care for several decades and have been shown to be highly negatively correlated with patient satisfaction. In 2017, the Ohio Department of Medicaid announced a 1% reduction in Medicaid reimbursement for all hospitals deemed to have excessive readmissions. In order to improve care and avoid potential payment reductions, Cincinnati Children's Hospital created an internal quality improvement team focusing on readmissions reduction. To better understand the millions of data points related to readmissions, a dynamic dashboard was created using MicroStrategy, a business intelligence and data visualization tool. This dashboard was used to track the percentage of patients readmitted within 7 and 30 days of discharge, why patients were returning, the percentage of readmissions that were potentially preventable, and other related aspects of each inpatient encounter. This information was used to identify targeted interventions to decrease future readmissions, including improved discharge and home medication instructions, automated email notification of providers, and data exports to assist in ad hoc analysis.

Ajish Cherian, Predicting Income Level using US 1994-95 Census Data, July 2017, (Peng Wang, David F. Rogers)
The objective of this project was to predict whether income exceeds $50,000 per year based on US 1994-1995 census data, using different predictive models and comparing their performance. Since the prediction to be made is a categorical value (income <=50K or >50K), the predictive models built were for classification. Models designed for the dataset were logistic regression, lasso regression, k-nearest neighbors, support vector machine, naive Bayes, classification tree, random forest and gradient boosting. Performance and effectiveness of all the models were evaluated using the Area Under the Curve (AUC) and the misclassification rate. AUC and misclassification rate were calculated on the training and test datasets; however, only metrics from the test dataset were used for finalizing a model. Gradient boosting performed best out of the selected models.
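The sketch below shows, in hedged form, how two of the candidate models could be compared on AUC and misclassification rate using R; the data frame and column names (train, test, income_gt_50k coded 0/1) are assumptions for illustration.

```r
library(pROC)
library(randomForest)

# Fit two of the candidate models on the training split
logit_fit <- glm(income_gt_50k ~ ., data = train, family = binomial)
rf_fit    <- randomForest(as.factor(income_gt_50k) ~ ., data = train)

# Score the held-out test set
p_logit <- predict(logit_fit, test, type = "response")
p_rf    <- predict(rf_fit, test, type = "prob")[, 2]

# Compare AUC and misclassification rate at a 0.5 cutoff
auc(test$income_gt_50k, p_logit)
auc(test$income_gt_50k, p_rf)
mean((p_logit > 0.5) != test$income_gt_50k)   # logistic misclassification rate
mean((p_rf    > 0.5) != test$income_gt_50k)   # random forest misclassification rate
```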

Rui Ding, Analysis of Price Premium for Online Health Consultations by Statistical Modeling, July 2017, (Peng Wang, Liwei Chen)
In this project, we focus on the mechanism by which the descriptive information of physicians and the interactive review information from patients affect the price premium of online health consultations. Section 1 briefly introduces the definition of online health consultation and the techniques to be used in the project. Section 2 concentrates on exploratory data analysis of the data set to obtain an overview of the distribution of price premium and physicians. Section 3 discusses the analysis of the data set with different modeling methods. The performance of each method is evaluated by in-sample and out-of-sample mean squared error and prediction error. Generalized linear modeling and mixed effect modeling demonstrate similar performance without obvious overfitting. The regression tree shows better prediction performance, while tree-based bagging and random forest methods provide excellent performance with a potential overfitting problem. Section 4 summarizes the findings from the modeling and interprets the importance of the variables in the finalized models.

S.V.G. Sriharsha, Analysis of Grocery Orders Data, July 2017, (Yichen Qin, Jeffery Mills)
The objective of this analysis is to study the order patterns of users of Instacart, a grocery delivery company, and provide key insights about customer behavior. There are 206,209 users in the database and 49,687 different products available to order through Instacart, which can be grouped into 21 different departments. The current database consists of details on 3,421,083 orders placed by the users over a certain period of time. This analysis starts with exploration of the variables, then moves on to i) association rule mining using the apriori algorithm, ii) unsupervised classification of customers based on their buying behavior using the K-means clustering algorithm, and iii) product embedding using Word2Vec analysis, and concludes with a summary of the results.

Linxi Yang, Analysis of Feedback from Online Healthcare Consultation with Text Mining, July 2017, (Peng Wang, Liwei Chen)
China has experienced rapid economic growth that has benefited many industries, but not the healthcare system. Because of uneven economic development in China, not all residents can receive appropriate medical care. With an immature healthcare system and scarce medical resources for 1.3 billion people, the online healthcare consultation community in China has now become as popular as it is in other developed countries. The data were collected from an online healthcare consultation community, Good Doctor Community. Good Doctor Community (www.haod.com), the earliest and largest online healthcare consultation community in China, has been growing rapidly over the past 10 years. This research project focuses on how to improve the quality of service in the healthcare industry and provides insightful analysis for Good Doctor Community's future development using text mining. Results show that the main purpose of visits is treatment and diagnosis, and the main reason for choosing a physician is online reviews and recommendations from friends, relatives, etc. There are 11,671 out of 22,625 respondents who registered at the counter before they had seen a physician, and 9,290 out of 22,625 respondents who registered via an online system. The most frequent word appearing in the dataset is "patient", and the most frequent word appearing in the dataset among dissatisfied reviews is "impatient". Sentiment analysis of the text shows that most patients have very positive sentiment and only 1 in 48 people have negative sentiment.

S. Zeeshan Ali, Image Classification with Transfer Learning, July 2017, (Peng Wang, Liwei Chen)
Correctly classifying an image is a problem that has existed since the breakthrough of modern computers. Nowadays, because of techniques like deep learning, there have been breakthroughs in this field. In this project we explore techniques such as transfer learning to classify images. We also touch upon image feature extraction and modelling with image arrays. For simplicity, we demonstrate this with a digit image dataset.

Apoorv Joshi, Predicting Realty Prices Using Sberbank Russian Housing Data, July 2017, (Dungang Liu, Liwei Chen)
Sberbank is Russia's oldest and largest bank. It utilizes historical property sale data to create predictive models for realty prices and assists customers in making better decisions when renting or purchasing a building. The Sberbank housing dataset describes properties and the Moscow sub-areas to which they belong. The dataset contains 30,471 observations and 292 variables. The variables are analyzed using exploratory data analysis to see how they individually affect the price of a house. The data are then cleaned, manipulated, and used to fit models that can predict house prices. Linear regression, LASSO, random forest, and gradient boosting models were fit to the data, and predictions could be made with sufficient accuracy.

Aishwarya Nalluri, Multiple Projects with Sevan Multi Site Solutions, July 2017, (Michael J. Fry, Doug Gafney)
Client Company A is a well-known fast food restaurant chain spread across the world. Its business model in the USA is divided into major FETs. In this project, an attempt has been made to map employees (supporting Company A but employed by Sevan Multi Site Solutions) working at different levels in a single dashboard. The tool used is Power BI. The main challenge was collecting data and preparing it for use in Power BI: the data had to be valid for representation in a dashboard, and headshots had to be embedded in the dashboard instead of simply specifying external hyperlinks.

A QBR is a Quarterly Business Report which is presented to the board members of the company. Every quarter a meeting is held and each department is given an opportunity to present where it stands and what challenges it is facing. The QBR is mainly focused on four aspects: people, clients, operations and finance. This methodology was introduced when the company started acquiring more projects from a variety of clients. As quarters passed, many modifications were made to the process of collecting the required data and presenting it. The main challenge the company faced was that there was no standard framework for QBR reporting; the finance team had issues collecting data, cleansing it and presenting it. As part of the solution to this challenge, a standard approach was built in Excel. The only effort now needed by the finance team is loading a report from QuickBooks into Excel, which automatically updates all the reports. This solution has reduced their time spent by 50%.

Siva Ramakrishnan, The Insurance Company Benchmark (CoIL 2000), July 2017, (Yan Yu, Edward Winkofsky)
This project focuses on predicting potential customers for the Caravan Insurance Company. The dataset was used in the Computational Intelligence and Learning (CoIL) 2000 challenge. It consists of 86 variables and includes product ownership data and socio-demographic data. The aim of the project is to classify customers as either buyers or non-buyers of the insurance policy. Six different models were developed: logistic regression, classification tree, naïve Bayes, support vector machine, random forest and gradient boosted trees. These models were evaluated based on the competition rules, under which contestants had to select a set of 800 observations from the test set of 4,000. The logistic regression model performed better than all the other models.
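The competition-style evaluation described above can be sketched in a few lines of R: score the 4,000 test records, keep the 800 highest predicted probabilities, and count the actual policy holders captured. The data frame names are assumptions; CARAVAN is the target variable in the public CoIL 2000 data.

```r
# Logistic regression on the 86-variable CoIL training data (CARAVAN coded 0/1)
fit <- glm(CARAVAN ~ ., data = train, family = binomial)

# Score the 4,000-record test set and keep the 800 most likely buyers
p_hat   <- predict(fit, newdata = test, type = "response")
top_800 <- order(p_hat, decreasing = TRUE)[1:800]

# Competition metric: number of actual policy holders captured in the top 800
sum(test$CARAVAN[top_800] == 1)
```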

Nitisha Adhikari, PD and LGD Modelling Methodology for CCAR, July 2017, (Michael J. Fry, Maduka S. Rupasinghe)
With the acquisition of First Niagara Bank in 2016, Key Bank acquired a $2.6B indirect auto portfolio. This was a new addition to the list of existing portfolios at Key, and a loss estimation model is being built to generate stressed loss forecasts for the Comprehensive Capital Analysis and Review (CCAR) and Dodd-Frank Act Stress Tests (DFAST). This document describes the data preparation and modeling methodology for the Probability of Default (PD) and Loss Given Default (LGD) models. The PD and LGD, along with the Exposure at Default (EAD), are used to generate stressed loss forecasts for CCAR and DFAST.

Venkat Kanishka Boppidi, Lending Club – Identification of Profitable Customer Segment, July 2017, (Dungang Liu, Liwei Chen)
Lending Club issues unsecured loans to different segments of customers. The interest rate for a loan depends on the credit history of the customer and various other factors such as income levels and demographics. The borrower data are public. The current analysis has several objectives:

  1. To review the Lending Club dataset and summarize thoughts on LC risk profiles by loan type, grade, sub-grade, loan amount, etc., using loan statuses of ‘Charged Off’ and ‘Default’ as indicators of a ‘bad loan’.
  2. To identify fraudulent customers (customers with no payments) in the Lending Club data and the key characteristics of these fraudulent applications.
  3. To identify the best and worst categories by purpose (a category provided by the borrower for the loan request) in terms of risk.
  4. To build a statistical model using classification techniques and identify the less risky customer segments. These recommendations can be used to cross-sell loans to a customer segment that has a low default rate and high profit.

Xiaojun Wang, Co-Clustering Algorithm in Business Data Analysis, July 2017, (Yichen Qin, Michael Fry)
In this project, we investigate a two-way clustering method and apply it to a business data set.

The classical clustering method is one-way: given a data matrix, it is performed either on whole rows (observation-wise) or on whole columns (variable-wise). For example, in the well-known K-means method, all the quantities involved in the distance measure come either from variables or from records, but not both. Co-clustering, also called bi-clustering or block clustering, is a two-way clustering method. It clusters the rows and columns of a data matrix simultaneously and turns the data into blocks. Our data set comes from a retail company that has hundreds of stores, each of which contains hundreds of business departments. Co-clustering analysis helps group the data into blocks based on similarity in productivity; each block consists of a group of departments and the corresponding group of stores they belong to. Our goal is to study these blocks so that business decisions can be made based on the information they carry. The results show that co-clustering serves our purpose well.
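One readily available R implementation of block clustering is the blockcluster package; the sketch below is a generic illustration, not the method used in the project, and the matrix name and block counts are assumptions.

```r
library(blockcluster)

# productivity: a numeric matrix with stores as rows and departments as columns
# Ask for a 3 x 4 grid of blocks (row clusters x column clusters); counts are illustrative
out <- coclusterContinuous(as.matrix(productivity), nbcocluster = c(3, 4))

summary(out)   # block parameters and cluster sizes
plot(out)      # heat map of the reordered matrix showing the blocks
```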

Manisha Arora, Marketing Mix (Promotional Spend Optimization) for a Healthcare Drug, July 2017, (Michael Fry, Juhi Parikh)
The healthcare industry is one of the world's largest and fastest-growing industries, consuming over 10% of GDP in most developed nations. Data and analytics are playing a major role in healthcare, giving organizations the ability to make smart, impactful, data-driven decisions to mitigate risk, improve employee welfare and capitalize on opportunities. This capstone project focuses on evaluating the effectiveness of professional tactics for a particular drug and optimizing its promotional spend based on channel effectiveness. This project analyzes each of the channels and tries to answer the following questions:

  • What is the impact of each channel on the promotion of the drug?
  • What is the average and marginal ROI for each channel?
  • What would be the ideal spend levels per tactic and optimized based on a brand budget number?

Jayaram Kishore Tangellamudi, Predicting Housing Prices for ‘Sberbank’, July 2017, (Yan Yu, David F. Rogers)
Sberbank, Russia's oldest and largest bank, helps its customers by making predictions about realty prices so that renters, developers, and lenders are more confident when they sign a lease or purchase a building. Although the housing market is relatively stable in Russia, the country's volatile economy makes forecasting prices as a function of apartment characteristics a unique challenge. Complex interactions between housing features, such as the number of bedrooms and location, are enough to make pricing predictions complicated, and an unstable economy complicates them further. Several regression models, including linear regression, generalized additive models (GAM), decision trees, random forest (RF), support vector regression (SVR), and extreme gradient boosting (XGB), were built on the housing features alone to predict housing prices. Additionally, economic indicator data were merged with the housing feature data to check whether these indicators can further explain the variance in housing prices. The predictive model performances were compared using the mean squared error (MSE) of the logarithm of the housing prices.

Ramya Kollipara, Analysis of Income Influencing Factors in Different Professions, July 2017, (Dungang Liu, Liwei Chen)
Knowing the characteristics of a high- or low-income individual can be useful in marketing a new service targeted at potential customers within a salary range. There is always a cost involved in attracting the right customers, which an organization wants to minimize. If a model were designed to accurately identify the right people in an income range, this cost could be significantly decreased with a higher rate of return. The objective of this project is to explore and analyze the variables associated with an individual that might prove useful in understanding whether his/her income exceeds $50K/year, focusing especially on three different professions: sales, executive managers, and professional specialties. Various modelling techniques are explored, the different models are compared to see how some characteristics have a greater influence on certain professions than on others, and the most effective model is selected to accurately predict whether an individual's income exceeds $50K/year based on the census data.

Shalvi Shrivastava, Black Friday Data Analysis, July 2017, (Yan Yu, Yichen Qin)
Billions of dollars are spent on Black Friday and the holiday shopping season. ‘ABC Private Limited’ has shared data on various customers for high-volume products from the Black Friday month and wants to understand customer purchase behavior (specifically, purchase amount) for various products of different categories. The challenge is to predict the purchase amounts of various products purchased by customers based on the given historical purchase patterns. The data contain features such as age, gender, marital status, categories of products purchased, and city demographics. Models were built on the training data and evaluated on the validation data. The evaluation metric was RMSE, which is a very appropriate choice for this problem.

Junbo Liu, Predicting Movie Ratings with Collaborative Filtering, July 2017, (Peng Wang, Zhe Shan)
Collaborative filtering, the most popular type of recommendation system, has been widely applied to virtually every aspect of people's lives and has generated remarkable success in e-commerce. To make a relevant recommendation to an active user, a recommendation system must be able to accurately predict the utility of items for that user, because items with the highest utility (ratings, in the movie case) are recommended. Therefore, prediction accuracy is the key to the success of a recommendation system. In this report, we compared three representative types of collaborative filtering approaches, derived from three distinct rationales, using movie ratings data. The three types are user-based collaborative filtering (UBCF), singular value decomposition (SVD) and the group-specific recommendation system (GSRS). The minimum root mean square error (RMSE) for UBCF is 0.9432 when the number of neighbors is set to 28. For SVD, the minimum RMSE is 0.9240 when the tuning parameter λ is 0.17 and the number of latent factors is 19. For GSRS, the same number of latent factors (19) is used and the cluster numbers for both users and items are set to the default value of 10. When the λ value is 65, the RMSE for GSRS is 0.9007. Therefore, our results show that GSRS has the highest prediction accuracy, SVD the next highest, and UBCF the lowest. These results are consistent with conclusions from the published literature.
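For the first two approaches, a hedged RMSE comparison can be sketched with the recommenderlab package in R. The MovieLense sample data, the train/test split, and the goodRating threshold below are illustrative assumptions; only the neighbor count (28) and latent-factor count (19) mirror the values reported above, and GSRS is not covered by this package.

```r
library(recommenderlab)
data(MovieLense)   # a realRatingMatrix of movie ratings

# Hold out ratings for evaluation: 80/20 split, 10 ratings given per test user
scheme <- evaluationScheme(MovieLense, method = "split", train = 0.8,
                           given = 10, goodRating = 4)

ubcf <- Recommender(getData(scheme, "train"), method = "UBCF",
                    parameter = list(nn = 28))
svd  <- Recommender(getData(scheme, "train"), method = "SVD",
                    parameter = list(k = 19))

# Predict the held-out ratings and compare RMSE
rmse <- function(model) calcPredictionAccuracy(
  predict(model, getData(scheme, "known"), type = "ratings"),
  getData(scheme, "unknown"))["RMSE"]

c(UBCF = rmse(ubcf), SVD = rmse(svd))
```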

Aditya Nakate, Talmetrix Inc, Cincinnati, July 2017, (Michael J. Fry, Ayusman Vikramjeet)
I am working as a Production Support Analyst Intern at Talmetrix in downtown Cincinnati. The company helps organizations capture feedback on the employee experience and analyzes those data to help organizations attract desired talent and improve employee retention, performance and productivity. It helps organizations make more informed decisions about their employees. Analyses and reports created by the company are mainly consumed by the human resources heads of the client companies. During my internship, I am working on various projects, including report generation and ad-hoc analysis. While working here, I have used technologies such as SQL, R and Tableau, and have also used statistical techniques such as classification algorithms and regression, to name a few. This report summarizes the work I have done during my internship at Talmetrix. My first project at the company was a drivers analysis, intended to find the categories that are critical for employee satisfaction and that clients need to focus on. Later, I also worked on a report generation process for one client. We had employee feedback data: employees were asked to take surveys containing both Likert-scale and open-ended questions. Reports were created in Tableau, with different views at levels such as overall, region, age level, tenure level, department, operating unit and sub-operating unit. Currently, we are generating more reports based on this one as clients carry out deeper dives.

Suchith Rajasekharan, Allstate Insurance Claim Severity Analysis, July 2017, (Yichen Qin, Michael Magazine)
In the insurance industry, having the ability to accurately predict the loss amount of a claim is of paramount importance. Companies build predictive models based on different features of a claim and use the predictions from these models to apply proper claims practices, business rules and experienced resources to manage the claims. In this paper, we explore the different steps involved in building a model to predict the loss amount of a claim. A Kaggle dataset provided by Allstate Insurance is used for this study. Various machine learning techniques, viz. multiple linear regression, generalized linear models, generalized additive models, extreme gradient boosting, and neural networks, are used to build different models. The models are implemented using various packages available in the open source software R. Models built using different techniques are compared based on their performance on a validation set, and the best model is chosen. The XGBoost model gave the best performance of all the models and was therefore chosen as the final model.
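A minimal sketch of an XGBoost severity model in R is shown below, assuming numeric model matrices X_train/X_test and loss amounts y_train/y_test; the log transform of the loss and all hyperparameter values are illustrative assumptions, not the tuned settings from the study.

```r
library(xgboost)

# X_*: numeric model matrices of claim features; y_*: loss amounts
dtrain <- xgb.DMatrix(data = X_train, label = log(y_train))
dtest  <- xgb.DMatrix(data = X_test,  label = log(y_test))

params <- list(objective = "reg:squarederror", eta = 0.05,
               max_depth = 6, subsample = 0.8, colsample_bytree = 0.8)

fit <- xgb.train(params, dtrain, nrounds = 500,
                 watchlist = list(valid = dtest), early_stopping_rounds = 25)

pred_loss <- exp(predict(fit, dtest))   # back-transform to the original loss scale
```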

Mahesh Balan, Cash as a Product, July 2017, (Michael Fry, Fan Yang)
The project analyzed the potential of adding cash as an additional payment feature in more markets. The analysis quantified the pros and cons of cash, and the economics of a cash versus a non-cash trip on Uber were analyzed. A cash trip was found to be economically beneficial to Uber compared with a non-cash trip. The project also took a deep dive into aspects such as the driver and rider experience in a cash versus a non-cash trip; the experience of a non-cash trip appears to be seamless compared with that of a cash trip. The project also tried to quantify the risk and safety issues in a cash versus a non-cash trip; the non-cash trip appears to be safer and more trustworthy than a cash trip. Finally, the project looked at various ways to improve the existing economics and the current rider/driver experience on a cash trip. The recommendations from the analysis were presented to the Growth and Product teams to improve the overall cash experience for riders and drivers.

Anitha Sreedhar Babu, eCommerce Marketing Analytics, July 2017, (Michael Fry, Maria Topken)
The client, a well-known online food delivery service in Cincinnati, is looking to engage its existing customers and increase the size and frequency of purchases. In order to understand customer behavior and to drive revenue and engagement, customers were segmented based on order frequency and the average lag between orders. Customers were grouped into frequent shoppers, yearly shoppers, and one-time buyers. The data were also used to perform a market basket analysis to understand purchase patterns. This information was used to drive recommendation engines as well as to cross-sell products effectively to existing customers by designing suitable combos. Targeted marketing strategies were developed based on the insights derived from the analysis.

Dhivya Rajprasad, Prediction of 30-Day Readmission Rate for Congestive Heart Failure Patients, July 2017, (Michael Fry, Scott Brown)
Prediction of readmission rates for patients has gained importance in the present healthcare environment for two major reasons. First, transitional care interventions have a role in reducing readmissions among chronically ill patients. Second, there is increased interest in using readmission rates as a quality metric, with the Centers for Medicare and Medicaid Services (CMS) using the readmission rate as a publicly reported metric aimed at lowering reimbursements for hospitals that have excess readmission rates according to reported risk standards. The objective of this project is to understand the factors that contribute to high readmission rates and to predict the probability of a patient being readmitted. With a prediction model in place, hospitals will be able to better understand patient dynamics and provide better care while avoiding penalties for higher readmission rates. In this report, several different data mining, advanced statistical and machine learning techniques are explored and used to predict readmission rates, and a comparison of the different techniques is provided.

Prarthana Rajendra, Cincinnati Children’s Hospital and Medical Center, July 2017, (Jason Tillman, Michael Fry)
The scope of the project is to compute metrics on report usage across various learning networks. This is done through Extraction, Transformation, and Load (ETL) processes: data are extracted from various tables, transformed according to the requirements, and loaded into a single reporting database. The computed measures are then visualized in the form of graphs. SSIS, SSRS and SQL Server are the main technologies used in accomplishing this task. Packages were built in SSIS to automate these tasks, and the resulting data sets can be viewed and analyzed through reports built using SSRS.

Matthew Murphy, Optimization of Bariatric Rooms and Beds within a Hospital, July 2017, (Michael Magazine, Neal Wiggermann)
Currently, hospitals do not have the ability to predict the quantity and type of specialty resources needed to care for specialty patients. This inability is especially problematic given the explicit and implicit costs of under- or overestimating the need. Two such specialty resources are bariatric beds and bariatric rooms. According to the Centers for Disease Control and Prevention, the obesity rate within the United States adult population has risen to 36%. The increase in the obese population of the United States, along with the high costs of bariatric beds and dedicated bariatric rooms, has necessitated investigating a better way to determine the proper number of bariatric rooms to construct, bariatric beds to own, and bariatric beds to rent. In this paper, we use simulation and probabilistic techniques along with queueing theory models to investigate the relationship between the service level for severely obese patients and the number of bariatric rooms needed to reach a designated service level for such patients. Furthermore, we build a model that can be used to determine the optimal mix of beds to buy versus rent to minimize the overall cost of bariatric equipment for the entire hospital.

Soumya Gupta, Employee Attrition Prediction, July 2017, (Yan Yu, Peng Wang)
Every company wants to make sure that its employees, especially the good ones, continue to work for it. Losing valuable employees is very expensive for a company, both monetarily and non-monetarily. In this project, we aim to predict whether an employee will leave the company. Three classification techniques (logistic regression, decision trees, and random forest) have been used for building the predictive models, and their results have been compared. Valuable employees have also been identified by making a few assumptions, and separate models have been built for this set of employees, since the cost of losing a valuable employee is much higher. The prediction accuracy of the random forest is quite high in this case.

Apurva Bhoite, Predicting Success of Students at Medical School, July 2017, (Peng Wang, Liwei Chen)
The University of Cincinnati's College of Medicine wanted to conduct a study exploring information on students enrolled at the College of Medicine, including their MCAT scores, MMI scores, academic background, race, and overall background. The College of Medicine also wanted to identify the most influential predictors of student success at the medical school and, finally, to build a predictive model to do so. The main aim of this project was inference. Thus, a great deal of graphical exploratory analysis, mainly box plots and bar plots faceted over variables, was done to get an overall picture. Due to the high dimensionality of the data and the small number of observations, lasso subset selection with cross-validation was used to reduce the number of predictors. The modeling techniques logistic regression, classification trees and random forests were used to build predictive models and compare their performance in order to select the best model. The College of Medicine can employ this model when admitting students to the college.

Swapnil Sharma, Application of Market Basket Analysis to Instacart Transaction Data, July 2017, (Yichen Qin, Edward P. Winkofsky)
With the rise in online transactions, companies are trying to leverage the enormous data generated by transaction activity and transform it into meaningful insights. Data mining techniques can be used to develop a cross-selling strategy for products. Data scientists use predictive analytics to improve the customer experience of shopping online by developing models that predict which products a user will buy again, or try for the first time, or which products are bought together. In this paper, we analyze trends in customer shopping behavior on the Instacart website for buying groceries. The data set, covering 3 million transactions by over 200,000 users, was made public by Instacart, a same-day grocery delivery service. The data set is explored using the open source statistical learning tool R. Market basket analysis is done using the apriori algorithm at various levels of support, confidence and lift to suggest combinations of products to be included in a basket to cross-sell products on the platform. A model is developed to predict which previously purchased products will be in a user's next order, with the F-score measuring the model's performance.

Jasmine Sachdeva, Malware Analysis & Campaign Tracking, July 2017, (Michael J. Fry, Dungang Liu)
Any software that does something harmful to a user, computer, or network can be considered malware. Malware analysis is about examining malware to understand how it may harm a device, what its source is, how it works, and how to destroy it. As the number of malware attacks hitting an organization increases every day, it is crucial to analyze and mitigate them to ensure the security of the sensitive data residing on devices. This project also covers IT security awareness programs that were conducted and analyzed, enabling employees to become more vigilant and ensuring that data and security are not breached within the organization.

William Newton, Concrete Compressive Strength Analysis, July 2017, (Yan Yu, Edward Winkofsky)
Concrete is an indispensable material in modern society. From roadways to buildings, humankind is literally surrounded and supported by this chemical bond made of relatively basic ingredients. Concrete is so ubiquitous today that it is often taken for granted; many never question how concrete got here or how it can be trusted. Responsible for buildings thousands of years old that in some cases still stand, concrete is the subject of the contained analyses, which seek to explain what it is and how its strength can be evaluated. Using principal components analysis and linear regression techniques, a dataset comprising different concrete mixtures was analyzed. The analyses provide a basis for reasonable inferences about compressive strength and how different elements behave in the presence of others, but they also indicate that this particular dataset is not comprehensive enough to make reliable predictions of the compressive strength of concrete.

Eric Nelson, Enhancing Staffing Tactics for Retailer Credit Card Customer Acquisition, July 2017, (Peng Wang, Justin Arnold)
Credit card companies across the country spend millions of dollars promoting their credit cards to consumers. Obtaining the attention and interest of a shopper can be extremely difficult, all the more so when a credit card is being promoted by a retailer whose marketing budget does not stand up to those of larger banks. To attract customers, many retailers set up in-store promotional activities to give customers a chance to learn more about the card. One such retailer invested in this strategy but required assistance determining which stores should receive marketing at which times, as the required materials and staff are limited and expensive. To answer this question, the project looks at existing credit consumers through the lens of their shopping history. A model is used to determine which potential acquisition customers (“lookalikes”) are most similar to customers who already possess one of the retailer's credit cards. A final tool shows which stores have the largest number of lookalike households and when those shoppers are in store and likely to notice the credit card promotional materials.

Nitish Puri, A Study of Market Segmentation and Application with Cincinnati Zoo Data, July 2017, (Yan Yu, Yichen Qin)
The process of dividing a market into homogeneous groups of customers is known as market segmentation. Customers can be grouped together based on where they live, other demographic factors, or even their behavioral patterns. This project explains these and other ways of grouping customers in a market, the purpose of segmentation, and the general process followed. Clustering is the main statistical technique used for performing segmentation; the two most commonly used algorithms, K-means and hierarchical clustering, are explained in detail in this project. The final part of the project describes the membership data from the Cincinnati Zoo and segments the Cincinnati Zoo's customers by performing K-means clustering on these data.
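A generic K-means segmentation workflow in base R is sketched below; the data frame name, the behavioral columns, and the choice of four segments are illustrative assumptions, not the actual Cincinnati Zoo segmentation.

```r
# members: numeric data frame of per-member behavior (visit frequency, spend, tenure, ...)
scaled <- scale(members)

# Pick k with a simple elbow plot of total within-cluster sum of squares
wss <- sapply(1:10, function(k) kmeans(scaled, centers = k, nstart = 25)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "k", ylab = "Total within-cluster SS")

# Fit the chosen segmentation (k = 4 here is illustrative) and profile the segments
seg <- kmeans(scaled, centers = 4, nstart = 25)
aggregate(members, by = list(segment = seg$cluster), FUN = mean)
```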

Tauseef Alam, Internship with JP Morgan Chase Bank, July 2017, (Michael Fry, Yuntao Zhu)
The Chase Consumer & Community Banking (CCB) Fraud Modeling team at JPMorgan Chase & Co. is an analytical center of excellence for all fraud risk managers and operations across the bank. The CCB Fraud Modeling team is responsible for building predictive models for managing fraud risk at the transaction, account, customer and application levels. As part of the CCB Fraud Modeling team, my role was to build a machine learning model for predicting credit card bust-out account fraud. "Bust-out" fraud, also known as sleeper fraud, is primarily a first-party fraud scheme. It occurs when a consumer applies for and uses credit under his or her own name, or uses a synthetic identity, to make transactions. The fraudster makes on-time payments to maintain a good account standing, with the intent of bouncing a final payment and abandoning the account ("Bust-out fraud white paper," 2009, Experian Information Solutions, Inc.). I used GBM as my modeling technique for predicting fraudulent accounts. As part of the process, I created some independent variables and tuned model parameters to build the model. As a next step, we will enhance model performance by including more features. Once the model is finalized, it will be implemented, and the scores generated from it will be used in deciding whether a credit card account is fraudulent or not.

Xiaoming Lu, Investigating the Information Loss of Binning Variables for Financial Risk Management, July 2017, (Peng Wang, Dungang Liu)
In financial risk management, the binning technique is widely used in the credit scoring field, especially in scorecard development. Binning is defined as the process of transforming numeric variables into categorical variables and regrouping categorical variables into new categorical variables. This technique is usually employed at the early stage of model development to coarsely select important variables for further evaluation. One potential problem of binning is the information loss due to the transformation. To address this question, we performed automatic binning on the German Credit dataset using the "woeBinning" package. Then, we explored the potential information loss of binning in the development of several models in R, including logistic regression, classification tree and random forest. We employed residual mean deviance, the OOB estimate of error rate, the ROC curve, and symmetric and asymmetric misclassification rates (MR and AMR) to compare model performance. In general, there is little difference in model performance between the original data and the binned data, which means there is little information loss after binning the data.
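The automatic binning step can be sketched with the woeBinning package as below; this follows the package's documented workflow on its bundled germancredit sample data, and the exact arguments used in the study are an assumption.

```r
library(woeBinning)
data(germancredit)   # sample German Credit data shipped with the package

# Bin every predictor against the binary target 'creditability'
binning <- woe.binning(germancredit, "creditability", germancredit)
woe.binning.table(binning)   # WOE / IV table for each binned variable

# Append the binned and WOE-transformed variables to the data set
binned <- woe.binning.deploy(germancredit, binning, add.woe.or.dum.var = "woe")
names(binned)   # the new *.binned and woe.* columns that can be fed into glm(), etc.
```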

Sushmita Sen, Digit Recognition with Machine Learning, July 2017, (Yan Yu, Liwei Chen)
Computer vision is a subject that piques everyone's interest. As humans, we learn to see and identify objects very early and so give little thought to the process, yet in the background an immensely complex architecture of neurons carries out this task. Many fields of study, machine learning and pattern recognition among them, have emerged in pursuit of replicating this ability. In this project, I identify images of handwritten digits from the very popular MNIST dataset. I used two popular classification algorithms, Support Vector Machines and neural networks, and compared their results in this document.
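
A minimal sketch of the SVM side in R using the e1071 package, assuming "train" and "test" data frames where "label" is a factor digit and the remaining columns are pixel intensities:

    library(e1071)

    # Pixel values are assumed to have been scaled to [0, 1] upstream
    fit <- svm(label ~ ., data = train, kernel = "radial", cost = 10, gamma = 0.01)

    pred <- predict(fit, newdata = test)
    mean(pred == test$label)                         # out-of-sample accuracy
    table(Predicted = pred, Actual = test$label)     # confusion matrix by digit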

Abhishek Rao, NBA’s Most Valuable Player of 2017, July 2017, (Yan Yu, Liwei Chen)
Analytics and sports have gone hand in hand for a while now, and with advances in sports technology the application of analytics to sports grows with every passing day. Much of the decision making in scouting, recruiting, and coaching today depends on how teams crunch their numbers. While uncertainty is part of what makes sports compelling, an increasing number of people have become proponents of analytics applied to basketball. The nature of the sport makes it well suited to statistical analysis: the plethora of variables and their interrelationships reveal some of the important facets of the game. Although it is difficult to evaluate an individual's ability through analysis of a team game, such analysis reveals things that would not be noticed by sight alone.

Gautam Girish, Predicting Wine Quality, July 2017, (Peng Wang, Yan Yu)
Wines have been produced across the world for hundreds of years, yet there are significant differences in wine quality that may be due to several factors, ranging from alcohol content and pH to fixed and volatile acidity. In this paper, I predict the quality of wine based on several of these factors. Different modeling techniques are compared to determine the best model for predicting quality: linear regression, generalized additive models, and regression trees, along with ensemble methods such as boosting and random forests. Principal component analysis is also performed to try to improve model performance. The dataset was obtained from Kaggle, and all of the ensuing analysis and model building was done in R with the necessary packages. The R-squared values obtained on the test dataset are used as the metric for model comparison.
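
A sketch of the model-comparison step, assuming the wine data have been split into "train" and "test" with a numeric "quality" response (a minimal illustration with two of the models, not the full set used):

    library(randomForest)

    fit_lm <- lm(quality ~ ., data = train)
    fit_rf <- randomForest(quality ~ ., data = train, ntree = 500)

    # Out-of-sample R-squared: 1 - SSE/SST on the test set
    test_r2 <- function(fit, data) {
      pred <- predict(fit, newdata = data)
      1 - sum((data$quality - pred)^2) / sum((data$quality - mean(data$quality))^2)
    }

    c(linear = test_r2(fit_lm, test), random_forest = test_r2(fit_rf, test))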

Keerthana Regulagedda, Diabetes Prediction In Pima Indian Women, July 2017, (Yan Yu, Michael Magazine)
The objective of the project is to predict diabetes in Pima Indian women based on different diagnostic measures. Because the dataset is small and has missing values in some variables, models are built using algorithms that are robust to missing data. First, data exploration is performed: all predictor variables are analyzed, and correlations and patterns in the data are noted. Based on this preliminary analysis, variables are selected and an initial prediction model is built with logistic regression after removing the records with missing data. Removing these records discards roughly half of the information, and as a result the logistic regression model predicts poorly, with an AUC of 0.6 and a misclassification rate of 0.54. Next, CART and gradient boosting classification algorithms, which handle missing data well, are implemented and their performance metrics are calculated. Missing-data imputation is also carried out, and the effects of imputation on variable distributions are studied. Finally, models are built on the completed data to see whether prediction accuracy improves after imputing the missing values.
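
A sketch of the missing-data handling described above, assuming a "pima" data frame with a binary "diabetes" outcome and NAs in some predictors; rpart tolerates missing values through surrogate splits, while mice imputes them before refitting logistic regression:

    library(rpart)
    library(mice)

    # CART handles NAs directly via surrogate splits
    fit_tree <- rpart(diabetes ~ ., data = pima, method = "class")

    # Complete-case logistic regression (discards rows with any NA)
    fit_cc <- glm(diabetes ~ ., data = na.omit(pima), family = binomial)

    # Multiple imputation, then refit on a completed dataset
    imp <- mice(pima, m = 5, method = "pmm", seed = 1, printFlag = FALSE)
    fit_imp <- glm(diabetes ~ ., data = complete(imp, 1), family = binomial)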

Rajarajan Subramanian, Predicting Employee Attrition Using Data Mining Techniques, July 2017, (Yan Yu, Edward Winkofsky)
For any organization, human resources form one of the pillars that ensure its sustainability in the market. Employee satisfaction and attrition are two critical factors that affect its growth, with positive and negative influences respectively, not only on an organization's success but also on its existing workforce. Using statistical modeling, it is possible to determine the factors that lead to employee attrition and to predict whether an employee will leave the organization. The objective of this study is to compare various predictive models and identify the best among them for predicting employee attrition. A fictional dataset from Kaggle, created by IBM data scientists, is used for the study. The models built include logistic regression, a classification tree, a generalized additive model, a random forest, and support vector machines. All models are evaluated on their out-of-sample prediction performance, using misclassification rate, cost, and area under the ROC curve (AUC) as comparison metrics.
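
A sketch of the out-of-sample comparison, assuming "train"/"test" splits of the attrition data with a binary Attrition outcome coded "No"/"Yes" (only two of the five models are shown):

    library(randomForest)
    library(pROC)

    fit_glm <- glm(Attrition ~ ., data = train, family = binomial)
    fit_rf  <- randomForest(Attrition ~ ., data = train, ntree = 500)

    p_glm <- predict(fit_glm, newdata = test, type = "response")
    p_rf  <- predict(fit_rf,  newdata = test, type = "prob")[, "Yes"]   # assumes a "Yes" level

    c(glm_auc = auc(roc(test$Attrition, p_glm)),
      rf_auc  = auc(roc(test$Attrition, p_rf)))

    # Misclassification rate at a 0.5 cutoff
    mean((p_glm > 0.5) != (test$Attrition == "Yes"))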

Rohit Khandelwal, Comparison of Movie Recommendation Systems, July 2017, (Yan Yu, Peng Wang)
Recommendation systems have become a key tool in the marketing and CRM strategies of companies in all spheres of life. This project aims to build a movie recommendation system that uses users' ratings of movies to recommend movies they are likely to watch. Various models are built to predict ratings and recommend movies accordingly, and their results are compared. A good model not only predicts ratings accurately but also has high precision and recall and makes recommendations in the right order.

Andrew Garner, Coffee Brand Positioning from Amazon Reviews, July 2017, (Yan Yu, Roger Chiang)
Online reviews contain rich information about how customers perceive brands in a product category, but the information can be difficult to extract and summarize from unstructured text data. Text mining and machine learning are applied to Amazon reviews to map the brand positioning of coffee companies. Specifically, a two-dimensional map places companies with similar Amazon reviews close together. This was accomplished by cleaning the text data, training a word2vec model to create a numeric representation of the review text, and applying t-SNE to reduce the high-dimensional data to a two-dimensional map. Hierarchical clustering was used to label brands with distinct clusters.
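
A sketch of the mapping step, assuming "brand_vecs" is a matrix of averaged word2vec review vectors with one row per coffee brand (the embedding step itself is omitted, and row names are assumed to be brand names):

    library(Rtsne)

    set.seed(42)
    tsne_out <- Rtsne(brand_vecs, dims = 2, perplexity = 10, check_duplicates = FALSE)
    map <- data.frame(x = tsne_out$Y[, 1], y = tsne_out$Y[, 2],
                      brand = rownames(brand_vecs))

    # Hierarchical clustering on the original embeddings to label the map
    hc <- hclust(dist(brand_vecs), method = "ward.D2")
    map$cluster <- factor(cutree(hc, k = 5))

    plot(map$x, map$y, col = map$cluster, pch = 19)
    text(map$x, map$y, labels = map$brand, pos = 3, cex = 0.7)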

Nitin Abraham Mathew, Lending Club Loan Default Analysis, July 2017, (Yan Yu, Liwei Chen)
Peer-to-peer lending platforms have become increasingly popular over the past decade. With relaxed rules and less oversight, the possibility of an investor losing money has greatly increased, which creates a need to build risk profiles for every loan disbursed on these platforms. The objective of this project is to explore the application of different risk modeling techniques, along with techniques to tackle class imbalance, on financial lending data in order to maximize expected returns while minimizing expected variance, or risk.
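
One simple way to address the class imbalance mentioned above is to downsample the non-default majority class before fitting, sketched here under the assumption of a "loans" data frame with a 0/1 "default" flag (names are illustrative):

    set.seed(1)
    defaults     <- loans[loans$default == 1, ]
    non_defaults <- loans[loans$default == 0, ]

    # Downsample the majority class to match the minority class size
    non_defaults_ds <- non_defaults[sample(nrow(non_defaults), nrow(defaults)), ]
    balanced <- rbind(defaults, non_defaults_ds)

    fit <- glm(default ~ ., data = balanced, family = binomial)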

Dhanashree Pokale, Image Classification using Convolutional Neural Network and TensorFlow, July 2017, (Yan Yu, Dungang Liu)
The motivation behind the image classification problem considered for this project is to employ deep learning techniques and advanced Python libraries such as TensorFlow to classify image data. The focus of this project is the Convolutional Neural Network (CNN), which learns features from images layer by layer and exploits the fact that nearby pixels are more correlated than pixels far apart. A feed-forward neural network is fully connected and therefore fails to make use of this spatial correlation when classifying. With two convolutional layers, I achieved a classification accuracy of 89% on the Street View House Numbers dataset; with deeper architectures, accuracies of up to 97% can be achieved.
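
The project itself was built in Python with TensorFlow; as a rough R-side equivalent, a two-convolutional-layer architecture of the kind described could be sketched with the keras package (input shape and hyperparameters are illustrative, not the project's actual settings):

    library(keras)

    model <- keras_model_sequential() %>%
      layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                    input_shape = c(32, 32, 3)) %>%        # SVHN images are 32x32 RGB
      layer_max_pooling_2d(pool_size = c(2, 2)) %>%
      layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
      layer_max_pooling_2d(pool_size = c(2, 2)) %>%
      layer_flatten() %>%
      layer_dense(units = 128, activation = "relu") %>%
      layer_dense(units = 10, activation = "softmax")       # ten digit classes

    model %>% compile(loss = "categorical_crossentropy",
                      optimizer = "adam", metrics = "accuracy")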

Matthew Wesselink, Analysis of NBA Draft Selections, July 2017, (Yichen Qin, Edward Winkofsky)
The NBA offseason is a short period from June to October each year when players and teams have the opportunity to regroup and improve their prospects for the coming season. Teams can do this in a number of ways, through free agency, the NBA draft, or outright trades. The following analysis focuses on the NBA draft and the future performance of drafted players. By analyzing win shares, a measure of an individual player's contribution to team wins, we can better project the value each draft selection will provide to a team. Multiple forms of regression were used to predict a player's win shares from draft position. To evaluate the models, we analyzed AIC and BIC values, residuals, Cook's distance, and leave-one-out cross-validation. The logarithmic model consistently performed better than the other forms of regression. Logarithmic regression models the average well but fails to predict an individual player's success.
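
A sketch of the logarithmic model and leave-one-out cross-validation, assuming a "draft" data frame with columns "pick" (draft position) and "ws" (career win shares); the column names are illustrative:

    fit_log <- lm(ws ~ log(pick), data = draft)
    AIC(fit_log); BIC(fit_log)

    # Leave-one-out cross-validation error for the logarithmic model
    loo_err <- sapply(seq_len(nrow(draft)), function(i) {
      fit <- lm(ws ~ log(pick), data = draft[-i, ])
      draft$ws[i] - predict(fit, newdata = draft[i, ])
    })
    sqrt(mean(loo_err^2))   # LOOCV RMSE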

Angie Chen, Textual Analysis of Quora Question Pairs, July 2017, (Peng Wang, Dungang Liu)
Quora is an online platform that allows people to ask questions and connect with those who can share unique insights. The site's mission is to distribute knowledge so that people can better understand the world. However, with the platform's ever-growing popularity, many users submit similar questions, while a limited number of experts do not have time to answer multiple variations of the same question. Quora aims to let experts share knowledge in a scalable fashion, writing an answer once and disseminating it to a wide audience. As a result, Quora wishes to focus on the canonical form of a question: the most explicit, least ambiguous phrasing. To address this problem, we used data analysis and modeling techniques to identify duplicate question pairs. Exploratory data analysis and text mining procedures were performed to develop a predictive model that classifies duplicate question pairs. Two ensemble learning procedures, random forests and gradient-boosted trees, were attempted. An effective model was ultimately developed through sentiment analysis (positive or negative valence), evaluation of key question-pair characteristics (number of common words, difference in character length, similarity ratio), and gradient-boosted trees, yielding an accuracy rate of 70% on the testing data. This solution can be used to focus efficiently on the canonical form of a question, facilitating high-quality answers and a better user experience on the platform.
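
A sketch of the kind of question-pair features described above, computed in base R for two hypothetical question strings (a crude word-overlap illustration, not the project's exact feature definitions):

    pair_features <- function(q1, q2) {
      w1 <- unique(tolower(strsplit(q1, "\\s+")[[1]]))
      w2 <- unique(tolower(strsplit(q2, "\\s+")[[1]]))
      common    <- length(intersect(w1, w2))              # number of shared words
      len_diff  <- abs(nchar(q1) - nchar(q2))             # difference in character length
      sim_ratio <- common / length(union(w1, w2))         # crude similarity ratio
      c(common_words = common, char_len_diff = len_diff, similarity = sim_ratio)
    }

    pair_features("How do I learn data science?",
                  "What is the best way to learn data science?")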

Jainendra Upreti, Rossmann Store Sales Forecasting, July 2017, (Peng Wang, Dungang Liu)
For retail stores, sales are affected by a combination of factors such as promotional offers, the presence of competitors, assortment levels, and store types. It is very important for stores to understand how these factors influence sales and to use that analysis to predict future sales. Predictive models based on these characteristics are used to forecast sales efficiently and accurately. The predictions help store managers assess store performance against key indicators and prepare, in advance, the measures that should be taken to improve sales, for example introducing promotional offers or understanding the competitive market.

In this paper, we cover the process of building a model to forecast store sales over a given period based on certain attributes, using a store sales dataset from Kaggle. Different modeling techniques are explored: random forest, gradient boosting, and a time series linear model, all built in R. The models are trained on the training dataset and compared using root mean square percentage error (RMSPE), and the best-performing models are then used to forecast store sales. Since the test dataset does not contain sales values, prediction error is assessed by submitting the outputs to Kaggle.
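
For reference, the RMSPE metric can be written directly in R; following the Kaggle convention for this competition, days with zero sales are excluded from scoring (a minimal sketch):

    rmspe <- function(actual, predicted) {
      keep <- actual != 0                                  # zero-sales days are ignored
      sqrt(mean(((actual[keep] - predicted[keep]) / actual[keep])^2))
    }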

Matt Policastro, District Configuration Analysis through Evolutionary Simulation, July 2017, (Peng Wang, Michael Magazine)
This capstone replicates a methodology for identifying biased redistricting plans in a new context. Rather than electoral districts, twenty-four of the City of Cincinnati’s fifty neighbourhoods across three of the five Cincinnati Police Department districts were chosen as the units of analysis. It should be noted that this project did not constitute a rigorous analysis of potentially-biased districting practices; instead, this project identified advantages, trade-offs, and other challenges related to implementation and analysis. While the results of the evolutionary algorithm-driven simulation suggested deficiencies in the current implementation, the underlying methodology is sound and provides a basis for future improvements in evaluation criteria, computational efficiency, and evolutionary operators.

Ritesh Gandhi, Gender Classification by Acoustic Analysis, July 2017, (Dungang Liu, Liwei Chen)
With the advent of machine learning techniques and human-machine interaction, automatic speech recognition is finding practical uses in today's world. As a result, gender classification based on the acoustic properties of a speaker's voice is applicable in a range of fields. The work starts with extracting voice characteristics from large databases of human voice samples, then processes and analyzes those features to propose the best model for implementation. The purpose of this paper is to perform a comparative study of gender classification algorithms applied to voice samples. Extreme Gradient Boosting (XGBoost), random forest, support vector machine (SVM), and neural network models are trained and their results compared to determine the best classifier for gender. For all models, average performance exceeds 95% and the misclassification rate stays below 5%. The final results suggest that random forest is the best of the classifiers used for gender recognition.
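
A sketch of the XGBoost piece in R, assuming a numeric feature matrix "X" of acoustic measures and a 0/1 "male" label (names and hyperparameters are illustrative):

    library(xgboost)

    fit <- xgboost(data = as.matrix(X), label = male,
                   objective = "binary:logistic",
                   nrounds = 200, max_depth = 4, eta = 0.1,
                   verbose = 0)

    prob <- predict(fit, as.matrix(X))
    mean((prob > 0.5) != male)     # in-sample misclassification rate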

Anurag Maji, Analysis of the Global Terrorism Data, July 2017, (Yichen Qin, Edward Winkofsky)
The problem we address is predicting whether an attack will result in casualties given the nature and characteristics of the attack. The dataset was obtained from Kaggle [10]. The motivation behind this project was to understand how terror attacks have spread over time and across regions, and which features have been the key drivers of such incidents. A detailed analysis was performed of the spatial and temporal characteristics observed in the majority of attacks. The target variable was converted from continuous to categorical, as it was deemed more important to know whether there would be civilian casualties than to know the magnitude of the damage to life.

Krishnan Janardhanan, Win Probability Model for Cricket, July 2017, (Peng Wang, Ed Winkofsky)
Cricket is a popular team sport, played around the world, between batters and bowlers. Each team has a limited number of resources in the form of wickets and balls. To understand the impact of these resources and the current situation of the game, a win probability model was created that estimates the probability of a team winning. Models were built using logistic regression, boosted classification trees, and local regression, and were compared on metrics such as area under the curve and misclassification rate. The most suitable win probability model was chosen and applied to a game to examine its predictions. Win probability models can be used to evaluate player value and contribution, and by betting sites to calculate the odds of a team winning.
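
A sketch of the logistic-regression version of such a model, assuming a ball-by-ball data frame "chase" describing the chasing side's remaining resources and a "won" indicator (the column names are hypothetical):

    # Win probability as a function of the match situation for the chasing side
    fit <- glm(won ~ runs_required + balls_remaining + wickets_in_hand,
               data = chase, family = binomial)

    # Estimated win probability at a given game state
    newstate <- data.frame(runs_required = 45, balls_remaining = 36, wickets_in_hand = 6)
    predict(fit, newdata = newstate, type = "response")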

Kuldip Dulay, Fraud Detection, July 2017, (Yan Yu, Edward Winkofsky)
Credit cards have become an integral part of our financial system, and most people use them for daily transactions. Given the huge volume of transactions that occur, it is very important to ensure that these transactions are valid and were performed by the card owner.

With advances in machine learning algorithms, it has become possible to narrow the search for fraudulent transactions down to a small number of records that can later be verified manually. For this project, I implemented four such machine learning techniques to identify fraudulent transactions. Predictive models were built using these four techniques, and the best model was identified based on a few selected performance criteria.

Nidhi Mavani, How Can We Make Restaurants Successful Using Topic Modeling and Regression Techniques, July 2017, (Dungang Liu, Liwei Chen)
Yelp is an online platform, available as both a website and an app, where people write about their experiences at places they have visited. Yelp published a competition dataset containing information on businesses across the US, Canada, Germany, and the UK, along with their check-ins and reviews. The objective of the project is to identify the factors that affect the business of restaurants in the states of Ohio, Wisconsin, Illinois, and Pennsylvania. About 6.2K restaurants with 50 attributes and 2M reviews are analyzed. The analysis spans two fronts: first, analyzing the review text to identify the topics customers care about most, using a topic modeling technique called Latent Dirichlet Allocation (LDA); second, finding the features of highly appreciated restaurants using logistic regression. Thus, both qualitative and quantitative data are analyzed to understand customers' preferences.
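
A sketch of the LDA step, assuming the review text has already been cleaned and converted into a document-term matrix "dtm" (the number of topics is illustrative):

    library(topicmodels)

    lda_fit <- LDA(dtm, k = 8, control = list(seed = 2017))

    terms(lda_fit, 10)        # top 10 terms for each topic
    topics(lda_fit)[1:5]      # most likely topic for the first few reviews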

Aditya Bhushan Singh, Price Prediction for Used Cars on eBay, July 2017, (Dungang Liu, Liwei Chen)
With the advent of e-commerce, everything from household items to cars is now available online, and changing market trends indicate that many demographics prefer to shop for almost everything from the comfort of their own homes. In this analysis, we predict the price of second-hand cars whose ads were posted on eBay Kleinanzeigen, based on various attributes of the cars made available by the sellers. This will help prospective buyers obtain accurate estimates of a car's value while also helping sellers price their cars at an optimal level. To accomplish this, different data mining algorithms are applied and evaluated to identify the best solution for this problem. Once the best solution has been established for this use case, it can be transferred to other products sold across the e-commerce spectrum.

Ishant Nayer, Airbnb Open Data for Boston, July 2017, (Yan Yu, Yichen Qin)
The purpose of the analysis was to show how Airbnb is really being used and how it affects local neighborhoods. By analyzing reviews from Airbnb's own data, we can judge which areas are most popular, which apartment types are most commonly used, and how listings are reviewed. Airbnb started an open data initiative in which it disclosed data by location. The data were analyzed in R, and visualizations such as Google Static Maps and word clouds were integrated with sentiment analysis to highlight sentiment by location across the Boston area. This kind of analysis offers a holistic view in which the character of a neighborhood can be picked up through analytics. Recommendations can be made to Airbnb or to the people who list their places on the website; by acting on them, Airbnb can improve its service, leading to happier customers and better business. Sentiment analysis results were presented using faceted vertical bar graphs, word clouds, horizontal bar charts, and similar visualizations. The analysis shows that, overall, the listings in Boston, MA convey a positive vibe.

Yash Sharma, Image Recognition, July 2017, (Bradley Boehmke, Liwei Chen)
Computer vision deals with the automated extraction, analysis, and understanding of information from images. The field has enormous use cases, and organizations such as Google, Tesla, Baidu, and Honeywell have invested significant resources in researching and developing computer vision technologies. Computer vision can be applied to autonomous vehicles, language translation, wildlife conservation, medical solutions, forensics, census taking, and many other areas. Character recognition can be taken as a first step into computer vision. This project uses data from a Kaggle competition in which 42,000 labeled images of handwritten digits were provided and participants had to build models that accurately recognize the digits 0 through 9. Machine learning techniques including principal component analysis, random forests, and artificial neural networks were used to build models trained to identify handwritten digits. Predictions from each model were compared, and metrics such as precision, recall, and F1 score were used to judge the accuracy of the predictions.

Aditya Kuckian, Loan Default Prediction, June 2017, (Dungang Liu, Liwei Chen)
Loan default is one of the most common problems banks face across their assets, and it worsens during economic downturns. This project comes from an online competition on HackerEarth in which a bank wants to control its non-performing assets by identifying, in a timely manner, the propensity of applicants to default. The data provided relate to loan applications, customer engagement and demographics, and credit information, with ~532K records and 45 features. The scope of the project was to identify the characteristics of loan defaulters for credit card and house-purchase loans; this information was extracted from the 'purpose' column in the data. Machine learning classification models, logistic regression and a gradient boosting machine, were used for this purpose, and their predictions were compared on concordance and area under the curve (AUC). Both techniques identified similar distinguishing characteristics of loan defaulters from factors such as the number of inquiries and credit lines, delinquency metrics, verification status, and grade. The variable 'total interest received till date' showed contrasting behavior. The gradient boosting machine significantly improved the predictions for credit card defaults.