2000 and Prior
Svetlana Nikolaeva, Trading Rules and Stock Returns: A Simulation Analysis, July 21, 2000 (David Kelton, David Rogers, Gary Raines)
This paper tests three popular trading rules used in the technical analysis of securities: Moving Averages, the Relative Strength Index, and Lane's Stochastics. The indicators are applied to simulated stock-price time series generated for six different market environments, and standard statistical analysis is used to compare stock returns following buy and sell signals. Overall, the results provide support for all of the studied trading strategies: the returns following buy signals are higher than the returns following sell signals. Moreover, the absolute difference between the sell and buy returns is larger for more volatile markets. The method developed in the paper can be used for preliminary testing of any stock-trading rule in any specific market environment.
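As a rough illustration of the kind of test described above (my own sketch, not the author's code), the snippet below simulates a geometric-Brownian-motion price series, generates moving-average crossover buy/sell signals, and compares mean next-day returns after each signal type. The window lengths, drift, and volatility are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu, sigma = 2000, 0.0003, 0.02                 # assumed daily drift and volatility
prices = 100 * np.exp(np.cumsum(mu - 0.5 * sigma**2 + sigma * rng.standard_normal(n)))

def sma(x, w):
    """Simple moving average with window w (NaN until enough history)."""
    out = np.full_like(x, np.nan)
    c = np.cumsum(np.insert(x, 0, 0.0))
    out[w - 1:] = (c[w:] - c[:-w]) / w
    return out

fast_w, slow_w = 10, 50
fast, slow = sma(prices, fast_w), sma(prices, slow_w)
ret = np.diff(prices) / prices[:-1]               # ret[t] = return from day t to day t+1
days = np.arange(slow_w, n - 1)                   # days with full MA history and a next-day return
buy_days = days[fast[days] > slow[days]]          # fast MA above slow MA: buy regime
sell_days = days[fast[days] <= slow[days]]        # otherwise: sell regime
print(f"mean return after buy signals:  {ret[buy_days].mean(): .5f}")
print(f"mean return after sell signals: {ret[sell_days].mean(): .5f}")
```

On a pure random walk the two means should be similar; the interesting comparisons arise when the same statistics are computed across differently parameterized market environments, which is the design the abstract describes.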
Qiang Lin, A Survey of Power Analysis in Design of Experiments, July 21, 2000 (Martin Levy, David Rogers, Jeffery Camm)
This is a technical report summarizing the book 'Statistical Power Analysis for the Behavioral Sciences' by Jacob Cohen. The power of a statistical test is the probability of rejecting the null hypothesis, based on the sample results, when the null hypothesis is false. We want the power to be high so that when we cannot reject the null hypothesis based on the sample results, we know the probability that we have failed to reject a false null hypothesis is very low. For some statistical tests, power analysis and sample-size analysis can be very complicated. This report summarizes methods for computing power values and the sample sizes needed to obtain desired power for different tests. For each test, the definitions of the important parameters and the computational methods are followed by illustrative examples. SAS IML programs are provided for each example. The power tables are not reproduced in the report because using the SAS programs to compute power can be much easier than looking up values in a table. This report can be used as a handbook for obtaining power and sample-size values for different statistical tests.
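For concreteness, here is a small sketch of how power can be computed for a two-sided two-sample t-test from Cohen's effect size d using the noncentral t distribution. This is my own illustration (the report itself uses SAS IML programs); the effect size and target power below are just the standard textbook example.

```python
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test for Cohen's effect size d
    with n_per_group observations in each group."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2.0)              # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group sample size reaching the target power (simple search)."""
    n = 2
    while power_two_sample_t(d, n, alpha) < target:
        n += 1
    return n

print(round(power_two_sample_t(d=0.5, n_per_group=64), 3))   # about 0.80
print(n_for_power(d=0.5))                                      # about 64 per group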
Boris A. Orlov, An Analysis of Impact of Price Protection on Supply Chain Profits in Short-Cycle Industries, June 6, 2000 (Nikhil Jain, Michael Magazine, Sean Willems)
In short-life-cycle industries such as the personal computer industry, the price and costs of a product decline rapidly over its life cycle. Declining prices increase the cost of holding a unit of inventory, so without price protection distributors would hold less inventory, increasing unsatisfied demand. Under price protection, the manufacturer compensates the distributor for the decline in price, either over a specified period of time or on a proportion of unsold units. A two-period inventory model is developed in order to measure the influence of price protection on channel profitability in short-life-cycle industries. The level of price protection that maximizes the individual profits of the manufacturer and the distributor is different from the optimal level of price protection that maximizes the total profit of the supply chain. Examples are given to illustrate the impact of changes in profit margins on the optimal level of price protection. Some implications for supply chain management are discussed briefly.
Reeja Marath, Integration of E-Business into the Supply Chain, June 6, 2000 (Ram Ganeshan, Michael Magazine, Nikhil Jain)
The idea of doing business electronically has been around for some time. Today many companies are moving away from phone and fax to the Internet. Companies have started using the Web to communicate and to achieve real business value by incorporating Internet technology into their core businesses. E-Business is about better customer service, integrating with suppliers and partners, and being able to expand the physical market through electronic means. It is about streamlining current business processes in a way that adds to the value provided to customers. Organizations that succeed in grasping and adopting the new elements of Web-based E-Business have an edge over their competitors. This study focuses on the integration of E-Business into the supply chain. Case studies are presented of various companies that market varied products, offer distinct services, and have implemented Web-based E-Business in their firms. In this project, we conducted a case study of Company X, a distributor of specialty goods located in Cincinnati. We examined the implementation of Web-based E-Commerce in the firm by analyzing the existing system and providing a proposal for implementing a new system. The objectives are to ensure that the front-end order entry and the back end (inventory management, supplier and customer relationship management, and forecasting) are coordinated effectively through efficient integration of information systems. Another goal is to transact business directly between the customer and the company, with minimum response time and negligible overhead costs. To achieve these objectives, a detailed study of the existing bottlenecks was conducted and a proposal to overcome these pitfalls is presented. A cost comparison between the existing and the proposed system, based on the required software and hardware resources, is provided. The methodology for implementing the proposed system and the areas where benefits would be achieved are also described.
Eric W. Kramer, A Heuristic Method for the Honors Plus Program Interview Scheduling at the University of Cincinnati, June 2, 2000 (Michael Magazine, Norman Baker, Jeffrey Camm)
Timetabling is an area in which it has often been difficult to generate solutions. Many universities have difficulty scheduling courses, exams, and other activities that require assigning various entities to specified time periods. The Interview Scheduling System is a problem that requires scheduling a series of interviews between employers and Honors Plus students at the University of Cincinnati's College of Business Administration. The students are freshmen who will have completed their first year of study in June and are filling positions as interns with companies in the greater Cincinnati area. In January of each school year, scheduling of companies to interview students begins. After the companies have been assigned interview times, the students are assigned interview times with the companies. Finally, after the interviews are conducted in February, students are assigned as interns with the various companies. A heuristic method based on a set of integer programs is developed for solving the interview-scheduling problem. The problem is formulated in terms of reducing conflict between interview times and student course schedules. The heuristic yields a solution to an otherwise difficult problem.
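A toy version of the kind of integer program underlying such a heuristic might look as follows. This is purely illustrative: the single-company simplification, the data, and the variable names are my assumptions, not the author's formulation. Each student receives exactly one interview slot, slots have limited capacity, and the objective minimizes assignments that conflict with course schedules.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

students = ["s1", "s2", "s3", "s4"]
slots = ["Mon9", "Mon10", "Tue9"]
capacity = {"Mon9": 2, "Mon10": 1, "Tue9": 2}
# conflict[s][t] = 1 if slot t overlaps student s's class schedule (made-up data)
conflict = {"s1": {"Mon9": 1, "Mon10": 0, "Tue9": 0},
            "s2": {"Mon9": 0, "Mon10": 1, "Tue9": 1},
            "s3": {"Mon9": 0, "Mon10": 0, "Tue9": 1},
            "s4": {"Mon9": 1, "Mon10": 1, "Tue9": 0}}

prob = LpProblem("interview_scheduling", LpMinimize)
x = {(s, t): LpVariable(f"x_{s}_{t}", cat=LpBinary) for s in students for t in slots}

prob += lpSum(conflict[s][t] * x[s, t] for s in students for t in slots)   # total conflicts
for s in students:
    prob += lpSum(x[s, t] for t in slots) == 1                 # each student gets one slot
for t in slots:
    prob += lpSum(x[s, t] for s in students) <= capacity[t]    # slot capacity

prob.solve()
print("conflicts:", value(prob.objective))
print({s: t for (s, t), var in x.items() if var.value() > 0.5})
```

The thesis's heuristic works with a set of such integer programs rather than a single one; this sketch only shows the basic assignment-with-conflicts structure.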
Gautam Dalvi, Finding All Optimal Solutions for the Generalized Set Covering Problem, May 19, 2000 (Jeffrey Camm, David Kelton, James Cochran [Louisiana Tech University])
The generalized set covering problem (GSP) arises in the development of an optimal network of land sites for the conservation of natural and biological resources. Since developing a conservation network may involve purchasing or leasing sites from existing owners, an optimal solution obtained by solving the GSP may not always be feasible to implement within budget constraints. Consequently, during negotiations with site owners, the decision-makers must be aware of alternative ways of developing the network and of the relative importance of the sites in ensuring an optimal network, which is represented by their irreplaceability indices (IRI). Since the IRI is the percentage of all optimal solutions in which a given site is present, we must determine all optimal solutions to the GSP to compute the IRI for any site. In this project, we study the percent reservation problem, formulated as a GSP, for the New South Wales National Parks and Wildlife Service of Australia. We first explore computational issues involved in determining all optimal solutions for the percent reservation problem. We then present a problem reduction technique and two algorithms, namely the restrictive enumeration algorithm and the replacement-site algorithm, for estimating the IRI. Problem reduction decreases the solution space and identifies some sites that are absolutely essential for the optimal network. The restrictive enumeration algorithm allows new optimal solutions to be generated in a controlled way. The replacement-site algorithm algebraically generates a large number of optimal solutions in a very short time. We present computational results using the above three algorithms for data sets provided by the park services and evaluate the efficacy of the algorithms in determining all optimal solutions.
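One simple way to enumerate all optimal solutions of a small set-covering instance and compute irreplaceability indices is to fix the optimal objective value and repeatedly cut off each optimal solution found. The sketch below is my own illustration with made-up data and a plain "no-good cut" loop; it is not the project's restrictive enumeration or replacement-site algorithm, which are designed to cope with realistically large instances.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value, LpStatusOptimal

# cover[f] = set of sites that cover feature f (made-up instance with alternative optima)
cover = {"f1": {"A", "B"}, "f2": {"B", "C"}, "f3": {"C", "D"}, "f4": {"A", "D"}}
sites = sorted({s for ss in cover.values() for s in ss})

def build():
    prob = LpProblem("gsp", LpMinimize)
    x = {s: LpVariable(f"x_{s}", cat=LpBinary) for s in sites}
    prob += lpSum(x.values())                       # minimize the number of selected sites
    for ss in cover.values():
        prob += lpSum(x[s] for s in ss) >= 1        # every feature must be covered
    return prob, x

prob, x = build()
prob.solve()
z_star = round(value(prob.objective))               # optimal number of sites

optima = []
prob, x = build()
prob += lpSum(x.values()) <= z_star                 # restrict attention to optimal-value solutions
while prob.solve() == LpStatusOptimal:
    sol = {s for s in sites if x[s].value() > 0.5}
    optima.append(sol)
    prob += lpSum(x[s] for s in sol) <= z_star - 1  # cut off this particular optimal solution

iri = {s: sum(s in sol for sol in optima) / len(optima) for s in sites}
print(f"{len(optima)} optimal solutions of size {z_star}; IRI = {iri}")
```

On this tiny instance there are two optimal covers, so every site has an IRI of 0.5; a site appearing in every optimal solution would have an IRI of 1 and be irreplaceable.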
Linda A. Hirsch, Telephone vs. Internet Interviewing - A Comparison of Scale Usage, May 19, 2000 (Norman Bruvold, David Rogers, Martin Levy)
For many years, telephone interviewing has been the cornerstone for data collection on countless marketing research studies. Now, however, with the influx of telephone management options (voice mail, answering machines, Caller ID, etc.) and rising refusal rates among those who can be reached, the research community must explore alternative means for collecting quality data. The Internet has the potential to be at the forefront of the next generation of data collection for the marketing research industry. As such, it is important to assess the quality of the results obtained from this medium. This research, which was conducted among individuals with access to the Internet, examines the similarities and differences between data collected via the Internet and data collected via telephone interviewing. Specifically, it explores participation rates, scale usage, and the impact of offering respondents a 'Don't Know' response on Internet surveys. This study also compares Internet and telephone interviews from the respondent's perspective by examining the extent to which they enjoyed the interview experience and their likelihood to participate in similar studies in the future.
Dapeng Cui, Archetypal Analysis and Its Applications in Business Research, February 7, 2000 (James Cochran [Louisiana Tech University], Jeffrey Camm, Martin Levy)
There are multiple statistical methods for analyzing multivariate data. This paper discusses and illustrates a recently developed multivariate technique, archetypal analysis, and explores its applications to business problems. Archetypal analysis, developed by Cutler and Breiman (1994), arose from the need to find archetypal patterns, mixtures of which can well represent each observation in a data set. It also requires that the archetypal patterns themselves be mixtures of the observations in the same data set. Archetypes are constructed by minimizing the squared error that results from representing each individual as a mixture of archetypes. Two applications to survey data in this paper show that archetypal analysis is valuable because it aids in identifying archetypal patterns in the data and in analyzing and understanding the heterogeneity of consumers in a market. Another application of archetypal analysis to conjoint data, however, indicates that archetypal analysis is not always helpful for analyzing all respondents' data, probably because of the vast heterogeneity of consumer behavior. Limitations of archetypal analysis are analyzed and discussed.
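The sketch below conveys the alternating structure of the method: observations are expressed as mixtures of archetypes, and archetypes are in turn expressed as mixtures of observations. It is a heavily simplified stand-in for the Cutler-Breiman algorithm, written by me for illustration; the sum-to-one constraints are handled with a standard penalty-row trick, and the initialization, iteration count, and demo data are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def simplex_nnls(F, g, big=200.0):
    """min ||F w - g|| with w >= 0 and sum(w) ~ 1, via a heavy penalty row."""
    Fa = np.vstack([F, big * np.ones((1, F.shape[1]))])
    ga = np.append(g, big)
    w, _ = nnls(Fa, ga)
    return w

def archetypal_analysis(X, k, n_iter=30, seed=0):
    """Alternating scheme for X ~ A @ Z with Z = B @ X; rows of A and B lie
    (approximately) on the simplex. Returns mixtures A, archetypes Z, and RSS."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = X[rng.choice(n, size=k, replace=False)]              # start from k data points
    for _ in range(n_iter):
        A = np.array([simplex_nnls(Z.T, x) for x in X])      # each observation as a mixture of archetypes
        Z_target = np.linalg.pinv(A) @ X                      # unconstrained best archetypes given A
        B = np.array([simplex_nnls(X.T, z) for z in Z_target])  # pull archetypes back into the data's convex hull
        Z = B @ X
    rss = np.linalg.norm(X - A @ Z) ** 2
    return A, Z, rss

# Tiny demo on synthetic two-dimensional data
X = np.random.default_rng(1).normal(size=(200, 2))
A, Z, rss = archetypal_analysis(X, k=3)
print("archetypes:\n", Z.round(2), "\nRSS:", round(rss, 2))
```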
Zaizai Lu, Infant Feeding Behavior and its Impact on Child's Health in China, January 27, 2000 (Martin Levy, Marcia Bellas, Jeffrey Camm)
This study examined the factors affecting children's feeding behavior in China, and the impact of feeding behavior on children's health and growth, using the 1993 China Health and Nutrition Survey data. I selected 330 children aged 3 or younger for the final sample and used the 222 children with feeding information in the final analysis. The data showed that living area, household income, and the mother's age, educational level, occupation, and smoking or drinking habits do not have any significant effect on a child's feeding behavior. The father's smoking habit and occupation have a significant effect on feeding behavior. A child's gender also significantly affects feeding behavior, in the expected direction. The data do not support the argument that breastfed children have lower body weight and height, nor do they support the argument that breastfed children are healthier than non-breastfed children. The data show that a child's feeding behavior or duration of breastfeeding does not have any significant effect on his or her health status or growth indices. I recommend that a more representative sample with more complete and clean data be used in future similar studies.
Kenneth W. Schmahl, Application of an Unconstrained Multi-Product Newsboy Model for a Style Goods Business, September 8, 1999 (Amitabh Raturi, David Rogers, Michael Magazine)
Inventory analysis is critical to the profit and loss of many businesses; this is especially true in the style-goods retail market. The fickle nature of fashion and fads makes it important to estimate inventory requirements accurately in order to be successful in this industry. Because of the seasonality of fashion goods, it is critical to find a balance between overestimating and underestimating the demand for each season's inventory. A common technique for such an analysis is the newsboy problem. This paper examines the inventory requirements of a maternity wear rental business, Classic Maternity Sales and Leasing. An applied analysis of the unconstrained multi-product newsboy inventory model is used to examine the inventory needs of Classic Maternity. I discuss how the newsboy model has been modified to meet the criteria necessary for the inventory analysis of such a business. I also provide some sensitivity analysis to show the effects of overestimating or underestimating both the demand for the maternity wear and its salvage value. Included in the paper is a brief comparative study of other literature relating to the newsboy problem and the extensions that have been made to it.
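A minimal sketch of the single-item newsboy calculation, with made-up numbers rather than the paper's data, showing how the optimal stocking level responds to the salvage-value assumption; in the unconstrained multi-product case the same calculation is applied item by item.

```python
from scipy.stats import norm

def newsboy_q(mu, sigma, price, cost, salvage):
    """Optimal order quantity for normally distributed demand:
    critical fractile = underage / (underage + overage)."""
    underage = price - cost           # profit lost per unit of unmet demand
    overage = cost - salvage          # loss per unit left over at season's end
    cr = underage / (underage + overage)
    return mu + sigma * norm.ppf(cr)

mu, sigma, price, cost = 100, 30, 80.0, 35.0       # assumed demand and economics
for salvage in (0.0, 10.0, 20.0, 30.0):
    print(f"salvage {salvage:5.1f} -> order {newsboy_q(mu, sigma, price, cost, salvage):6.1f}")
```

Raising the salvage value raises the critical fractile and therefore the order quantity, which is the kind of sensitivity the abstract refers to.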
Molina Beck, Approaches to Handling Missing Data, August 31, 1999 (Martin Levy, Norman Bruvold, Jeffrey Camm)
Missing data occur in statistical analysis in most practical situations. They present a problem since the units with missing data represent an absence of information, so that overall there is a loss of information. For example, model selection and estimation for time series is based on the assumption that the time series is complete. However, in practice, this is not usually the case. Incomplete series should not be fitted with models as this can lead to a serious lack of fit, especially when the number of missing observations is large. For the same reason, it is also not advisable to simply omit the missing observations from the series. Further, most common software packages that are used for estimation, such as SAS, SPSS, or RATS, will produce errors in analyses with missing observations, since their procedures expect input data sets to contain observations for a contiguous time sequence. This poses the question of how to estimate a model for such data and how to estimate the missing observations if these values are of interest in themselves. Historically, missing data have been estimated in an 'ad hoc' manner. The traditional approaches to estimation consist of discarding the observations with missing values, imputing them by replacing these values with the means of available observations, or regressing the missing values on the observed values for a case and replacing the missing values by the predicted values thus obtained. In recent years, researchers have advocated the use of model-based procedures. A model is defined for the missing data, and inferences are based on the likelihood under that model, with parameter estimation being done by such procedures as maximum likelihood. This approach has the advantage of flexibility and the avoidance of ad hoc methods, since model assumptions are known and can be evaluated. In order to maximize the likelihood function for these models, several iterative algorithms such as the Newton-Raphson algorithm, the EM algorithm, and the Kalman filter are discussed and evaluated, both for univariate and multivariate data. The application of the EM algorithm in evaluating means and covariance matrices, in multiple regression, and in time-series data is also discussed. This project compares the various methods of estimating missing data for the purpose of statistical analysis. The first part of the project is a discussion and comparison of the different ways of estimating missing data, and the latter part consists of the practical application of one or more of these methods to the available data.
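As one concrete example of the model-based approach, the sketch below implements the textbook EM iteration for the mean and covariance of a multivariate normal sample with values missing at random. It is my own illustrative code (missing entries coded as NaN), not taken from the project, and the demo data are synthetic.

```python
import numpy as np

def em_mvn(X, n_iter=200, tol=1e-8):
    """ML estimates of the mean and covariance of a multivariate normal
    when entries of X are missing at random (coded as NaN)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    sigma = np.cov(np.where(miss, mu, X), rowvar=False)      # start from mean-imputed data
    for _ in range(n_iter):
        X_hat = np.where(miss, 0.0, X)
        C = np.zeros((p, p))                                  # accumulated conditional covariances
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            if not o.any():                                   # row entirely missing
                X_hat[i] = mu
                C += sigma
                continue
            reg = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            X_hat[i, m] = mu[m] + reg @ (X[i, o] - mu[o])     # E-step: conditional means
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - reg @ sigma[np.ix_(o, m)]
        mu_new = X_hat.mean(axis=0)                           # M-step
        centered = X_hat - mu_new
        sigma_new = (centered.T @ centered + C) / n
        if np.abs(mu_new - mu).max() < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma

# Demo: recover the parameters of a bivariate normal with 20% of entries deleted
rng = np.random.default_rng(0)
Y = rng.multivariate_normal([1.0, -2.0], [[2.0, 0.8], [0.8, 1.0]], size=500)
Y[rng.random(Y.shape) < 0.2] = np.nan
mu_hat, sigma_hat = em_mvn(Y)
print(mu_hat.round(2), "\n", sigma_hat.round(2))
```

The correction matrix C is what distinguishes true EM from simple iterative regression imputation: without it, the estimated covariance would be biased toward zero.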
Kemal H. Sahin, Development of Scheduling and Waste Minimization Techniques for Batch Processing Plants, August 27, 1999 (Amitabh Raturi, Jeffrey Camm, Amy Ciric)
Batch processing is the preferred option for industries that produce a wide range of products in small amounts. Scheduling the available processing equipment to satisfy demand for all products has been investigated in detail in operations research. Many of these methods have concentrated on optimizing economic performance. However, waste recovery, which can contribute very large costs, has not been analyzed in detail in a way that combines economic and scheduling considerations. The aim of this project is to develop a method that includes waste-recovery considerations in the scheduling of batch processes. Two different approaches are used to analyze the effect of waste-treatment costs. An aggregated approach simultaneously determines the optimal schedule for both processing and waste treatment, while disaggregated methods develop waste-recovery schedules after the optimization of the production section is completed. Both simultaneous and continuous waste-treatment operations are considered for comparison purposes: under simultaneous operation, waste must be treated every time a product is generated, while continuous operation reflects the common practice of treating waste continuously. The models are developed for a single-product/single-waste process, as well as a multi-product/multi-waste operation. Case studies have been used to determine the efficiency of both methods. The aggregated approach yields cost savings on the order of 6% over the disaggregated approaches but takes seven times longer even for small problems. For larger problems, aggregated approaches may be too complex and time-consuming for realistic implementation.
Meghna Sinha, An Evaluation of Combined Ranking, Selection and Multiple Comparison Procedures in an Industrial Application, August 25, 1999 (David Kelton, Jeffrey Camm, James Cochran [Louisiana Tech University])
In the simulation literature, ranking and selection procedures have often been recommended for comparing system designs, particularly when the goal is to select the best design. However, in empirical research multiple comparison procedures are commonly employed. For example, the researcher interested in making pair-wise comparisons among the groups can do so by constructing a confidence interval for the difference between the performance measures of the pair of results. The difference between ranking and selection procedures and multiple comparison procedures is analogous to the difference between hypothesis testing and interval estimation. The former results in a decision, rather than an estimate, so it is less informative. Typically, ranking and selection procedures provide inference only about the design selected as the best or one of the best in some sense. Two-stage sampling or sequential sampling is needed to attain a pre-specified probability of selecting the best design. In contrast, multiple comparison procedures provide inference about relationships among all system designs and can be implemented in a single stage of sampling, but they do not guarantee a decision. However, when using simulation experiments to estimate the expected performance, the best system can neither be selected nor the differences between the systems be bounded with certainty. In 1995, Nelson and Matejcik presented procedures that simultaneously control the error in selecting the best and in bounding the differences. These procedures combine the standard indifference-zone selection procedures, that control the error when choosing the best, and the standard multiple-comparison procedures that control the error in making simultaneous comparisons. The procedures assume that data are normally distributed, but they do not assume known or equal variances across systems. In this paper we apply the simulation ranking technique and the multiple comparison procedure simultaneously, as proposed by Nelson and Matejcik, to compare three product-mix scenarios in a manufacturing plant. The objective is to determine the optimal mix, where an optimal mix is one that allows all machines to remain idle for a minimum amount of time. The results will also determine how much better the best mix is relative to each alternative. We also compare the Bonferroni selection procedure to Nelson and Matejcik's new procedure, NM. Both procedures exploit the use of Common Random Numbers (CRN) to reduce variance and hence reduce computation efforts.
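For the multiple-comparison side of the design, a minimal sketch of Bonferroni-adjusted paired-t confidence intervals for all pairwise differences, exploiting common random numbers by differencing within replications. This is my own illustration on fabricated output data, not the thesis's experiment or the full Nelson-Matejcik NM procedure.

```python
import itertools
import numpy as np
from scipy import stats

def bonferroni_pairwise(Y, alpha=0.05):
    """Y[r, i] = performance of system i on replication r (CRN across systems).
    Returns simultaneous CIs for all pairwise mean differences."""
    n, k = Y.shape
    m = k * (k - 1) // 2                          # number of pairwise comparisons
    t = stats.t.ppf(1 - alpha / (2 * m), n - 1)   # Bonferroni-adjusted critical value
    out = {}
    for i, j in itertools.combinations(range(k), 2):
        d = Y[:, i] - Y[:, j]                     # paired differences, variance reduced by CRN
        half = t * d.std(ddof=1) / np.sqrt(n)
        out[(i, j)] = (d.mean() - half, d.mean() + half)
    return out

# Fabricated example: 20 replications of total idle time for 3 product mixes
rng = np.random.default_rng(7)
common = rng.normal(0, 1, size=(20, 1))           # shared noise mimicking common random numbers
Y = np.array([5.0, 4.2, 4.6]) + common + rng.normal(0, 0.3, size=(20, 3))
for pair, ci in bonferroni_pairwise(Y).items():
    print(pair, np.round(ci, 2))
```

Because the systems share the common noise term, the paired differences have much smaller variance than the raw outputs, which is exactly the benefit of CRN that both procedures exploit.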
Xinxin Liu, A Comparative Study of Neural Networks and Statistical Models for Customer Choice Modeling, June 18, 1999 (David Rogers, David Kelton, Norman Bruvold)
This paper is an empirical study intended to be a bridge between the behavioral and statistical lines of research in customer choice behavior. The relationship between retail store characteristics and customer buying behavior from a choice set of two stores is explored using the following approaches: the conditional logit model and the neural network (NN) model. Using a data set of 400 survey responses, a NN was created using store characteristic variables and its accuracy checked with a holdout sample. The same was done for the conditional logit model. The comparison of results revealed that the NN outperformed the conditional logit model in terms of predictive accuracy. Sensitivity analysis was conducted for the NN model and managerial implications were outlined.
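A compact sketch of the comparison design, using scikit-learn on synthetic data. Plain binary logistic regression stands in here for the conditional logit model, and the sample size, features, and network architecture are assumptions for illustration rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "store choice" data: features play the role of store characteristics
X, y = make_classification(n_samples=400, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))

for name, model in [("logit", logit), ("neural net", nn)]:
    model.fit(X_train, y_train)                     # fit on the training split
    print(f"{name:10s} holdout accuracy: {model.score(X_test, y_test):.3f}")
```

The holdout comparison mirrors the paper's design: both models are estimated on the same training data and judged purely on predictive accuracy for the held-out respondents.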
Brian L. Sersion, An Application of Optimization for Establishing a Landfill Sampling Network, June 4, 1999 (Jeffrey Camm, Amitabh Raturi, David Rogers)
Waste-management operations require significant capital expenditure for ground-water sampling of sanitary landfills. High costs associated with outsourcing make internalization of this service an attractive proposition. The facilities-location problem, in this context, involves determining the optimal number and location of sampling teams to service landfill customers. The solution process includes the completion of a customer survey and linear regression to estimate demand for a two-stage mixed integer linear program. The results of this study support a managerial recommendation for Browning-Ferris Industries' landfill-sampling network.
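The core location decision can be sketched as a small mixed-integer program. The data, names, and single-stage simplification below are mine for illustration; the study's actual model is a two-stage MILP with regression-estimated demand.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

landfills = ["L1", "L2", "L3", "L4"]
bases = ["B1", "B2"]                        # candidate sampling-team locations
fixed = {"B1": 120.0, "B2": 150.0}          # annual cost of staffing a team at a base
travel = {("B1", "L1"): 10, ("B1", "L2"): 25, ("B1", "L3"): 40, ("B1", "L4"): 55,
          ("B2", "L1"): 50, ("B2", "L2"): 30, ("B2", "L3"): 15, ("B2", "L4"): 20}

prob = LpProblem("sampling_network", LpMinimize)
y = {b: LpVariable(f"open_{b}", cat=LpBinary) for b in bases}
x = {(b, l): LpVariable(f"serve_{b}_{l}", cat=LpBinary) for b in bases for l in landfills}

prob += (lpSum(fixed[b] * y[b] for b in bases)
         + lpSum(travel[b, l] * x[b, l] for b in bases for l in landfills))
for l in landfills:
    prob += lpSum(x[b, l] for b in bases) == 1          # every landfill is served
for b in bases:
    for l in landfills:
        prob += x[b, l] <= y[b]                          # only an opened base can serve

prob.solve()
print("open bases:", [b for b in bases if y[b].value() > 0.5],
      " total cost:", value(prob.objective))
```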
Sanjay Chadha, Analysis of Salaries of College of Business Administration Professors at the University of Cincinnati, March 25, 1999 (Martin Levy, Norman Bruvold, David Rogers)
In this project a regression model is formulated that explains 80% of the variation in salaries in the College of Business Administration, with some exceptions. The following three research hypotheses were tested using regression: 1) newly hired professors in the College of Business Administration are being offered higher salaries than professors who have been serving for the last 5-15 years; 2) professors' salaries differ by national origin; 3) professors' salaries differ by gender. The results support the first hypothesis, while the other two hypotheses are rejected.
Lubov Skurina, Exchange Rates and the Value of Foreign Operations, March 18, 1999 (Yong Kim, David Rogers, Martin Levy)
In this study I examine the effect of exchange rates on the value of foreign operations. I perform a pooled cross-sectional and time-series regression analysis of company data and include exchange-rate trend and volatility as independent variables. The results indicate that the exchange-rate trend does not have a significant effect on the value of foreign operations, but the volatility of exchange rates has a significant negative effect.
Chay Hoon Lee, The Relationship of Team Members' Cognitive Decision Styles and Team Performance, January 12, 1999 (Charles Matthews, David Rogers, Martin Levy)
In most organizations, teams play a central role in planning and strategic decision making (Gilad & Gilad, 1986). Although many studies have examined the influence of demographic characteristics on team performance, few have examined the cognitive decision-making styles of team members, which can also influence team performance. Hackman and Morris (1975) proposed that the extent to which a team uses the knowledge and skills of its members can influence the quality of the team's performance. Therefore, understanding the team members' cognitive decision-making styles that influence the team's effectiveness seems critical, because teams can shape an organization's future through the decisions they make. The challenge for any organization is to maximize the level of effort and knowledge that teams bring to bear on their performance. Thus, this paper explores the influence of the cognitive decision-making styles of team members on performance.
Jeffrey D. Rieder, Estimating Store-Level Promotion Effects from Market-Level Data, December 10, 1998 (Norman Bruvold, Martin Levy, David Rogers)
The debiasing procedure outlined in "Using Market-Level Data to Understand Promotion Effects in a Nonlinear Model" (Christen et al., Journal of Marketing Research, August 1997) attempts to quantify both the direction and the magnitude of the bias associated with market-level promotion effects. Since merchandising response functions are typically nonlinear, and market-level data are aggregated linearly over a set of heterogeneous stores, market-level estimates of these response functions are often severely biased. Christen et al. claim to be able to estimate the bias and provide a mechanism for reducing it through the application of regression analysis. This research applies the methodology outlined by Christen et al. to a real-world data set and, after some modifications and assumptions are incorporated to fit the methodology to the available data, produces some encouraging results. Using regression analysis, the market-level bias is found to be a function of the marketing environment. The resulting regression model is then used to predict future merchandising responses.
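The basic aggregation-bias point can be seen in a toy simulation. The sketch below is mine, with arbitrary numbers, and is not the Christen et al. procedure: stores respond multiplicatively to promotion, but a regression on linearly aggregated market data recovers a lift that differs from the true store-level lift.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stores, n_weeks, lift = 50, 104, 1.8                         # true multiplicative promotion lift
base = rng.lognormal(3.0, 1.0, size=n_stores)                  # heterogeneous store sizes
freq = rng.uniform(0.1, 0.6, size=n_stores)                    # store-specific promotion frequency
promo = rng.random((n_weeks, n_stores)) < freq
sales = base * np.where(promo, lift, 1.0) * rng.lognormal(0.0, 0.05, size=(n_weeks, n_stores))

# Store-level estimate: average log-lift across stores (promoted vs non-promoted weeks)
logs = np.log(sales)
store_lift = np.exp(np.mean([logs[promo[:, s], s].mean() - logs[~promo[:, s], s].mean()
                             for s in range(n_stores)]))

# Market-level estimate: log(total weekly sales) regressed on the share of stores promoting
share = promo.mean(axis=1)
slope = np.polyfit(share, np.log(sales.sum(axis=1)), 1)[0]
market_lift = np.exp(slope)                                     # implied lift if all stores promoted

print(f"true lift {lift:.2f} | store-level estimate {store_lift:.2f} | "
      f"market-level estimate {market_lift:.2f}")
```

How far the market-level estimate drifts from the truth depends on the store heterogeneity and promotion incidence built into the simulation, which is consistent with the finding that the bias is a function of the marketing environment.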
Girish Kulkarni, Determination of the Optimal Routing for the Consumer Products Division of the University of Cincinnati, October 16, 1998 (Ram Ganeshan, Jeffrey Camm, George Polak [Wright State University])
The Master Plan for the University of Cincinnati envisages a pedestrian-friendly campus with open spaces, positively affecting the quality of student life on campus by creating an environment conducive to an educational experience. One of the major issues is reducing the conflict zones between pedestrians and service-vehicle traffic. The Consumer Products Division supplies soft-drink cans to vending machines in nearly 40 buildings on the West Campus; its duties include servicing these machines with three trucks on a predetermined schedule. The division therefore needed to re-investigate and realign its servicing and routing scheme to the new Master Plan. Using quantitative techniques, we helped the Consumer Products Division by (1) performing an efficiency analysis of the available vending-machine demand data and recommending a servicing schedule, and (2) using optimization techniques to recommend a servicing route that works with the above schedule, resulting in shorter travel times for the vehicles.
Shailesh Kulkarni, An Optimal Clustering Model for Cellular Manufacturing, August 31, 1998 (David Rogers, Jeffrey Camm, James Cochran [Louisiana Tech University])
In this paper the problem of simultaneously clustering parts into part families and machines into machine cells in a cellular manufacturing context is addressed. A mixed integer linear programming model is developed for the problem. This model is solved using conventional branch-and-bound procedures for small problems. Considering the NP-complete nature of this class of problems, a genetic-algorithm-based solution procedure is developed to solve realistically sized problems of larger dimensions. Two problems from the literature are solved using the genetic algorithm. The attractiveness of the proposed model and solution procedure for providing a simultaneous grouping of parts and machines is evaluated on the basis of grouping efficacy.
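Grouping efficacy itself is straightforward to compute; the sketch below (my illustration, not the paper's code) evaluates it for a machine-part incidence matrix given the machine cells and part families produced by any clustering method.

```python
import numpy as np

def grouping_efficacy(M, machine_cell, part_family):
    """Grouping efficacy = (ones inside diagonal blocks) / (total ones + zeros inside diagonal blocks).
    M[i, j] = 1 if machine i processes part j; machine_cell[i] and part_family[j] are cluster labels."""
    M = np.asarray(M)
    inside = np.equal.outer(np.asarray(machine_cell), np.asarray(part_family))
    ones_total = M.sum()
    ones_in = M[inside].sum()
    voids = inside.sum() - ones_in            # zeros inside the diagonal blocks
    return ones_in / (ones_total + voids)

# Small example: 4 machines x 5 parts grouped into two cells
M = [[1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 1, 1]]
print(round(grouping_efficacy(M, machine_cell=[0, 0, 1, 1], part_family=[0, 0, 1, 1, 1]), 3))
```

The measure penalizes both exceptional elements (operations outside the cells) and voids (empty positions inside the cells), which is why it is a common fitness criterion for cell-formation procedures.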
Amanda R. Angle, Fill-Rate Optimization Models for Supply Chain Systems, June 24, 1998 (David Rogers, Michael Magazine, Ram Ganeshan)
Multi-echelon inventory management is very important when attempting to influence the performance of a supply chain. Formulating a complete inventory model often requires more than attempting to achieve minimal inventory levels to reduce holding costs. Customer satisfaction must be taken into consideration or the cost of lost sales could outweigh any inventory costs. In this paper, four models of multi-echelon inventory systems in which several finished goods are produced from a common component are considered. These models optimize base-stock levels when there is a penalty cost for having a backorder, and fill rate is used to measure the customer service level. The first model maximizes the fill rate subject to a budget constraint on holding costs. The second model minimizes the expected number of backorders subject to a budget constraint and a fill-rate constraint. The third model minimizes the penalty costs of backorders plus inventory holding costs subject to a fill-rate constraint. In the last model, the penalty costs of backorders and inventory holding costs are minimized subject to both a budget constraint on holding costs and a fill-rate constraint. In the results section, demand is assumed to be normally distributed in all the models, and a non-linear optimization model is used to determine the base-stock levels.
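For reference, the standard normal-demand expressions behind such models can be sketched as follows (a generic illustration with assumed numbers, not the paper's formulation): expected backorders via the unit normal loss function, the resulting fill rate, and a simple search for the base stock meeting a target fill rate.

```python
from scipy.stats import norm

def expected_backorders(S, mu, sigma):
    """E[(D - S)+] for normally distributed demand D ~ N(mu, sigma^2)."""
    z = (S - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))

def fill_rate(S, mu, sigma):
    """Fraction of demand met from stock with base-stock level S."""
    return 1 - expected_backorders(S, mu, sigma) / mu

def base_stock_for_fill_rate(target, mu, sigma):
    """Smallest base stock (to the nearest 0.1 unit) achieving the target fill rate."""
    S = mu
    while fill_rate(S, mu, sigma) < target:
        S += 0.1
    return S

mu, sigma = 200.0, 40.0                       # assumed demand per period
S = base_stock_for_fill_rate(0.95, mu, sigma)
print(f"base stock {S:.1f}: fill rate {fill_rate(S, mu, sigma):.3f}, "
      f"expected backorders {expected_backorders(S, mu, sigma):.2f}")
```

The paper's models embed these quantities in constrained non-linear programs (budget and fill-rate constraints across several products sharing a common component) rather than solving for a single item as done here.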
Joga R. Palutla, Minimizing Maximum Lateness In a Family Single Machine Scheduling Problem, June 11, 1998 (Michael Magazine, James Cochran [Louisiana Tech University], Amitabh Raturi)
This paper studies the problem of scheduling jobs on a single machine in order to minimize the maximum lateness, where the jobs are grouped into families according to their processing requirements. The problem is NP-hard and computationally intensive, so heuristics are the only feasible means of solving large problems. The paper describes several existing heuristics and analyzes their performance relative to one another and to the optimum. Lower bounds are developed and used in place of the optimal solution in this analysis. The paper attempts to determine the best heuristic for a given set of problem parameters and its closeness to the optimal solution.
Nawal K. Roy, Risk Management: Exploration of Value at Risk, May 29, 1998 (David Rogers, Martin Levy, Ram Ganeshan)
Risk management is the fastest-growing field in the investment and financial industry. This paper covers the most sophisticated methodology of risk management, Value at Risk (VaR) modeling. As an overview paper, it deals with the issues related to Value at Risk modeling: the different methodologies for estimating the VaR parameters, its highlights and shortcomings, and the regulatory status. It also discusses the statistical model of J.P. Morgan's RiskMetrics and expected future developments (the course of future research) in the field of Value at Risk modeling.
Ronald N. Gnau, A Comparison of Logistic Regression and Discriminant Analysis as Classification Techniques, May 26, 1998 (Martin Levy, David Rogers, Norman Bruvold)
Strategic marketing in modern business organizations involves three key elements: segmenting, targeting, and positioning. The development of a sound marketing strategy in today's competitive environment is barely possible without the use of multivariate statistical analysis. Two multivariate techniques that can be useful in assigning customers to the most appropriate market segment are logistic regression analysis and discriminant analysis. Each technique makes assumptions about the type of data used for its variables. The independent variables in logistic regression models can be categorical, whereas discriminant models generally require that the data for the independent variables come from normal populations with identical covariance matrices. This empirical study applies both techniques to the same data to classify customers into market segments, and compares the performance of the two techniques on the basis of classification accuracy.
Glenn A. Dahl, Core Carrier Selection: A Comparison of Solution Approaches, May 8, 1998 (Jeffrey Camm, Martin Levy, David Rogers)
In this work the problem of choosing preferred transportation companies for shipping, called core carriers, is examined. Optimal selection is treated as an extension of the Maximal Set Covering Problem, and three versions are examined. In the first model the desired core coverage is expressed as a percentage of total coverage, and all decision variables are binary. The second model relaxes the binary restriction on the variables that represent lane assignments, and a simple rounding heuristic is used to convert fractional solutions to integer ones. The third version is a goal-programming weighting method: total core load is treated as a goal, allowing the coverage constraint to be removed from the model.
Detelina Marinova, Between Strategic Intent and Inertia: Tracing Individual Knowledge Structure Evolution in Organizations, April 15, 1998 (Martin Levy, Murali Chandrashekaran, David Rogers)
Though organizations often employ multifunctional teams in strategic decision making to ensure maximal information dissemination in the organization, the alleged benefits of teams are seldom realized. The central objective of this paper is to explore the process underlying individual learning in group settings, and to secure an understanding of why groups often do not produce extensive collaborative efforts. Accordingly, we develop a conceptual model that traces individual behavior as well as knowledge-structure evolution in group settings. Our central thesis is that despite the strategic intent of each decision maker to make a 'good' decision and choose the 'best' course of action from a set of alternatives, communication with group members is likely to be shaped by the balance of intent and inertia. As a result, communication flow in groups, and hence individual learning, is likely to proceed in a selective fashion. We further identify possible drivers of inertia and propose hypotheses about their effect on individual knowledge-structure evolution as well as on communication and influence in groups. Econometric analyses of data obtained from a longitudinal field experiment converge to strongly support our conceptual model.
Jun Zhou, Low Birth Weight Prediction Models for the State of Ohio, April 3, 1998 (David Rogers, Martin Levy, Edward Donovan)
Low Birth Weight (LBW) prediction models were built based on Ohio birth certificate data from 1993 and 1994. Maternal age, education level, smoking, alcohol consumption, pre-pregnancy weight, race, fetus gender, marital status, and pre-term medical complications were found significant in the logistic model. The study also showed that some interaction terms between the main factors made significant contributions to LBW. Two logistic models were built and validated against the 1995 birth certificate data. The models provide a quantitative tool for directing limited resources to the population at high risk of LBW in order to achieve a more cost-efficient and economical prevention outcome.
Srilatha S. Sekaripuram, Distribution Planning in Supply Chains - The Equal Periods of Supply (EPOS) Approach, March 26, 1998 (Ram Ganeshan, David Rogers, Michael Magazine)
One of the important challenges facing a distribution manager is the effective control of inventory. Inventory is necessary and useful, but too much inventory is expensive. If improperly managed, inventories become a significant liability, resulting in a reduction of profit and possible erosion of the firm's competitive advantage. Hence, determining the proper inventory-management technique is important for the firm. Distribution resource planning (DRP) is a computerized tool that has been aiding distributors in planning and in solving some of the inherent problems of statistical ordering techniques. Equal periods of supply (EPOS) is a DRP approach to scheduling replenishments for multiple products. Using EPOS with DRP helps to reduce overall costs, keep inventory in check, and make planning convenient. In this paper a heuristic method by which a distribution planner can incorporate the EPOS approach into DRP is presented. Using this method results in optimal costs in situations where transportation costs dominate.
Timothy J. Cantor, Evaluating A Taxonomy of Supply Chain Management Research, November 7, 1997 (Ram Ganeshan, Michael Magazine, Amitabh Raturi)
As we approach the twenty-first century, the evolution of emerging management practices continues to unfold. Supply-chain management is one of the more rigorously debated movements. Supply-chain management covers the flow of goods from the supplier through the manufacturing and distribution chains to the end user. While not difficult to define, its complexity makes for uncertain boundaries and an abstract scope. The areas where the discipline has been researched, and those where opportunities exist, must be identified. By providing a taxonomical understanding, it is determined that at least four such opportunities exist. The taxonomy is hierarchical, consisting of two principal levels. At the strategic level, papers generally deal with the means by which objectives and policies should be developed. At the operational level, authors explore the efficient operation of an established aspect of the chain. Both of these principal levels can then be divided horizontally. At the strategic level, this split results in the sub-levels designated explanatory essays and system representations. At the operational level, the sub-levels are coordination analysis and material-flow analysis. Finally, each of these sub-levels can be further segregated into categories by which a biased selection of the current supply-chain-management literature is classified.
William Pordan, Evaluating NFL Quarterback Performance Efficiency Using Data Envelopment Analysis, July 10, 1997 (Michael Magazine, Jeffrey Camm, James Evans)
Managers are often faced with evaluating the performance of numerous operating units that produce multiple products and services. Comparison analyses can be desirable for identifying which units are performing at an efficient level and which units are utilizing resources in an inefficient manner. This task becomes difficult when there exists no proper valuation mechanism for determining the worth of one product relative to another, or when expended resources are not readily priceable. A mathematical programming method known as data envelopment analysis (DEA) has been applied to such situations in performance assessment. DEA allows each operating unit to assign a unique set of weighting factors to its outputs and inputs so as to maximize its efficiency ratio. Constraints on the weight selections lead to the identification of relatively efficient and relatively inefficient units. This research project presents an overview of the theory and formulation of data envelopment analysis, and offers an application of its use in evaluating the performance efficiency of 1996 National Football League (NFL) quarterbacks. The production of each quarterback is ranked based on his DEA efficiency score, and a comparison is made with the NFL passer rating system currently used by the league.
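A bare-bones version of the DEA calculation, in the CCR multiplier form solved once per decision-making unit. The players, inputs, outputs, and numbers below are invented for illustration and are not the 1996 NFL data used in the project.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

# Invented data: inputs = (pass attempts, sacks), outputs = (passing yards, touchdowns)
inputs = {"QB_A": [520, 30], "QB_B": [410, 18], "QB_C": [580, 45]}
outputs = {"QB_A": [3800, 28], "QB_B": [3100, 25], "QB_C": [3900, 22]}

def ccr_efficiency(target):
    prob = LpProblem(f"dea_{target}", LpMaximize)
    u = [LpVariable(f"u{r}", lowBound=1e-6) for r in range(2)]   # output weights
    v = [LpVariable(f"v{i}", lowBound=1e-6) for i in range(2)]   # input weights
    prob += lpSum(u[r] * outputs[target][r] for r in range(2))           # maximize weighted outputs
    prob += lpSum(v[i] * inputs[target][i] for i in range(2)) == 1       # normalize weighted inputs
    for q in inputs:                                                     # no unit may exceed a ratio of 1
        prob += (lpSum(u[r] * outputs[q][r] for r in range(2))
                 <= lpSum(v[i] * inputs[q][i] for i in range(2)))
    prob.solve()
    return value(prob.objective)

for q in inputs:
    print(q, round(ccr_efficiency(q), 3))
```

Each unit gets to choose the weights most favorable to itself, subject to the same weights keeping every unit's output-to-input ratio at or below one; a score of 1 marks a relatively efficient unit.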
Christopher M. Lynd, Heuristic Solution to a Baseball Scheduling Problem, July 2, 1997 (Michael Magazine, Jeffrey Camm, James Evans)
Heuristic techniques and mathematical programming have often been at odds with one another. The mathematical-programming camp preaches global optimization, whereas the heuristic camp preaches tradeoffs. The question of which method to use should be decided on an individual problem basis. Some problems, especially large combinatorial problems, lend themselves to heuristic techniques. For instance, mathematical-programming techniques such as branch and bound and dynamic programming perform essentially no better than complete enumeration for NP-hard problems like the traveling salesman problem, which has (N-1)!/2 distinct tours. Users and developers must weigh the costs of global optimization, whether in computing time, software, or development dollars, against the resulting benefits. In this paper, I define an NP-hard baseball-scheduling problem. I outline three different approaches to solving the problem: two heuristic techniques and one mathematical-programming technique. The two heuristics employed are tabu search and genetic algorithms; the mathematical-programming technique is integer programming. I present the results and outline the advantages and disadvantages of each technique.
Ian Clough, Body Image2: Data Analyses, July 1, 1997 (Martin Levy, Terri Byczkowski [Cincinnati Children's Hospital], David Rogers)
This document is a report of a number of statistical analyses performed on a variety of data sets. Programs and computer output have been included in the appendices. The work was performed over a ten-month period.
Amy M. Anneken, Applying GIS and Benders' Partitioning to the Uncapacitated Facility Location Problem, June 12, 1997 (Dennis Sweeney, Jeffrey Camm, David Curry)
Facility location problems are very important and practical in business decision making today. The facility location model concerns finding locations to serve customers in an economical and high quality way. This project aims at providing a way to solve these types of problems in a manner that incorporates both objective and subjective means. The objective of this project is to explore the potential for an algorithm that involves both human and mathematical iterations. The problem studied is the Uncapacitated Facility Location problem. A Geographical Information System is used to assist the human decision maker in selecting good solutions. A Benders' partitioning algorithm is used to generate bounds and to suggest alternatives for the decision maker. A geographic computer interface that serves as a front end to an Operations Research algorithm has many advantages. Finding an optimal solution to a problem is the best alternative, but many companies never do this because they do not have the time or the expertise to do so. The results from this project can provide many benefits to both the business and OR community.
Angela Bansal, Discounts/Premiums on Country Funds - Time Series and Multivariate Analysis, June 3, 1997 (Martin Levy, David Rogers, Yong Kim)
In this paper, the time-series behavior of discounts/premiums of closed-end country funds is examined using the models of Hardouvelis, La Porta, and Wizman (1993). The results show that most of the funds of emerging markets trade at a premium. This premium has predictive power for fund returns but not for net asset value returns. Results also show that country funds are a good diversification tool for US investors and that at least three local stock markets are cointegrated with the US market.
Rajdeep Grewal, The Long Run Advertising-Sales Relationship: Incorporating the Impact of Economic and Political-Legal Environments, May 12, 1997 (Martin Levy, Jeff Mills, Raj Mehta)
A methodological framework for investigating marketing parameter functions with time-varying coefficients is adopted to investigate the relationship between market performance (e.g., sales, market share), marketing effort (e.g., advertising, sales promotion), and environmental conditions (e.g., market growth, inflation). The nine-step framework relies on recent methodological developments in the econometric and time series (ETS) literature to present a sequence of statistical tests and estimation techniques. The authors elaborate on the framework to provide a rationale for expecting specific behavior by marketing performance variables, marketing effort variables, and environmental variables. Further, the authors illustrate the framework with the famous case of the Lydia Pinkham Medicine Company.
Jennie Bao Jin, A Markov Chain Analysis of the New York Stock Exchange Composite Index, May 2, 1997 (David Rogers, Martin Levy, Norman Bruvold)
The behavior of stock-market prices has been researched extensively via different empirical methods (Fama 1970, Poterba and Summers 1988, Fama and French 1988, Fama 1991). Whether certain price trends and patterns exist that enable the investor to make better predictions of the expected values of future stock-market prices is still debatable. A number of researchers have shown that both the relative strength of a security in the market and the nature of its successive price movements may be interpreted within the framework of Markov theory (Dryden 1969, Fielitz and Bhargava 1973, Fielitz 1975, McQueen and Thorley 1991), and these studies are modeled in such a way as to provide useful information to individual investors and portfolio managers concerning stock-market movements. While most of the previous work in the area has been done in the individual-security setting, I investigate the relevant Markovian behavior of the entire stock market, represented in this project by the NYSE Composite. Relatively new data (from 1985 to 1995) are used to formulate and test both a first-order three-state (up, unchanged, and down) and a first-order two-state (up and down) Markov-chain model based on daily price changes of the NYSE Composite. Statistical inferences are conducted to test whether the NYSE Composite movements are random, which means the probabilities of the stock-market price going up or down on a daily basis are the same. The organization of the paper is as follows: Section II is a brief review of the literature on Markov-chain analysis of security prices. Section III describes the methodology and data used in this project. In Section IV the three-state Markov-chain model is formulated and estimated. In Section V the two-state Markov-chain model is estimated and a statistical inference test regarding the hypothesis of randomness of stock-market movements is conducted. Section VI is a summary and conclusion of the paper.
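The two-state analysis reduces to a small calculation. The sketch below is my own and runs on simulated data standing in for the NYSE Composite: it estimates the transition matrix of daily up/down moves and tests whether the chance of an up day depends on the previous day's state.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.009, size=2500)       # stand-in for daily index changes
state = (returns > 0).astype(int)                     # 1 = up day, 0 = down day

# 2x2 transition counts: rows = yesterday's state, columns = today's state
counts = np.zeros((2, 2), dtype=int)
for prev, cur in zip(state[:-1], state[1:]):
    counts[prev, cur] += 1

P = counts / counts.sum(axis=1, keepdims=True)        # estimated transition matrix
chi2, pval, _, _ = chi2_contingency(counts)           # independence test of consecutive states

print("transition matrix:\n", P.round(3))
print(f"chi-square = {chi2:.2f}, p-value = {pval:.3f}")   # large p: consistent with randomness
```

With independent simulated returns the rows of the transition matrix are nearly identical and the test does not reject; first-order dependence in the actual index would show up as differing rows and a small p-value.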
Himani Mohan, Application of Simulation Techniques in Operations Analysis and Facility Design, December 4, 1996 (David Kelton, David Rogers, Jeffrey Camm)
Marie D. Lane, Capacity Planning in the Machine Tool Environment: A Case Study of Ahaus Tool & Engineering, Inc., August 15, 1996 (David Rogers, Jeffrey Camm, Amitabh Raturi)
Issues that affect resource and production planning in the machine-tool industry are discussed in this paper. One company and its particular operating characteristics are the focus of the paper. Suggestions are made for improving its production-planning system. These suggestions are based on a literature search covering the variety of production-planning systems and models that are available, as well as on this researcher's observations of the company's operating practices and discussions with the company's management. A goal-programming model was developed that can be used as a part of the production-planning process.
Laura Miser, Enrollment Projection Models at the University of Cincinnati, August 12, 1996 (Martin Levy, Jeffrey Camm, Corey Brewer)
Gregory A. Graman, The Effect of Variation in the Intermediate Delay on the Solution to the Multi-Echelon Inventory Problems with Newsboy-Style Results and Backorder Optimization, March 4, 1996 (David Rogers, Jeffrey Camm, Martin Levy)
The statistics of variance and standard deviation are used in many disciplines to provide a measure of the level of uncertainty that exists in a wide variety of situations and studies. The uncertainty of the intermediate delay in a multi-level inventory problem with newsboy-style results and an objective of minimizing backorders is examined. An expression for the standard deviation is derived, and the implementation of these results is presented.
Thomas Osterhus, Development and Testing of an Integrated Model of Conservation Behavior, July 1995 (Martin Levy, Jeffrey Camm)
Mary J. Frey, A Discussion and Analysis of Mathematical Modeling Techniques for the Location of Retail Establishments Using Geodemographic Data, Autumn Quarter 1994 (Jeffrey Camm, David Curry, Dennis Sweeney)
Bernard B. O'Bryan, An Evaluation of Software System Designs Using Data Flow Diagrams, Data Dictionaries and Mini-Specifications, 1993 (Roger Pick)
An evaluator for an experiment involving software engineering discusses his part in the project. The experiment had the evaluator -- without prior knowledge of the experiment (blind) -- rate data-flow diagrams, data dictionaries, and mini-specifications from software projects performed in teams. The 'blind' evaluations were then used to rate the effectiveness of using computer-assisted software engineering (CASE) technologies. Ten three-person teams, composed of undergraduate information-systems majors, independently developed a software product -- a Pascal pretty printer. Four teams used the same automated CASE software, while the remaining teams did not. The major results of this experiment were that (1) the teams that used the automated CASE software were able to code the programs in less time than those that did not, (2) all of the teams using automated CASE software were able to meet more of the requirements than those that did not use the software, and (3) the quality measures of the CASE-group designs were rated superior to those of the non-CASE-group designs. Also, some literature is reviewed to give the reader a point of reference on data-flow diagrams, data dictionaries, and CASE tools in general. Further, some biographical data on the 'blind' evaluator (the author) are also included.
Patricia Laber, ACL Knee Brace Design Study: Data Analysis, September 3, 1993 (Martin Levy, Jeffrey Camm)
Stephen E. Kelley, A Multiple Regression Model Used to Predict Indicated Airspeed, May 24, 1993 (Jeffrey Camm, Martin Levy, David Rogers)
William Milligan, Assessment of Collective Bargaining Issues with Sample Survey Methods - Design, August 3, 1992 (Martin Levy, Thomas Innis [Adjunct Associate Professor])
Karen Averbeck, Assessment of Collective Bargaining Issues with Sample Survey Methods - Analysis, August 3, 1992 (Martin Levy, Thomas Innis [Adjunct Associate Professor])
Deryck Lampe, On the Analysis of a Repeated Measures Design, June 11, 1992 (Martin Levy, David Rogers)
Jo A. Gallagher, Rating of Designs for a Study on Computer Assisted Software Engineering, July 17, 1991 (Roger Pick, Jeffrey Camm, Timothy Sale)
Barbara C. Zellner, Using Aggregation Methods to Solve Single-Commodity Transportation Problems, May 1990 (James Evans, David Rogers, Jeffrey Camm)
Many companies must routinely solve transportation problems. However, because of time and hardware constraints, these problems are often not solved to optimality. In many cases, the problems are not modeled. This paper examines single-commodity transportation problems solved to optimality using a personal computer. Aggregation is used to convert the original problem into a two-source transportation problem. After solving the modified problem to optimality, the solution is disaggregated and used as a starting solution to the original problem. The time to reach optimality using this two-step method is compared to the computational time of using a poor starting solution in the original problem and solving to optimality in one step. Various methods of aggregation are used and discussed.
Calvin Taylor, Multivariate Analyses of Telephone Company Data, August 2, 1989 (John Bryant, Martin Levy)
Yiching Lee, Bayesian Approach to Testing Equilibrium in a Segmented Line Model, 1987 (John Bryant, Martin Levy, Jeffrey Camm)
Steve Nielsen, Cost Reduction of Paper Manufacturing Through Quality Control of Pulp Production, December 11, 1987 (Jeffrey Camm, David Anderson)
Mark Kleinhenz, An Algorithm Using Aggregation to Solve a Large Scale Linear Programming Problem to Optimality, June 26, 1986 (James Evans, David Rogers, Jeffrey Camm)
As Lasdon has remarked, the solution of linear-programming problems is often hampered by size -- the problem is simply too big. The cost of supercomputers and technological limitations are two reasons for the difficulty in solving such problems. Supercomputers, such as those manufactured by Cray of Minnesota, can cost between five and 15 million dollars. But even supercomputers are limited in the size of problems that they can solve, although these limitations are continually being extended by technological advances. The need for solutions to large-scale linear-programming problems has inspired the development of several solution strategies. One such strategy is that of aggregation. After developing a smaller but similar problem to the original problem through the clustering of the latter's columns or rows, this aggregate problem is solved obtaining a solution that is 'close' to that of the original problem. Techniques have been described for reformulating the aggregate problem and improving its accuracy of solution. In this paper the technique of aggregation is applied as a step in an algorithm to solve a large-scale general linear-programming problem to optimality. The format of this paper is as follows. The experimental design and the method of problem generation are presented in Section I. Section II describes the algorithm and a sample problem is solved to illustrate the working of the algorithm. Section III details the computer software and hardware employed in the research project: computational results are presented as well. Section IV is an evaluation of the algorithm using the quality of the basis as the criterion. A related issue is addressed in Section V: the question of whether the use of even-weighting or the use of weighting provides better quality of solution after solving the aggregate problem. In the final section, conclusions are drawn and further work is suggested.
Joel I. Kahn, Analysis of Automatic Warehousing System Operating Policies, June 1981 (James Evans)
The subject of this study is operating policies for Automatic Storage/Retrieval warehousing systems. Four system design parameters are investigated: storage algorithms, retrieval algorithms, the level of storage utilization, and the level of crane utilization. The system investigated has a single crane that stores and retrieves product from either side of a storage aisle. The aisle contains two racks, each having 1,000 storage locations. Each storage location is capable of holding one pallet, and each pallet contains only one product type. The question to be addressed, relative to the system studied, is how the four parameters mentioned above affect crane travel distance. This study answers the question by developing a digital simulation model of the system being investigated. Where possible, the results of the simulation are compared with analytic and other simulation results in order to validate the model. The simulation can act as a design aid for future systems by allowing the designer to vary the four parameters mentioned and obtain their impact on system measures of performance. The type of system studied is common in Japanese industry and is beginning to appear in America. A perceived major barrier to more widespread utilization of these systems in America has been the inability to accurately predict the rate of return on these investments. Having a model that can accurately predict system operating characteristics will greatly aid the rate-of-return analysis process.
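A stripped-down Monte Carlo version of one piece of this question (my sketch, not the thesis's simulation model): at a given storage utilization, compare average crane travel for a storage request under a random-location policy and a closest-open-location policy, using Chebyshev travel because the crane moves horizontally and vertically at the same time. The rack dimensions and the assumption of a random occupancy snapshot are simplifications; the full simulation also captures how the storage policy reshapes rack occupancy over time.

```python
import numpy as np

rng = np.random.default_rng(3)
bays, levels = 50, 20                                   # 1,000 locations per rack side
xs, ys = np.meshgrid(np.arange(1, bays + 1), np.arange(1, levels + 1))
dist = np.maximum(xs, ys).ravel().astype(float)         # Chebyshev travel from the I/O corner,
                                                        # assuming equal horizontal and vertical rates

def mean_travel(utilization, policy, trials=2000):
    travels = []
    for _ in range(trials):
        occupied = rng.random(dist.size) < utilization  # random snapshot of the rack
        open_slots = np.flatnonzero(~occupied)
        if open_slots.size == 0:
            continue
        if policy == "random":
            slot = rng.choice(open_slots)               # random open location
        else:
            slot = open_slots[np.argmin(dist[open_slots])]  # closest open location to the I/O point
        travels.append(dist[slot])
    return np.mean(travels)

for u in (0.5, 0.7, 0.9):
    print(f"utilization {u:.0%}: random {mean_travel(u, 'random'):5.1f} "
          f"vs closest-open {mean_travel(u, 'closest'):5.1f}")
```

Even this crude comparison shows the interaction the study examines: the advantage of the closest-open-location rule shrinks as storage utilization rises and fewer nearby slots remain open.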
Sharon Hannig-Smith, An Airport Passenger Processing Simulation Model, January 1981 (James Evans)