This research is supported by the Technology Foundation STW, applied science division of NWO and the technology programme of the Dutch Ministry of Economic Affairs.
Note that most of this wiki page is adapted from Chapter 1 of the thesis of Kock.
Even small reductions in production capacity loss may yield significant financial benefits or savings. For this reason, industry puts great effort into reducing capacity losses due to disturbances such as machine downs, setups, and rework, for instance by using metrics such as the OEE. In addition to capacity loss, the various disturbances in a manufacturing system cause variability in processing. A high level of variability also adversely affects the throughput and flow time performance.
Several tools and performance indicators are in use for the performance analysis of manufacturing systems. Two parameters that are often used are throughput (the number of lots processed per time unit) and mean flow time (the average time a lot spends in the system). Both throughput and mean flow time are descriptive performance indicators; that is, they quantify the performance of the system. They do not explain why the performance is the way it is, nor do they assist in finding solutions to improve the performance. For that purpose, other indicators are used.
A well-known indicator aiding performance improvement is the overall equipment effectiveness (OEE) (Nakajima 1988). The SEMI-E10 and SEMI-E79 norms commonly used in the semiconductor industry are, for instance, based on the OEE. Recently, a revision of the OEE, denoted E, has been proposed by De Ron and Rooda (2005). The OEE quantifies mean time losses during processing. Losses are divided into availability losses, performance losses, and quality losses. The OEE readily gives insight into the causes of undesired behavior at workstations. It quantifies the production capacity losses, which relate to the utilization of the installed capacity. Note that the OEE does not quantify the variability in processing, which also affects manufacturing performance.
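For illustration, the OEE is commonly computed as the product of the three loss factors named above. The following minimal sketch uses hypothetical example figures; it is not taken from the cited references.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness as the product of three loss factors.

    availability: fraction of scheduled time the equipment is actually up
    performance:  actual processing rate relative to the ideal rate
    quality:      fraction of processed lots meeting specification
    """
    return availability * performance * quality


# Hypothetical figures: 90% availability, 95% performance, 99% quality.
print(oee(0.90, 0.95, 0.99))  # approximately 0.846
```

Because the factors multiply, even a workstation that scores well on each individual factor can show a substantially lower overall effectiveness.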
Workstation utilization and variability are the two basic parameters explaining the performance of a manufacturing system in terms of throughput and mean flow time. For a manufacturing system consisting of infinitely buffered workstations, an approximate expression due to Sakasegawa (1977) and Whitt (1993) is insightful for explaining the contribution of utilization and variability to the flow time performance (Hopp and Spearman 2001). In that equation, utilization is determined by dividing the mean effective process time by the mean interarrival time. The mean effective process time includes all capacity losses due to the various outages such as machine breakdowns and setup time. Similarly, the coefficient of variation of the effective process time is used in the same equation; it results from the combination of the processing itself and the various outages.
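The approximation referred to above can be sketched as follows for a single workstation of m parallel machines, using the notation of Hopp and Spearman (2001); the parameter values in the example are hypothetical.

```python
from math import sqrt


def flow_time(ca2: float, ce2: float, te: float, ta: float, m: int = 1) -> float:
    """Approximate mean flow time of a G/G/m workstation (Sakasegawa 1977).

    ca2: squared coefficient of variation of the interarrival times
    ce2: squared coefficient of variation of the effective process times
    te:  mean effective process time
    ta:  mean interarrival time
    m:   number of parallel machines in the workstation
    """
    u = te / (m * ta)  # utilization: fraction of installed capacity claimed
    assert 0.0 < u < 1.0, "approximation requires a stable (non-saturated) station"
    waiting = ((ca2 + ce2) / 2.0) * (u ** (sqrt(2.0 * (m + 1)) - 1.0) / (m * (1.0 - u))) * te
    return waiting + te


# Single machine at 80% utilization with exponential-like variability
# (ca2 = ce2 = 1); all values hypothetical.
print(flow_time(ca2=1.0, ce2=1.0, te=1.0, ta=1.25))  # approximately 5.0
```

The example illustrates the familiar behavior: as utilization u approaches 1, the waiting term grows without bound, and higher variability (larger ca2 or ce2) inflates the flow time proportionally.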
Once the performance of a system has been analyzed, one may want to improve it. The performance metrics described above do not make it possible to predict the impact of changes in the system on system performance. Predicting changes in system performance may be difficult due to the large number of processes and the interactions between processes in the manufacturing network. To understand the impact of changes in the system configuration, queueing models are used.
For the performance prediction of manufacturing systems, typically discrete-event simulation models or analytical queueing models are used. Both model types have their own specific advantages and disadvantages. Analytical models are computationally fast, but it is difficult to include many shop-floor realities in the model. As a result, analytical queueing network models are rarely used in the manufacturing industry; the gap between model assumptions and shop-floor reality is often considered too large (Fowler and Rose 2004, Shanthikumar et al. 2007). If one were able to aggregate the shop-floor realities and the processing into a single distribution for each workstation, and then able to actually measure this aggregate distribution from simple shop-floor events such as lot arrivals and departures, this might provide an opportunity to bridge the gap. Aggregating shop-floor realities into a single workstation distribution would also be advantageous for simulation models: the model would require less input data and become computationally cheaper, since only one distribution per workstation is needed. The STW project “Effective process time” aims to provide such an aggregation method.
The concept of effective process time (EPT) was first introduced by Hopp and Spearman (2001). They define the EPT as the time spent by a lot at a workstation from a logistical point of view. Thus, all time during which a lot claims machine capacity is included in the effective process time. Hopp and Spearman show how the EPT of a workstation can be computed, given distribution parameters regarding the clean process time and preemptive and non-preemptive outages, expressed in, for instance, the mean busy time between failures, the mean time to repair, and the setup time. Other outages are treated as either preemptive or non-preemptive outages. The notion of combining all individual influences on processing into a single distribution is also used in the context of sample path analysis (Dallery and Gershwin 1992, Buzacott and Shanthikumar 1993, Rossetti and Clark 2003). However, in many practical cases, the outages may not all be quantifiable. Jacobs et al. (2001, 2003) presented an algorithm to obtain effective process time distributions for infinitely buffered workstations from simple lot arrivals and departures. Their method does not require the quantification of the individual contributing factors. The motivation of their work was to arrive at a measurable metric for variability at a workstation (variance in processing) that can furthermore be used to build abstract but accurate aggregate models. They used closed-form queueing equations as well as simulation to predict the flow time, feeding their EPT-based models with the first two moments of the effective process time distribution. Jacobs, Van Bakel, Etman, and Rooda (2006) extended the method to batch machines. Several M.Sc. students also contributed to these initial efforts: Van Bakel (2001), Rooney (2002), Wullems (2002), and Kock (2003). Wullems (2002) and Kock (2003), for instance, started to work on the EPT for finitely instead of infinitely buffered workstations.
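The two computations mentioned above can be illustrated with a small sketch. The first function follows the Hopp and Spearman formulas for a machine subject only to preemptive failures; the second is a simplified single-machine rendering of the arrival/departure-based idea behind the Jacobs et al. algorithm (their actual algorithm also covers multi-machine and batch workstations). All numerical values in the example are hypothetical.

```python
def ept_preemptive(t0, c0sq, mf, mr, crsq):
    """Mean and squared coefficient of variation (SCV) of the EPT when the
    only disturbance is preemptive machine failure (Hopp and Spearman 2001).

    t0, c0sq: mean and SCV of the clean (natural) process time
    mf:       mean busy time between failures
    mr, crsq: mean and SCV of the repair time
    """
    A = mf / (mf + mr)  # availability
    te = t0 / A         # failures inflate the mean effective process time
    cesq = c0sq + (1.0 + crsq) * A * (1.0 - A) * mr / t0
    return te, cesq


def ept_realizations(arrivals, departures):
    """Reconstruct EPT realizations of a single-machine, infinitely buffered
    workstation from lot arrival and departure times.

    A lot's EPT starts at the moment the machine could have started it (its
    arrival, or the previous lot's departure, whichever is later) and ends at
    its departure, so all outages in between are absorbed into the EPT.
    """
    epts = []
    prev_departure = float("-inf")
    for a, d in zip(arrivals, departures):
        start = max(a, prev_departure)  # earliest moment this lot could start
        epts.append(d - start)
        prev_departure = d
    return epts


# Hypothetical numbers: clean process time 1.0 (SCV 0.25), mean busy time
# between failures 9.0, mean repair time 1.0 (SCV 1.0).
print(ept_preemptive(1.0, 0.25, 9.0, 1.0, 1.0))

# Hypothetical event log: the second lot waits behind the first.
print(ept_realizations([0.0, 1.0, 5.0], [2.0, 4.0, 7.0]))
```

The second function makes the key practical point of the Jacobs et al. approach concrete: the EPT moments are estimated directly from the event log, without measuring failures, setups, or any other contributing factor individually.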
Finitely buffered manufacturing lines are encountered in, among other industries, automotive manufacturing. Following up on this initial work, the Systems Engineering group and the Stochastic Operations Research group, both of the Eindhoven University of Technology, initiated an STW project on the effective process time in 2004. The goal of the project was to develop an aggregate modeling methodology that enables one to build simple yet accurate models of manufacturing networks using operational data such as arrival and departure events, without the need to characterize all contributing disturbances and shop-floor realities. Two parts can be distinguished in the project: