Improving Factory Scheduling with Statistical Analysis of Automatically Calculated Throughput

Holland Smith – Program Manager Analytics and Equipment Integration, Intelligent Manufacturing Systems

Cabe Nicksic – Director, Database Architecture, Intelligent Manufacturing Systems

Abstract

Optimized factory scheduling is a powerful technique for improving automated fab operations. Scheduling is generally more sophisticated and capable than older rule-based dispatch logic for directing the minute-by-minute processing priorities of a semiconductor factory, but it demands far greater computational power and a higher-fidelity operations digital twin. One of the most important pieces of data a factory scheduler uses is throughput – the processing time required for a tool to run a specified recipe. Throughput data sets were formerly compiled from manual stopwatch studies, but modern fab scale and volume all but guarantee that a comprehensive data set must be calculated automatically from process tool event data. A factory scheduler must always have some estimate of throughput when optimizing a schedule, and automatic calculation ensures the data set is comprehensive. However, calculating throughput automatically from tool events introduces many potential data quality issues that can be difficult to detect systematically. In this paper we describe a statistical method for analyzing throughput data quality. The method exposes common sources of noise in throughput data and underscores the importance of interpreting tool events correctly.
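
As a rough illustration of the kind of automatic calculation the abstract describes, the sketch below pairs tool START/END events into per-recipe processing times and applies a simple robust statistic (median absolute deviation) to flag suspicious runs. The event schema, names, and threshold here are illustrative assumptions for this sketch, not the authors' actual method or data model.

from statistics import median

# Hypothetical event records: (tool, recipe, event, timestamp_seconds).
# A real fab would pull these from the tool's event stream; this flat
# schema is an assumption made for illustration only.
EVENTS = [
    ("TOOL_A", "RCP_1", "START", 0.0),
    ("TOOL_A", "RCP_1", "END", 312.0),
    ("TOOL_A", "RCP_1", "START", 400.0),
    ("TOOL_A", "RCP_1", "END", 705.0),
    ("TOOL_A", "RCP_1", "START", 800.0),
    ("TOOL_A", "RCP_1", "END", 2310.0),  # suspiciously long: likely mid-run idle
]

def processing_times(events):
    """Pair START/END events per (tool, recipe) into run durations."""
    open_runs, durations = {}, {}
    for tool, recipe, event, ts in sorted(events, key=lambda e: e[3]):
        key = (tool, recipe)
        if event == "START":
            open_runs[key] = ts
        elif event == "END" and key in open_runs:
            durations.setdefault(key, []).append(ts - open_runs.pop(key))
    return durations

def flag_outliers(runs, k=3.0):
    """Flag durations more than k scaled-MAD units from the median."""
    med = median(runs)
    mad = median(abs(d - med) for d in runs) or 1.0  # avoid divide-by-zero
    return [d for d in runs if abs(d - med) / (1.4826 * mad) > k]

for key, runs in processing_times(EVENTS).items():
    print(key, "durations:", runs, "flagged:", flag_outliers(runs))

Running this flags the 1510-second run as an outlier against the two roughly 300-second runs, the kind of event-interpretation noise a scheduler's throughput data set needs to detect systematically.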

Look for Holland's presentation at the Advanced Semiconductor Manufacturing Conference, May 4 – 7, in Saratoga Springs, NY.