Next-Generation Fault Detection Improves Quality and Reduces Cost

James Moyne, Jimmy Iskandar, and Michael Armacost

Fault detection (FD) is pervasive in the industry and is now a key capability in the ongoing effort to improve quality and reduce cost. The next generation of FD will significantly reduce setup times, improve detection with fewer false alarms, and take advantage of big data capabilities to decrease response times and increase depth of analysis.

It’s hard to believe, but fault detection (FD) has been a part of our industry for over 20 years and an integral component of microelectronics manufacturing as a whole for at least the past decade. Manufacturers rely on FD to minimize scrap, improve product quality, detect quality degradation, and determine when equipment may need to be shut down for maintenance, among other benefits. Today’s typical fab employs some form of FD on almost all processes.

But while FD provides significant benefits, there are also cost and performance issues associated with current FD deployment and operation that present challenges to both users and suppliers. For example, Paul Ewing, an FD deployment expert and part of Applied Materials' FD advanced services deployment team, said, “It often takes up to two weeks to correctly configure univariate FD for a process tool, including collecting data, refining limits, and correlating limits violations to actual events of importance.” Moreover, a given FD model often produces too many false alarms or misses real excursions.

The problem is illustrated in figure 1. With just this single sensor trace, the FD engineer must investigate several features and develop multiple models, each with its own limits. Across an entire fab, there are often thousands, if not millions, of models and limits to manage.[1,2] Clearly, an opportunity exists to improve FD setup, execution, and maintenance capabilities.

Figure 1. Illustration of the difficulties associated with manually understanding and configuring models and limits for a single FD trace. This exercise must be repeated thousands of times across a typical fab.

In determining how to address the problem, it is important to consider the human factor (see related article “Human Factors in Automation Systems” on page 2 of this issue of Nanochip Fab Solutions). Much of the cost of FD deployment results from the time and occasional error associated with humans executing repetitive tasks that could benefit from automation; however, it is important to continue to harness human expertise in process, equipment, and sensor knowledge. FD improvements should therefore strike the optimal cost-benefit balance in the level of automation.

Applied Materials has been researching this problem for several years. We have spoken to customers and FD deployment experts, collected statistics on the benefits and costs of FD deployment, researched new techniques in fault detection and classification in microelectronics and other industries, and innovated as needed to address specific issues.

As a result, several improvements to our FD capability are being implemented that will allow us to address key issues and provide a higher-quality FD solution. Some of the more noteworthy improvements are summarized in table 1, and are collectively part of the Applied Materials Next-Generation Fault Detection and Classification (NG-FDC) solution. A few of these features are explored in this article, with references provided for further reading.

Table 1. Improved FD features included in the Applied Materials NG-FDC solution.

NEXT-GENERATION FAULT DETECTION AND CLASSIFICATION CAPABILITIES: A DEEPER LOOK

Automated, expert-driven trace transition detection and feature selection, with ranking

If we analyze the trace of figure 1, we see that the FD modeling engineer must complete a number of tasks in order to provide a high-quality FD solution for this single trace. The engineer must first define the regions that need to be monitored and their boundaries, denoted as “steps” in figure 1. They must then decide whether traces should be aligned according to specific region boundaries before analysis, or whether the lack of alignment is actually a sought-after anomaly. They must also determine which FD model or models are best suited to assess a fault in a particular region.

For example, “max” and “range” are selected in step 4, while “mean” and “sigma” are the methods of choice in step 12. Warning, alarm, and control limits must be applied to these models. The choice of boundaries, alignment, models, and limits is usually derived from analysis of multiple traces of the same sensor, combined with process and equipment knowledge as to which boundaries, steps, and features are important. NG-FDC features will partially automate this process while ensuring that process and equipment knowledge is incorporated into the final FD model set.
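
To make the univariate case concrete, the sketch below shows how per-region features such as max, range, mean, and sigma might be extracted from a trace and checked against warning and alarm limits. It is a minimal Python illustration; the region indices, feature set, and limit values are hypothetical, not Applied Materials code.

```python
import numpy as np

# Hypothetical per-region feature extractors of the kind described above.
FEATURES = {
    "max":   np.max,
    "range": lambda x: np.max(x) - np.min(x),
    "mean":  np.mean,
    "sigma": np.std,
}

def check_region(trace, start, end, feature, warn, alarm):
    """Extract one feature from trace[start:end] and classify it against
    warning and alarm limits, each given as a (low, high) pair."""
    value = FEATURES[feature](trace[start:end])
    if not (alarm[0] <= value <= alarm[1]):
        return value, "ALARM"
    if not (warn[0] <= value <= warn[1]):
        return value, "WARNING"
    return value, "OK"

# Example: monitor "range" in an early step and "mean" in a later one.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(10, 0.2, 50), rng.normal(25, 0.5, 80)])
print(check_region(trace, 0, 50, "range", warn=(0, 1.0), alarm=(0, 1.5)))
print(check_region(trace, 50, 130, "mean", warn=(24, 26), alarm=(23, 27)))
```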

Figure 2. Illustration of the moving window approach to identify trace boundaries: (a) A moving window is used with a difference function to capture transitions; (b) The size of the window is determined by signal stability, noise, and other properties so as to generate the best difference function; (c) Normalization of the signal is needed because multiple traces might have different values and value-change profiles; and (d) The transition points are mapped back onto the original trace (here utilizing color for demarcations) to identify regions and boundaries for analysis.

As illustrated in figure 2, NG-FDC will utilize techniques such as moving windows and wavelets to determine region boundaries.[3] Once the regions are identified, several techniques can be used to determine which features should be extracted and modeled. One is a “Monte Carlo” approach, where existing model types such as mean, standard deviation, and slope are applied to the region to determine the level of variability of the feature they capture. The model types are then ranked. More complex techniques such as binning and structural feature extraction can also be employed.
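
A minimal sketch of the moving-window idea of figure 2 follows; the window size, min-max normalization, and peak threshold are illustrative assumptions rather than the NG-FDC algorithm.

```python
import numpy as np

def find_transitions(trace, window=5, threshold=0.2):
    """Locate candidate region boundaries in a sensor trace.
    The trace is min-max normalized so traces with different value
    ranges are comparable (fig. 2c), then a moving-window difference
    function (mean of the leading window minus mean of the trailing
    window, fig. 2a) is thresholded to flag transition points."""
    x = np.asarray(trace, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    diff = np.zeros(len(x))
    for i in range(window, len(x) - window):
        diff[i] = abs(x[i:i + window].mean() - x[i - window:i].mean())
    # Keep local maxima of the difference function above the threshold;
    # these indices map back onto the original trace (fig. 2d).
    return [i for i in range(1, len(x) - 1)
            if diff[i] > threshold
            and diff[i] >= diff[i - 1] and diff[i] >= diff[i + 1]]
```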

The output is a list of features, ranked by the signal-to-noise ratio that monitoring each feature would capture. Leveraging this automation, the expert then selects the region boundaries, plus the features to model within those boundaries, incorporating process and equipment knowledge. In this way model quality is ensured from both an analytical and a process-and-equipment perspective, while model setup times are significantly reduced.
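
One simple way to realize such a ranking is sketched below, under the assumption that a feature's signal-to-noise can be proxied by its run-to-run variability relative to its typical magnitude; the production ranking is more elaborate.

```python
import numpy as np

def slope(x):
    # Least-squares slope over the region, one of the candidate model types.
    return np.polyfit(np.arange(len(x)), x, 1)[0]

CANDIDATES = {"mean": np.mean, "sigma": np.std, "slope": slope}

def rank_features(region_runs):
    """Rank candidate feature models for one trace region.
    region_runs: list of 1-D arrays, the same region extracted from many
    runs of the process. Each candidate is scored by how much its value
    varies run-to-run relative to its typical magnitude (a crude
    signal-to-noise proxy), then sorted from highest to lowest."""
    scores = {}
    for name, fn in CANDIDATES.items():
        values = np.array([fn(run) for run in region_runs])
        scores[name] = values.std() / (abs(values.mean()) + 1e-12)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```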

Trace ranking and move to “supervised” models

In traditional FD all sensors and regions are candidates for monitoring, which results in an unwieldy number of models, many of which are relatively useless. It is up to the expert to pare down this list, an effort that can be time-consuming and error-prone. This modeling process is called “unsupervised” because models are developed without directly correlating trace data to quality data (such as metrology or yield).

With NG-FDC there is often an opportunity to reduce the set of features that need to be extracted by determining which sensors and trace regions are associated with a particular issue that needs to be monitored, such as a metrology or yield excursion. The variability in the trace information is analyzed with respect to quality variability. Sensors, sensor trace regions, and features can then be ranked according to their impact on quality variability.

Techniques such as guard-band statistics and hidden Markov models (HMMs) are useful for determining these critical sensors and regions. This process of incorporating quality or other “output” data into the determination of sensors, critical features, models, and model limits is part of the NG-FDC move from “unsupervised” to “supervised” modeling techniques.
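
As a simplified stand-in for those techniques, the sketch below ranks features by their absolute Pearson correlation with a quality measurement; guard-band statistics or HMMs would replace this scoring in practice.

```python
import numpy as np

def rank_by_quality_impact(feature_matrix, quality, names):
    """Rank extracted features by their association with quality data.
    feature_matrix: (n_runs, n_features) per-run feature values.
    quality: (n_runs,) quality values, e.g., a metrology reading per run.
    Returns (name, score) pairs sorted by |Pearson correlation|."""
    q = (quality - quality.mean()) / (quality.std() + 1e-12)
    ranked = []
    for j, name in enumerate(names):
        f = feature_matrix[:, j]
        f = (f - f.mean()) / (f.std() + 1e-12)
        ranked.append((name, float(abs(np.mean(f * q)))))
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)
```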

Model management

A key issue in FD performance over time is the ability of models to continue to accurately reflect the operation of the tool, minimizing false positives (i.e., false alarms) and false negatives (i.e., missed excursion detections) across preventive maintenance (PM) cycles and other events that alter the state of the tool.

Fortunately, techniques are being developed for advanced capabilities like virtual metrology that can be leveraged back into FD model maintenance.[4] These “supervised” techniques allow feedback of information such as false positives and false negatives to be used for model optimization. They can provide decision points on when to adjust models or limits in response to an offset change in the tool or process, and when to rebuild models from the ground up. Additionally, they enable troubleshooting of faults to determine the critical sensors associated with a particular excursion, as illustrated in figure 3.
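
Such decision points might look something like the following sketch; the feedback inputs come from the discussion above, but the thresholds and the adjust-versus-rebuild rule are illustrative assumptions.

```python
def maintenance_action(false_pos_rate, false_neg_rate, mean_shift_sigmas,
                       fp_budget=0.01, fn_budget=0.005, shift_limit=1.0):
    """Decide how to maintain an FD model given feedback data.
    false_pos_rate / false_neg_rate: confirmed alarm outcomes fed back
    from engineers or metrology. mean_shift_sigmas: observed offset of
    the monitored feature (in sigmas) since the last PM or model build."""
    if false_pos_rate <= fp_budget and false_neg_rate <= fn_budget:
        return "keep model as-is"
    if abs(mean_shift_sigmas) > shift_limit and false_neg_rate <= fn_budget:
        # A simple offset in tool or process state: re-center the limits.
        return "adjust limits for offset"
    # Alarm quality has degraded in a way limit adjustment cannot fix.
    return "rebuild model from recent data"
```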

Figure 3. Illustration of ability to determine critical sensors associated with a particular excursion, leveraging techniques originally developed for predictive technologies (virtual metrology and predictive maintenance).

Incorporating revolutionary capabilities

While a large part of NG-FDC is focused on the automation and improvement of traditional FD features, a parallel focus is to incorporate new and innovative features to make NG-FDC more effective and easier to use, and to endow it with additional capabilities. For example, a wafer (or panel) topography prediction capability is being developed that utilizes FD information collected for a process to predict wafer topography (e.g., film thickness). As illustrated in figure 4, process sensor value or recipe set point adjustments can be simulated to determine the sensitivity of the predicted topography to particular parameters. Utilizing this capability, product quality and yield degradation from topographical issues such as nonuniformity can be reduced.[5]
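
In spirit, the simulation step amounts to perturbing an input of a fitted topography model and observing the predicted change. The toy sketch below uses a per-site linear model with random coefficients purely for illustration; the method of [5] draws on richer system-state information.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-site linear model: thickness at each measurement site is a
# nominal value plus a linear response to a few process parameters.
n_sites, n_params = 49, 3
b0 = rng.normal(100.0, 0.5, n_sites)           # nominal thickness map (nm)
B = rng.normal(0.0, 0.1, (n_sites, n_params))  # fitted sensitivities

def predict_topography(params):
    return b0 + B @ params

# Simulate a recipe set point adjustment on parameter 2 and observe the
# change in predicted nonuniformity (std of the site map).
nominal = np.array([1.0, 0.5, 2.0])
baseline = predict_topography(nominal)
adjusted = predict_topography(nominal + np.array([0.0, 0.1, 0.0]))
print("nonuniformity before/after:", baseline.std(), adjusted.std())
```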

Figure 4. Illustration of topography prediction using FD information, and the ability to adjust key parameters to simulate their impact on topography and determine the optimal settings for a desired topography.

Another example of FD innovation that can be incorporated into NG-FDC is a technique developed to determine which sensor traces are related to a target sensor being analyzed. The correlation is determined based on (1) the locations where sensor values change and (2) the change in trace signature with respect to the target sensor. This technique is useful for identifying sensors or sensor groups that may be better suited to monitor a particular fault, thereby providing stronger signals and insight into fault classification and root cause.[3]
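
A plausible realization of this scoring is sketched below: it combines the two criteria by correlating the candidate's local signature with the target's around the target's transition locations. The windowing and averaging choices are assumptions, not the published algorithm of [3].

```python
import numpy as np

def relatedness(target, candidate, transitions, window=5):
    """Score how related a candidate sensor trace is to a target trace,
    based on (1) whether the candidate's values change at the target's
    transition locations and (2) how its local signature around those
    locations correlates with the target's. `transitions` holds boundary
    indices found on the target trace (e.g., by find_transitions above)."""
    target = np.asarray(target, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    scores = []
    for t in transitions:
        lo, hi = max(0, t - window), min(len(target), t + window)
        a, b = target[lo:hi], candidate[lo:hi]
        if a.std() < 1e-9 or b.std() < 1e-9:
            scores.append(0.0)   # candidate does not change at this location
            continue
        scores.append(abs(np.corrcoef(a, b)[0, 1]))
    return float(np.mean(scores)) if scores else 0.0
```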

SUPPORT FOR BIG DATA FRAMEWORKS AND CAPABILITIES

The big data revolution provides us with opportunities to leverage improvements in the “five V’s” of big data: volume, velocity, variety (merging of data sources), veracity (data quality), and value (algorithms).

NG-FDC systems can leverage big data ecosystems such as Hadoop to provide FDC advancements from each of the five “V” perspectives.[6] Data “volume” improvements support improved models that mine larger quantities of data in both depth (archive length) and breadth (number of sensors). Improvements in the “velocity” of data collection and analysis allow for finer granularity and increased complexity of analysis without increased development time. Improvements in data merging (“variety”) allow direct access to quality data (e.g., yield and metrology) alongside trace and FD output data, facilitating the move from “unsupervised” to “supervised” modeling in NG-FDC. Finally, data quality improvements (“veracity”) will support more sophisticated modeling techniques (“value”). These will reduce the occurrence of false positives and false negatives in NG-FDC systems, and pave the way for more complex predictive solutions, such as feeding back yield predictions for fab-wide control to meet, or even raise, yield targets.
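
As a small illustration of the “variety” point, the sketch below joins per-wafer FD features with metrology data using PySpark, a common engine in Hadoop ecosystems; the paths, table layout, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ngfdc-variety").getOrCreate()

# Hypothetical fab data stores: per-wafer FD features and metrology.
fd = spark.read.parquet("hdfs:///fab/fd_features")
met = spark.read.parquet("hdfs:///fab/metrology")

# Direct access to quality data alongside FD output is what enables the
# "supervised" modeling described earlier.
joined = fd.join(met, on=["lot_id", "wafer_id"], how="inner")
joined.groupBy("tool_id").avg("thickness_nm").show()
```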

THE FUTURE IS NEAR

Each of the features included in NG-FDC will improve the performance and lower the cost of an FD solution by (a) decreasing setup time and reducing false positives, (b) simplifying management of models and limits, and (c) expanding capabilities. Collectively they provide a foundation not only for NG-FDC, but also for key emerging technologies that rely on FDC, such as virtual metrology and predictive maintenance.

For additional information, contact michael_d_armacost@amat.com.

Acknowledgments: The authors would like to thank Brad Schulze, Deepak Sharma, Kommisetti Subrahmanyam and Jianping Zou for their support in the development of this article.

[1] This fact was underscored at the 2015 Integrated Measurement Association (IMA) APC Council meeting, attended by key users. Three main points of consensus emerged from this meeting: (1) FD limits management is a top concern of fab APC managers; (2) there is a need for some level of automation in the FD model-building process, while retaining process and equipment expertise; and (3) no comprehensive solution is currently available.
[2] “IMA APC Council Meeting Minutes,” October 2015. See www.apcconference.com.
[3] K. V. R. Subrahmanyam, J. Zou, J. Iskandar, and R. Patz, “Automatic Trace Data Windowing and Determining Level of Correlation of Sensor Traces to Obtain a Better Understanding of the Nature of Faults in FD Systems,” APC Conference XXVIII, Austin, Texas, October 2015.
[4] J. Iskandar and M. Hsu, “Maintenance of Virtual Metrology Models,” APC Conference XXVIII, Austin, Texas, October 2015.
[5] J. Iskandar, C. Jiang, M. Armacost, and B. Schulze, “Topography Predictions Using System State Information,” United States provisional patent application, filed September 2015.
[6] J. Moyne, J. Samantaray, and M. Armacost, “Big Data Emergence in Semiconductor Manufacturing Advanced Process Control,” Proceedings of the 26th Annual Advanced Semiconductor Manufacturing Conference (ASMC 2015), Saratoga Springs, New York, May 2015.