
Feature Story: Evolution or Revolution? Big Data Creates Opportunities

David Lammers

"The fab is continually becoming more data driven and requirements for data volumes, communication speeds, quality, merging, and usability need to be understood and quantified.” — ITRS Factory Integration chapter.

At first glance, the move to big data analytics might appear to be just another evolutionary change in the semiconductor industry. After all, engineers have always measured, and techniques such as fault detection and classification (FDC), statistical process control (SPC), and run-to-run (R2R) control are certainly portents of what is to come.

But the move to big data, at the very least, is an inflection point, if not an outright revolution, as the industry shifts from reactive techniques to predictive approaches that involve major changes in how data is gathered, stored, and—most importantly—managed. Engineering groups at equipment suppliers and semiconductor manufacturers will need to share data to an unprecedented degree, experts agree, though how to do so remains a work in progress. As the University of Michigan’s James Moyne puts it, engineers will need to learn to “drill sideways,” with transparency between applications as data is mined to support these predictive analytics.

The Factory Integration chapter of the International Technology Roadmap for Semiconductors (ITRS)—available at http://www.itrs.net/—is the product of a technology working group headed by Moyne and cleverly divvies up the challenges facing the big data evolution/revolution among five words beginning with the letter V: volume, velocity, variety, veracity and value. (See more on this topic in Moyne’s article “The Move to Big Data and Predictive Analytics in Semiconductor Manufacturing” elsewhere in this issue.)

As more sensors are added to tools and data is collected at higher rates, Tim Miller, deputy director of equipment solutions at GLOBALFOUNDRIES, said, “I strongly believe there will be an absolute explosion in the volume of data. Of all the five Vs, the volume one is the most problematic, because that potential data explosion could drive up costs.”

In an interview following his keynote speech at the 2014 Advanced Process Control (APC) Conference in Ann Arbor, Michigan, Miller said one way to control data storage costs is to make greater use of Hadoop database technology, which allows greater compression and faster access to large data volumes than today’s relational databases. “When we start talking about using high-end hardware for a petabyte[1] of storage, that becomes expensive. We have to find ways to get lower cost for that storage. If we can achieve a 10X reduction in hardware costs with Hadoop, we can’t ignore it,” Miller said.

“To be competitive, information is key. The bottom line is that we need faster cycle times on data-driven decision-making. People need data faster. If I have to wait two months, instead of a day, the result can be a multimillion dollar loss,” the GLOBALFOUNDRIES executive said.

Hadoop was first developed at Yahoo by Doug Cutting, now chief architect at Cloudera Inc. in Palo Alto, California, which integrates the Hadoop Distributed File System with various other software tools into a commercial software product based on open source technology. Cutting recognized the importance of a December 2004 paper published by Google Labs on the MapReduce algorithm, which allows very large scale computations to be easily parallelized across large clusters of servers using low-cost disk storage. Hadoop—named after Cutting’s son’s stuffed elephant toy—is also adept at dealing with various forms of data, such as the text, video, and photos seen on Facebook and elsewhere, said John Howey, a sales engineer at Cloudera. (Howey was part of an all-day workshop on big data held at the October APC Conference).
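For readers unfamiliar with the programming model, the sketch below shows the MapReduce idea in miniature, written as a pair of Hadoop Streaming scripts (which read standard input and write standard output). The sensor-log format and the averaging task are invented for illustration; they are not drawn from any fab described in this article.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper
# Hypothetical input record format: tool_id,sensor_name,value
import sys

for line in sys.stdin:
    try:
        tool_id, sensor, value = line.strip().split(",")
        # Emit key<TAB>value pairs; Hadoop shuffles and sorts by key before the reduce phase.
        print(f"{tool_id}.{sensor}\t{value}")
    except ValueError:
        continue  # skip malformed records
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming reducer: average each sensor's readings per tool
import sys

current_key, total, count = None, 0.0, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{total / count:.4f}")
        current_key, total, count = key, 0.0, 0
    total += float(value)
    count += 1
if current_key is not None:
    print(f"{current_key}\t{total / count:.4f}")
```

Hadoop runs many copies of the mapper in parallel across the cluster, sorts the intermediate key-value pairs, and feeds each key's group to a reducer, which is what lets the computation scale out across low-cost disks.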

Miller said Hadoop “no doubt, will play a part in the future. There is a certain level of immaturity to Hadoop, and we need to let things play out a little bit more in the security area. But in my opinion, the time to invest is now.”

James Moyne, PhD, associate research scientist in Mechanical Engineering at the University of Michigan.
Tim Miller is deputy director of equipment solutions at GLOBALFOUNDRIES.

“BRASS KNUCKLES” DATA ANALYSIS

According to Sanjiv Mittal, vice president of the Services Technology Group at Applied Global Services, there are “myriad things we can do with this [big data analysis] approach which can make the life of the engineer much more productive.”

Reducing the number of test wafers required to prove that a post-maintenance tool is healthy is one possibility that “can take out a lot of cost.” Another potential benefit is chamber matching, wherein a fab owning 30 of the same tools can match the chambers within each tool, and make sure the tools all run the same.
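As a rough illustration of what chamber matching can involve, the following sketch flags chambers whose average reading on a key sensor drifts away from the fleet. The column names and the three-sigma rule are assumptions made for the example, not a description of any vendor's tooling.

```python
# Minimal chamber-matching sketch (illustrative only): flag chambers whose mean
# reading on a chosen sensor drifts from the fleet average.
import pandas as pd

def flag_mismatched_chambers(runs: pd.DataFrame, sensor: str, n_sigmas: float = 3.0) -> pd.DataFrame:
    """runs: one row per wafer run, with hypothetical columns ['tool', 'chamber', sensor]."""
    per_chamber = runs.groupby(["tool", "chamber"])[sensor].mean().rename("chamber_mean")
    fleet_mean = per_chamber.mean()
    fleet_std = per_chamber.std()
    out = per_chamber.reset_index()
    out["offset_sigma"] = (out["chamber_mean"] - fleet_mean) / fleet_std
    out["flagged"] = out["offset_sigma"].abs() > n_sigmas
    return out.sort_values("offset_sigma", ascending=False)
```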

Big data brings big challenges, Mittal said. “Whether you call it predictive or not, it is a technical challenge to be able to leverage the sensors in the tool to be able to keep the tools in control. Knowing when some component is deteriorating, either from a component level or from a process level, is difficult. People have a tough time doing it.”

Suppliers must know what to look for in the masses of data to avoid false positives. “Engineers don’t like false alarms, and if they do get a significant number of them, they can become desensitized,” Mittal said, calling data analysis “brass knuckles kind of stuff.”

Mittal said that to roll out a new technology node, semiconductor makers need real-time monitoring, “controlling a tool to where it needs to be controlled. It is all about yield and productivity, avoiding the scrapping of wafers and reducing the risk of low-yield wafers. If we can get virtual metrology right and become more predictive about when a part might begin to fail, then the semiconductor makers can start to plan for maintenance, and get the parts and people in place” before unscheduled downtime occurs.
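Virtual metrology of the kind Mittal mentions comes down to predicting a measurement from tool-sensor data so that most wafers never need a physical measurement. The sketch below trains a simple regression on synthetic data; the feature set, model choice, and numbers are illustrative assumptions, not any company's production model.

```python
# Illustrative virtual-metrology sketch: predict a measured film thickness from
# summarized tool-sensor features (all data here is synthetic stand-in data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # e.g., per-run means of RF power, pressure, gas flows
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)   # stand-in for measured thickness

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("MAE on held-out measured wafers:", mean_absolute_error(y_test, model.predict(X_test)))
```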

DIVIDED ROLES, SHARED RESPONSIBILITIES

Predictive maintenance, virtual metrology, and predictive scheduling are all part of what one executive called “a new business model at Applied Materials,” where the technology teams that develop new tools work hand-in-glove with data experts to create analytical models used to help keep tools up and running in the fab (see figure 1). These are part of enhanced outcome-based service agreements, often with specific goals for improving tool uptime and overall equipment effectiveness (OEE), and reducing variability.

“This is not standard maintenance. This is an engineering function in the fab, which is why some customers may say this is part of their core competency,” Mittal said. “Our view is that there is going to be a confluence, where we work together with our customers at being predictive, at knowing when tools change.”

John Scoville, senior director of application engineering at Applied Global Services, spoke at the APC Conference about how Applied is sharing responsibilities with semiconductor manufacturers, who often fear that data-gathering by vendors could result in leakage of their intellectual property, including process recipes (see figure 2). Scoville said Applied sees its role to be in the realm of equipment health monitoring (EHM) and improving OEE, while the chip makers concentrate on developing their processes and improving yields.

Figure 1. Analytics Evolution: Moving from today’s reactive forms of data analysis to predictive analytics, such as predictive maintenance (PdM), virtual metrology (VM), equipment health monitoring (EHM) and yield prediction (YP), can improve yields and throughputs. (Source: Applied Materials; 2014 APC Conference.)

Figure 2. Protecting IP: Service providers (blue-colored at top of graphic) can combine their domain knowledge of the tool and its maintenance requirements with the user-maintained proprietary functions (green colored). A division of labor is needed to protect IP and accomplish predictive analytics. (Source: Applied Materials; 2014 Advanced Process Control Conference.)

“Our primary focus is on overall equipment availability. We need to work on the high-quality models, to look at the tool over time to see problems that are repeatable. Applied has fleets of tools around the world with common failure modes, knowledge that can be reapplied from customer to customer” without sharing anyone’s IP.

In order for predictive techniques to gain widespread adoption, a great deal of engineering work remains to be done, Scoville said, including improving the quality of the models to avoid false positives. Moyne noted that “false negatives,” wherein a predictive model fails to predict a failure in time, are equally onerous. “It’s a trade-off,” he said. “Deploying predictive solutions effectively requires balancing false positives against false negatives, determining a trade-off point that is maximized to the particular company’s financials.”
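One way to make that trade-off concrete is to sweep an alert threshold and pick the point that minimizes expected cost, given what a false alarm and a missed failure each cost the fab. The sketch below does that on hypothetical scores and labels; the cost model is an assumption for illustration, not a method quoted by Moyne or Applied.

```python
# Hedged sketch of the false-positive/false-negative trade-off: choose the alert
# threshold that minimizes total expected cost on historical events.
import numpy as np

def best_threshold(scores: np.ndarray, failed: np.ndarray,
                   cost_false_alarm: float, cost_missed_failure: float) -> float:
    """scores: model health-risk score per event; failed: 1 if a real failure followed."""
    best_t, best_cost = 0.0, float("inf")
    for t in np.linspace(scores.min(), scores.max(), 200):
        alarms = scores >= t
        false_alarms = np.sum(alarms & (failed == 0))
        missed = np.sum(~alarms & (failed == 1))
        cost = false_alarms * cost_false_alarm + missed * cost_missed_failure
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```

The two cost figures are exactly where "the particular company's financials" enter: a fab where a missed failure scraps a lot of wafers will tolerate far more false alarms than one where technicians are the scarce resource.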

HOW MUCH IS ENOUGH?

Scoville said he believes that the volume of data being stored and analyzed will require wide adoption of Hadoop technology, which by some estimates compresses data to one-tenth the size required by a SQL-based database. “A big change is coming,” Scoville told the APC attendees. “Though the relational database isn’t going away, we are moving very quickly from an Oracle-based world. Hadoop is here. It is still gray, and we have to figure out how to use it. But it will be a part of our future.”

Scoville noted that Applied’s goal is to help customers improve the productivity of and lower operating costs for the tools it sells. “If we can reduce the tool cost of ownership by 10%, year after year, that is what it will take” to stay on the Moore’s Law curve of lower costs.

Asked by an APC attendee how long it may take for predictive technologies to be widely adopted, Scoville said, “I am pretty confident that in a fab, this will be a reality in less than five years. The major manufacturers are well down this path—they are working hard and spending money on it.”

One debatable subject is how much data, gathered over how long a period of time, is required to create accurate predictive models. Scoville suggested that Applied Materials engineers need access to data collected over a year or longer, involving 10 tools with 3 to 5 instances of a particular maintenance event. Moyne suggested that 3 years might be the optimum. And tool event data must be correlated with maintenance data, metrology data, R2R data and other data, all in different formats. It takes, on average, about 30 weeks to create a workable predictive model, and these are not static but must adapt over time utilizing run-time data, he added.

Michael Armacost, a managing director at Applied Global Services, said that as companies expand their ability to store more data, they can upgrade the velocity of incoming data. By collecting more data they can detect problems with greater granularity. “There are so many tools in the fab, engineers are limited in what they can deal with. Now that we can store more data, at higher data rates, there is a point on the velocity curve where we can choke—get too much data—and then we have to filter out the noise.”

Applied engineers are learning to find what Armacost called “the sweet spot, the right velocity for the problem you are trying to solve.” Another challenge is consolidating various forms of data into merged databases “so that we can make some sense out of it. Metrology data and maintenance data sets tend not to be nearly as large as the tool data. As the variety of data comes in, merging it in a way so that it aligns in the right time scale is a big deal,” Armacost said.
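The time-alignment problem Armacost describes can be illustrated with a small example: high-rate tool-trace readings on one clock, sparse metrology results on another. The sketch below uses pandas merge_asof to attach each metrology record to the most recent tool reading; the column names and values are made up for the example.

```python
# Minimal sketch of time-aligning fast tool-trace data with sparse metrology data.
# merge_asof pairs each metrology record with the latest tool reading at or before it.
import pandas as pd

tool_trace = pd.DataFrame({
    "timestamp": pd.date_range("2014-10-01", periods=6, freq="s"),
    "chamber_pressure": [5.01, 5.02, 5.00, 4.97, 5.03, 5.05],
})
metrology = pd.DataFrame({
    "timestamp": pd.to_datetime(["2014-10-01 00:00:02.5", "2014-10-01 00:00:05.2"]),
    "film_thickness": [101.2, 99.8],
})

merged = pd.merge_asof(metrology.sort_values("timestamp"),
                       tool_trace.sort_values("timestamp"),
                       on="timestamp", direction="backward")
print(merged)
```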

The creation of accurate predictive models, capable of detecting and analyzing variability, was a key theme at the APC Conference, with Applied and other companies presenting multiple papers on the mathematical techniques involved.

Helen Armer, who directs the expert knowledge base within Applied’s FabVantage Consulting Group, said that variability in a data set is key to developing accurate models with high prediction power. “Our methodology includes comparing data from a good wafer and a bad wafer, from a good tool and a bad tool, from a good chamber and a bad chamber,” to determine where variances occur, and under what conditions.

“Our data mining and regression techniques on relatively small data sets have produced models with high predictive power that were validated and used to solve excursions and optimize tool performance,” she said.
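In the spirit of the good-versus-bad comparisons Armer describes, the sketch below fits a simple classifier on a small synthetic data set and ranks which sensor features separate good runs from bad ones. The feature names, labels, and model choice are illustrative assumptions, not FabVantage's methodology.

```python
# Illustrative good-vs-bad comparison: train a small classifier and rank which
# (hypothetical) sensor features best separate good wafers from bad ones.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["rf_power_mean", "pressure_std", "he_leak_rate", "endpoint_time"]
X = rng.normal(size=(60, len(features)))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)  # 1 = bad wafer

X_scaled = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(X_scaled, y)
ranking = pd.Series(np.abs(clf.coef_[0]), index=features).sort_values(ascending=False)
print(ranking)  # the largest coefficients point to where good and bad runs diverge
```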

COLLABORATION IS KEY

Often the data gathered from a tool, and the domain expertise of the equipment engineers intimately familiar with that tool, must be joined with other forms of information and expertise that are held by the semiconductor manufacturers. Armer said that many times, for equipment vendors to get to the right answer, they need additional data beyond what comes off of the tool.

“To troubleshoot, we may need to know what the failure was at electrical test. We may need a short loop to identify a root cause, where the semiconductor company would have to share some fab data,” Armer said. “The equipment companies cannot resolve the issues customers want them to resolve if they don’t have the data they require. But with IP being so valuable in this industry, there’s always a delicate balance between what information customers and suppliers will share with each other.”

Tom Sonderman, a pioneer in the APC field while at Advanced Micro Devices, said, “Fabs have tons of equipment-level data, but are they really making sense of it? In a fab, we believe that engineers are spending 70–80% of their time compiling the data, and only 20% of their time analyzing. It should be the opposite.”

John Scoville, senior director of application engineering, Applied Global Services. (Photo courtesy of Dori Sumter and APC 2014 Committee.)
Helen Armer, PhD, is knowledge base director at Applied Global Services.
Tom Sonderman, general manager of the software business unit at Rudolph Technologies, Inc.

According to Sonderman, who is currently general manager of the software business unit at Rudolph Technologies, Inc. in Flanders, New Jersey, the leap from reactive to proactive techniques requires collaboration between the tool vendors and device makers. “Engineers must be able to compare what is going on at the wafer level with what is going on at the tool level. If they see a certain fingerprint on the tool, and then see a certain pattern on the wafer, that is where the world of the fab comes together with the world of the equipment supplier. It makes sense for the two sides to work together.”

As much as the individual companies would like to work largely on their own, the challenges facing the device and tool makers will force close collaborations, Sonderman said.

“It will be just like the design teams at the fabless companies and the foundry guys. They have had to learn to collaborate. At 20nm and below there absolutely has to be a collaboration. The equipment companies and device makers are crossing a threshold, sharing information that they wouldn’t do before,” he said.

SEMATECH SEES A ROLE

Bill Ross, a project manager in the manufacturing group at SEMATECH in Albany, New York, said SEMATECH’s members have identified data interoperability standards as a key issue, one the consortium is uniquely well positioned to resolve. “We have 25–40 meetings a year on equipment productivity with our members. Big data and this problem of getting data into a usable format is the next problem the industry has to solve,” Ross said.

SEMATECH has folded its former International SEMATECH Manufacturing Initiative (ISMI) subsidiary back into the main body of SEMATECH programs. ISMI operated several test-bed EHM programs, involving predictive maintenance and virtual metrology techniques, with Micron Technology, Intel, and other chipmakers involved, he noted. Since SEMATECH has five core members among the semiconductor makers, the consortium can represent the device makers as a group with “a huge payback for the industry,” Ross said.

“The biggest problem now is ‘How do I get the data into a usable form?’ At one member company, they use one set of software to bring the data in, at another they use another [different] set. At SEMATECH we have found no one in the commercial space has all the software needed,” said Ross.

Brad van Eck, who worked at SEMATECH and its ISMI subsidiary until three years ago, said the consortium has examined the data preconditioning challenge in the past, and some of those efforts are winding their way through the SEMI standards committees. More needs to be done, he added.

Van Eck, who now serves as the co-organizer of the APC meeting, said virtual metrology in particular depends on correlating different forms of data. Every process is different, he noted, and virtual metrology depends on having access to varying forms of data, from the process equipment and the metrology tools. “If you have access to all of the data you have a prayer of doing good predictive metrology. In many cases the noise is so great that you don’t get manufacturing-worthy results and can’t trust it. And you have to have a sufficiently large data stream,” he said.

Miller said there is “a powerful drive toward increased transparency across the entire supply chain,” but the industry needs greater controls. “Inevitably, data will have to be made available to the people that need it.”

“There is a continual movement toward tighter and tighter controls and improved data visibility. If you are caught on the wrong side of it, particularly from a tool supplier standpoint, you could find yourself left out,” Miller said.

All of this shows that data drawn from tools has tremendous value, if it receives the right kind of analysis. Getting to that point will draw tool vendors and chip makers together, into an ever-closer symbiotic relationship that starts with trust and ends with higher levels of predictability, productivity, and profits.

For additional information or to comment on this article, contact nanochip_editor@amat.com

[1] See Computer Weekly’s article on sizing of a petabyte at http://www.computerweekly.com/feature/What-does-a-petabyte-look-like