Cover Story: Big Data and Neural Networks

New Drivers for the Semiconductor Industry

By David Lammers

As streams of data began multiplying over the past decade and the term “big data” became common, concerns mounted about how such large amounts of raw data could be turned into useful information.

Now, a powerful answer has emerged in the form of machine learning: neural networks, or artificial intelligence (AI), are increasingly capable of ingesting voice, image, and many other forms of data and turning them into valuable information. Neural networks and AI applications will be major driving forces for the next generation of semiconductor devices, and will add to the arsenal of data analysis techniques being deployed in semiconductor fabs.

Even as processor and algorithm design teams work to sort out the best technical approaches to neural networks, it is clear that we are at the start of something that will impact the semiconductor industry in ways yet to be fully understood (see figure 1).


Figure 1: Competition is heating up among processors aimed at neural networks, with power consumption being a key concern. (Source: Embedded Vision Alliance, 2017)

Dave Anderson, president of SEMI’s Americas Operations, sees machine learning as a major driver for improving semiconductor device speeds going forward. Voice recognition, language translation, assisted driving, and medical diagnostics are just a few examples of how machine learning is changing the landscape.

“Visual analysis systems will all require a neural network behind them, and that involves a lot of compute power,” said Anderson, who earlier worked as a senior manager at SEMATECH.

While the semiconductor industry’s growth has been driven by personal computers, gaming and smartphones, future growth will come from the need to analyze large amounts of data quickly with neural networks operating in the Cloud and in user devices as well. “We are entering the data phase, with the chip industry poised for another period of rapid growth,” Anderson said.

What’s Good? And What’s Bad?

Linley Gwennap, principal analyst at Microprocessor Report, said “we are in the very beginnings of the whole neural network thing.” Early neural networks have proven able to solve some problems at higher rates of success than humans.

Thus far, much of the network training process has been done on modified graphics processors from NVIDIA Corporation, but Gwennap said the landscape is likely to change as processor design teams at both established processor vendors and startups work on “neural network processors designed from scratch.” Intel has made some key AI-related acquisitions, notably startup Nervana, Inc., and its executives have vowed to be leaders in the field, Gwennap noted. He said the high-performance silicon used to train neural networks now accounts for only a few percent of the processor spending at data centers, but that could rise to 10–20% of what is now about a $10B market.

On the inference engine side (the AI processors used in drones, robots, cars, smartphones, and other end-user systems), Gwennap said semiconductor vendors need to be wary of the power consumed.

Gwennap is convinced neural networks will have a big impact on how data is analyzed. To date, software engineers have had to write complex applications in C code and then spend considerable time tweaking their programs. Neural networks, by contrast, “program themselves. They look at a big pile of data, and sort things out. They look at the patterns and figure out what’s good and what’s bad,” he said.

Is It a Dog?

A neural network “is supposed to emulate the synapses of your brain,” said Gordon Cooper, product manager for the Synopsys embedded vision products based on synthesizable ARC processor cores. A convolutional neural network (CNN) is the current state of the art for visual processing; its layers are trained to recognize something by adjusting the weights between nodes. “For example, when shown an image, it must decide ‘yes or no: is it a dog?’ Depending on the answer, as the weights adjust, you are training the network.”
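As a rough sketch of what Cooper describes (illustrative only, not Synopsys code), a few lines of PyTorch define a tiny CNN with a single “dog or not” output and run one training step; the weight update at the end is the training he refers to. The network shape, data, and hyperparameters below are invented for illustration.

```python
# Minimal sketch (not vendor code): a tiny binary CNN and one training step,
# illustrating how the "is it a dog?" decision adjusts the weights between nodes.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # learnable filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(16 * 8 * 8, 1)        # one "dog / not dog" logit

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch: four 32x32 RGB images with labels 1 = dog, 0 = not dog.
images = torch.randn(4, 3, 32, 32)
labels = torch.tensor([[1.], [0.], [1.], [0.]])

loss = loss_fn(model(images), labels)
loss.backward()          # gradients say how each weight should change
optimizer.step()         # the weight update is the "training" Cooper describes
```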

Much of the advanced driver-assistance systems (ADAS) phenomenon is based on the ability to train neural networks using high-performance computing systems and then deploy the pattern-recognition capability on higher volume inference engines in the vehicles. Some of these inference engines will be small cores added to processors, but others will be high-performance ICs consuming significant fab capacity.

Cooper said the inference engines in ADAS require powerful multi-core system-on-chip (SoC) solutions. ADAS vendors are protective of the particular methods they use to train their neural networks to recognize pedestrians and other obstacles. But all of them require fast silicon to do the inference processing on images coming into the vehicle.

Embedded vision works on the individual images in a video stream, on a frame-by-frame basis. “This is uncompressed, full-frame data. Depending on the megapixel rate of your camera, that is a lot of pixels,” Cooper said. Some customers use quad-core SoCs running as fast as 800 MHz to perform pattern recognition in the ADAS-equipped vehicles. “The ADAS system will need real processing horsepower to go make a decision,” he said.
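A quick back-of-the-envelope calculation shows why. The camera resolution and frame rate below are assumptions (the article does not give figures), but they illustrate the pixel rates an embedded vision SoC must sustain for even a single camera.

```python
# Back-of-the-envelope sketch (assumed numbers, not from the article):
# pixel and byte throughput for one uncompressed ADAS camera feed.
megapixels_per_frame = 2.0        # assumed 2 MP front camera
frames_per_second = 30            # assumed 30 fps video
bytes_per_pixel = 3               # uncompressed RGB, 8 bits per channel

pixels_per_second = megapixels_per_frame * 1e6 * frames_per_second
bytes_per_second = pixels_per_second * bytes_per_pixel

print(f"{pixels_per_second / 1e6:.0f} Mpixel/s")   # 60 Mpixel/s
print(f"{bytes_per_second / 1e6:.0f} MB/s")        # 180 MB/s, per camera
```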

Before an ADAS application can examine an image coming in from a car’s camera and force the car to stop, or not, the inference engine silicon must “figure out what is the region of interest for the image, evaluate different candidates to see if it could be a pedestrian, and report: ‘yes or no, that is a pedestrian,’” Cooper said.
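The sequence Cooper outlines (find regions of interest, evaluate candidates, report yes or no) can be sketched in a few lines. Everything below is a hypothetical placeholder: the sliding-window proposal and the dummy scoring function merely stand in for a vendor’s trained network running on the inference silicon.

```python
# Hypothetical sketch of the pipeline Cooper describes; the region-proposal and
# scoring functions are placeholders, not any vendor's actual ADAS stack.
import numpy as np

def propose_regions(frame, step=64, size=128):
    """Placeholder region-of-interest generator: a coarse sliding window."""
    h, w, _ = frame.shape
    for y in range(0, h - size, step):
        for x in range(0, w - size, step):
            yield (x, y, size, size)

def pedestrian_score(frame, box):
    """Placeholder for a trained network's confidence that the box is a pedestrian."""
    x, y, s, _ = box
    patch = frame[y:y + s, x:x + s]
    return float(patch.mean()) / 255.0   # dummy score; a real system runs inference here

def detect_pedestrians(frame, threshold=0.8):
    # 1. Find regions of interest, 2. score each candidate, 3. report yes or no.
    return [box for box in propose_regions(frame)
            if pedestrian_score(frame, box) >= threshold]

frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)  # one video frame
hits = detect_pedestrians(frame)
print("brake" if hits else "no pedestrian detected")
```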

And this is not a futuristic scenario: Tesla now offers an augmented driving capability so that a car’s vision system can see two or three cars ahead and stop the vehicle before a pileup might occur.

“With Google and Facebook hiring so many people knowledgeable about neural networks, automotive customers are finding it difficult to hire people in this field. It is a struggle to find the right people,” Cooper said.

Not all inference engines will require such multi-core dedicated processing. When neural networks are trained to detect credit card fraud, for example, the results can be deployed with a conventional CPU acting as the inference engine, said David Kanter, principal analyst at Real World Technologies.

“Machine learning will be used in a myriad of ways, in autos, hospitals, and security systems, or to detect spam on the Internet. In some cases, machine learning is not that computationally heavy, not enough to justify a special piece of hardware,” he said.

In many cases, machine learning can be deployed in end-user systems with a small coprocessor added to the main processor, Kanter said. “The hardware is going to be somewhat different from one application to the other.”

Boosting Chip Fab Yields

Could these techniques also work to boost yields in a semiconductor fab, or guide a chip designer?

Chris Rowen, a pioneer of the synthesizable microprocessor sector while at Tensilica (now part of Cadence Design Systems), heads up a venture capital firm, Cognite Ventures, aimed at AI startups. “Manufacturing industries in general are just waking up to the potential in machine learning,” Rowen said (see figure 2).


Figure 2: Neural networks are behind automated speech recognition (ASR), which empowers a voice interface for consumer markets. (Source: Cognite Ventures)

Certainly, high-value industries like semiconductors “are in a strong position to use it. The benefits of being really in control of the process are so huge, and machine learning can introduce predictability into that manufacturing world.” (See figure 3.)


Figure 3: Illustration of the drivers of electronic design evolving to cognitive computing applications. Cognitive computing [1] generally refers to the computer hardware/software that mimics the functioning of the human brain, often leveraging neural network and AI techniques. (Source: Cognite Ventures)

James Moyne, a University of Michigan engineering associate research scientist, said he believes AI techniques will best serve the semiconductor industry when they are used in conjunction with human experts.

“Neural net and AI techniques for big data, such as ‘deep learning,’ will impact semiconductor manufacturing, but it will be far from a panacea. Everyone is looking for a one-size-fits-all technique for these predictive analytics. However, deep learning leaves domain knowledge on the table and therefore is generally not good for things like fault detection, predictive maintenance, and virtual metrology,” he said.

Neural networks might work well as “a layer on top” that finds “odd anomalies” and then alerts an expert to investigate.

“We want to get people thinking about leveraging big data techniques, but we also want to help them understand that they are not a substitute for hard work at configuration and domain knowledge. Eventually we’ll need to carve out the problem space and identify those areas where deep learning might be the best technique and where it is not,” he said.

Kirk Hasserjian, Applied Global Services (AGS) vice president of service product development, argues that “supervised” models incorporate the expertise of the equipment companies as well as the intimate process knowledge of the semiconductor companies. Speaking with Tech Design Forum’s correspondent Paul Dempsey at SEMICON China earlier this year, Hasserjian said these supervised models currently are better at separating the signal from the noise.

Pure machine learning, which relies on “unsupervised” models of unlabeled data, “is essentially looking for groupings and trends, identifying anything that is anomalous,” Hasserjian said. “There’s quite a bit of data coming out of our processes and tools that you can use that modeling for.” [2]
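The unsupervised modeling Hasserjian describes can be illustrated with a small example. The sketch below is not Applied Materials’ CPC tooling; it uses one common off-the-shelf technique, an isolation forest from scikit-learn, on made-up tool-trace summaries, and simply flags runs that do not fit the normal grouping.

```python
# Illustrative sketch only (not Applied Materials' models): an unsupervised
# isolation forest flagging anomalous tool runs in unlabeled data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row summarizes one wafer run: chamber pressure, RF power, temperature.
normal_runs = rng.normal(loc=[50.0, 300.0, 65.0], scale=[0.5, 2.0, 0.3], size=(500, 3))
odd_runs    = rng.normal(loc=[53.0, 290.0, 67.0], scale=[0.5, 2.0, 0.3], size=(5, 3))
runs = np.vstack([normal_runs, odd_runs])

# No labels are given; the model just looks for groupings and outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(runs)
flags = model.predict(runs)                        # -1 marks an anomaly, +1 marks normal

print("flagged runs:", np.where(flags == -1)[0])   # candidates for an engineer to review
```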

Models, both supervised and unsupervised, are part of a larger data analysis framework, computational process control (CPC), being developed by Applied Materials, which impacts both the manufacturing and design processes.

Speaking at the 2016 Advanced Process Control meeting, Hasserjian said CPC—which includes prescriptive and predictive capabilities within a larger computational data analysis framework—is part of the larger evolution from statistical process control (SPC) and advanced process control (APC) (see figure 4).


Figure 4: Machine learning will complement domain knowledge of fab and equipment engineers in the Computational Process Control era. (Source: Applied Materials, Inc.)

Juan Rey, senior director of engineering at Mentor Graphics Corporation, said that “we do know that these neural network algorithms don’t care what they are recognizing. They need to be trained to differentiate a cat from a dog, so we know they should be able to be trained to recognize vias from trenches, or etches in a dual damascene process. Absolutely.”

For Mentor, AI research is just starting, but a team is in place. “We are trying to look into these algorithms,” Rey said, adding that he would like to see the Semiconductor Research Corp. (SRC)— where he sits on an advisory board—put some of the consortium’s research funding into AI techniques.

Data Quality Needs Work

Most neural networks to date have excelled at working with sets of patterned data that have labels (a golden retriever versus a dachshund, or a cancerous tumor versus healthy tissue, for example). But Rowen said neural networks are becoming increasingly adept at taking raw, unlabeled data and coming up with meaningful solutions.

For many organizations, neural networks are like “a shiny new hammer” that companies are still trying to figure out how to use, Rowen said. But the possibilities are high for the technique to find widespread use in the semiconductor industry, where even a 1% yield gain is worth many billions. “Neural networks can be used to take masses of data in situations where there is a clear idea of the outcome but no certainty of what the causality is. This kind of capability could be applied to a fab’s yield issues, where manual techniques often make it more difficult to drill down to the root cause,” Rowen said.
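A toy version of the predictive modeling Rowen points to might look like the sketch below. The data, features, and failure categories are all invented; the point is only that a supervised model trained on labeled yield examples can predict a failure mode and hint at which process variable drives it.

```python
# Minimal sketch with made-up data (no real fab data): a supervised model that
# predicts a failure category from process measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical features per die: etch time, CD measurement, overlay error, film thickness.
X = rng.normal(size=(2000, 4))
# Hypothetical labels: 0 = good die, 1 = via open, 2 = bridge defect.
y = np.clip((X[:, 0] > 1.2).astype(int) + 2 * (X[:, 2] < -1.5).astype(int), 0, 2)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
# Feature importances hint at which process variable drives each failure mode.
print("importances:", model.feature_importances_)
```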

Neural networks can figure out “a complex model of causality; what causes a defect at this certain point. If you have enough yield examples, you can develop very good predictive models, with a high degree of statistical accuracy, based on the type of failure and the cause, and determine what you can do to prevent it. That is very hard to do with manual methods, or with previous statistical methods,” Rowen said.

Moyne said a central challenge facing the semiconductor industry is to create higher-quality data sets, incorporating the several types of data captured in fabs today.

“In our industry, we have a lot of data-quality issues, and we have to do data filtering and feature extraction to augment our data techniques. Neural networks are very good for large data sets, a free-form method of looking for patterns that humans have no idea about. And they are really good when you are not looking for a perfect solution, where you don’t have to be right all the time, such as helping define people’s preferences in order to put up Google ads.

“There is a place for these things, but because it leaves on the floor a lot of the domain knowledge, it is not going to be a cure-all solution,” Moyne said.

Jensen Huang, CEO of NVIDIA, sees big changes underway. Writing on that company’s blog, he argued that “we stand at the beginning of the next era, the AI computing era…. In this era, software writes itself and machines learn. Soon, hundreds of billions of devices will be infused with intelligence. AI will revolutionize every industry.”

Raul Valdes-Perez, adjunct associate professor of computer science at Carnegie Mellon University, differentiates between machine learning and machine discovery. Machine learning finds common patterns in data, and uses them to learn and adapt without being explicitly programmed. Machine discovery, Valdes-Perez said, takes this to another level, with the algorithms assisting humans in “extracting potentially useful and novel knowledge from the common patterns found in the data.”

Moyne’s scenario—with domain experts using neural networks—is almost certainly the way AI will be used initially in a field as complex as semiconductor manufacturing.

But already examples are surfacing of neural networks that are so much faster and cheaper than human experts that whole job categories are threatened. Stock traders, for example, are rapidly being replaced by computer scientists on Goldman Sachs’s securities trading floor. And radiologists—extensively trained doctors who spend decades learning how to read X-rays, MRIs, and other images—can no longer individually match the accuracy or speed of AI systems trained by the cumulative knowledge of multiple experts to recognize cancerous tumors, according to Siddhartha Mukherjee, an oncologist and author of the 2011 Pulitzer Prize-winning book, “The Emperor of All Maladies: A Biography of Cancer.”

In semiconductor manufacturing, Moyne said, some statistical techniques such as partial least squares regression (PLS) are more appropriate than neural networks for certain applications.
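As one concrete example of such a technique, a PLS model can predict a metrology result from correlated tool sensor readings, a common virtual metrology setup. The sketch below uses scikit-learn’s PLSRegression on synthetic data; it is illustrative only and not tied to any particular application Moyne cited.

```python
# Sketch of the kind of PLS model Moyne mentions (illustrative data only):
# predicting a metrology value from correlated tool sensor readings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# 200 wafer runs, 10 highly correlated sensor channels (typical of tool trace data).
latent = rng.normal(size=(200, 2))
sensors = latent @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(200, 10))
# Hypothetical target: post-etch film thickness driven by the same latent factors.
thickness = latent @ np.array([1.5, -0.8]) + 0.1 * rng.normal(size=200)

# PLS handles the collinearity by projecting onto a few latent components.
pls = PLSRegression(n_components=2).fit(sensors, thickness)
predicted = pls.predict(sensors[:5])

print("predicted thickness:", predicted.ravel())
print("actual thickness:   ", thickness[:5])
```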

“Ultimately, it will be a combination. There is no one technique that will solve everything. The quality of the data and the existence of domain knowledge play a large role in what technique you choose. We will need them all,” Moyne said.

For additional information, contact nanochip_editor@amat.com.

[1] https://en.wikipedia.org/wiki/Cognitive_computing

[2] http://www.techdesignforums.com/practice/technique/computational-process-control-applied-materials/