Paul Franzon – The Efficiency of Machine Learning


Paul Franzon discussed machine learning in the electronics industry with us during DesignCon 2019.

Paul Franzon – The Interview

Here are the topics covered in the interview:

  • 0:08 Can you talk about your work?
  • 0:48 Can you give us an example of what problems you solved?
  • 1:56 What is the scope of the Center for Advanced Electronics through Machine Learning (CAEML)?
  • 2:30 What is the difference between machine learning and artificial intelligence?
  • 3:28 What are the business benefits of machine learning in EDA?
  • 4:30 Who will own the intellectual property (IP) with machine learning?
  • 5:12 How can you make a data-driven tool if you don’t own the data?
  • 6:13 What is your next project on machine learning?
  • 7:15 Can you give an example?
  • 7:38 Why did you come to DesignCon?
  • 8:23 Is there a specific way to collect the data for machine learning to be beneficial?
  • 9:01 How much data do you need for machine learning to be beneficial?
  • 10:15 Have you seen any application of machine learning in practice in the electronics industry?
  • 10:51 How is DesignCon important to the industry?

Can you talk about your work?

So, first of all, my name is Paul Franzon. I'm a professor at North Carolina State University. And one of the things I do is serve as a site director for a center, the Center for Advanced Electronics through Machine Learning, abbreviated CAEML. Not "camel" but CAEML.

And what we do in the center is apply machine learning to problems in electronic design automation. We do that for chip design problems, we do that for board design problems. We also have problems in hardware security and underlying algorithms.

Can you give an example of what problems you have solved in the EDA world? Chip design, for example.

We've had many projects run through the center, and there are many projects active right now. I'll give you a couple of examples. One involves a high-speed receiver: it is very time-consuming to simulate it accurately, and you want to evaluate it at different settings for the filters in the receiver.

So recently we produced a model using a deep learning technique called long short-term memory (LSTM) that accurately captures the behavior of such a receiver and lets you run simulations, and thus tune the parameters, very quickly.
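The interview doesn't give implementation details, but the idea of an LSTM behavioral surrogate can be sketched roughly as follows. This is a minimal illustration, assuming PyTorch and made-up waveform shapes, not CAEML's actual model: the network learns to map an input waveform to the receiver's output waveform, so inference replaces a slow circuit simulation.

```python
# Sketch of an LSTM behavioral surrogate for a receiver (hypothetical
# shapes and training data; the real model would train on simulator traces).
import torch
import torch.nn as nn

class ReceiverSurrogate(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # One input feature per time step: the incoming signal sample.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        # Project each hidden state back to one output sample.
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.head(h)          # (batch, time, 1)

# Toy training loop on synthetic data standing in for simulator output.
torch.manual_seed(0)
x = torch.randn(8, 64, 1)            # 8 waveforms, 64 time steps each
y = 0.5 * x                          # pretend the "receiver" halves the signal
model = ReceiverSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Once trained on enough traces, evaluating the network for a new filter setting takes milliseconds, which is what makes fast parameter tuning possible.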

Another recent project is applying deep learning to design rule checking on chips. In modern silicon chips, there are tens of thousands of rules and they’re very complex. It is now beyond the scope of a human to understand the rules unless they have designed the rules. So what we have is an interactive design rule checker that’s trained off the rule set that can interact with the human designer to very quickly allow the human designer to resolve trade-offs related to the design rules.

Those are just two of the recent projects in the center.

Have you seen machine learning or has the center worked on machine learning for anything else other than EDA but actual manufacturing? Are you dealing with any manufacturers? What’s the scope of the center?

The mission of the CAEML center is to produce better models for electronic design, including aspects of reliability, design rules, optimization, circuit design, and interconnect design. We cover a variety of areas, including a number of projects on reliability.

We also have projects in hardware security: how to use machine learning to better secure the hardware against attacks such as the recent Microsemi attack, which got a lot of press.

What’s the difference between machine learning and artificial intelligence?

What’s the difference between machine learning and artificial intelligence? That’s a loaded question that will get different answers from different people.

I'm an engineer, I'm a pragmatist. To me, machine learning is a technique where you get a lot of data and analyze that data to produce quick-to-run models that are trained directly off the data. There are other techniques that together make up the field of artificial intelligence, such as expert systems, which have been around for quite a while, and other solutions. However, the big recent change is the ability to develop models from the data using a variety of techniques, rather than having a human think of how to build a model. I wouldn't call that artificial intelligence, but it's rolled into the field of artificial intelligence. Personally, I don't think we're approaching a singularity just because we can train a model to recognize a cat.

What are the business benefits? Have you seen any potential business benefits of machine learning in EDA?

Are there business benefits from machine learning in EDA? Yes, there are. Now, we're a membership-based center, so we have many member companies. Of course, they make the business case internally, and they don't tell us, because we talk to a lot of people and they don't want to leak their business model to us.

But the business benefits are that you can create new EDA tools that are data-driven rather than driven by the creation of new algorithms, which is much more complex and takes much longer to bring to market. For the companies using the tools, they can create better, more optimized designs. The tools don't replace humans, because there's already a shortage of people who can do this sort of work, but they allow them to be more productive: design more transistors a day, more boards a day, or however you want to measure it.

Both of those translate into business benefits for the EDA vendor or for the company using the end tools that we produce.

So an EDA vendor makes the tool so that the company can build its product, and the company owns all the IP. Now, with machine learning, which is based on data, who's going to own the IP? Is there an IP issue there?

So the question is: is there an IP issue in data-driven tools? First of all, I just want to clarify something. Quite often, even in our center, our code goes directly to the end-user companies, and they often use it directly. Only certain codes are worth productizing as an EDA tool.

If you're going to have an EDA tool driven by data, then what becomes valuable is your data. What we find is that our member companies that are doing designs are very protective of their data. So quite often what we do is generate our own data, which is representative of what the member companies might own. But we don't redistribute the member companies' data. Data is the new IP, as you might put it.

Someone like a Cadence, they don’t have customer data. The customer owns their own data. Cadence just makes the tools. So how can they make a data-driven tool when they don’t own the data?

So the question is how an EDA vendor deals with this problem of the companies possessing the data. Well, there are a couple of solutions I've seen out there. One is that sometimes they do get to collect their customers' data, though it might be encrypted or otherwise obfuscated to protect key IP. Or it might be open: sometimes the EDA vendors have open access to a specific customer's data. It's up to them, then, to protect that data and not redistribute it.

But the other solution I've seen is to use the examples generated by the university. We also generate design kits that are public domain, so they can be used openly to demonstrate new tools and prevent the need for sharing data across companies. So we produce kits, we produce data. Not quite as big as the data a Samsung or an Intel would produce, but still big enough to obtain a result from a data-driven tool. And we can give that data away freely.

Thanks for explaining that. What’s your next project on machine learning?

We've got a number of projects in formulation, including improvements on current projects. Another project that just finished was analog design optimization with machine learning. In terms of my interests, what I see is starting to push machine learning beyond model building and into design improvement: presenting options to the human designer that might be viewed as the machine being creative, but only being creative in a small way, helping the human designer converge on a better design. I see a lot of interest in that.

In addition, I think there are big opportunities in applying quantum computing in a machine learning loop for a number of situations. And we’ve started to look into that.

Finally, we do hardware as well. We’ve been looking at how inference engines can operate on the edge and interface with large memories to do real-time inference in edge computing solutions.

Want to give an example of that?

That work is funded by DARPA, so we're looking at military examples. One example we're looking at is enabling unmanned aerial vehicles to better identify what they're looking at, track what they're looking at, and so forth.

Why did you come to DesignCon?

I came to DesignCon frankly because I was invited to give a tutorial on machine learning yesterday. There we talked about the basics of machine learning and a little bit about the applications, and that tutorial was very well attended. There's a lot of interest at DesignCon in applying machine learning to the problems designers are facing, in addition to building machine learning inference and training engines. And of course DesignCon is a very industry-focused conference, so I can learn about the problems people are facing (for example, what comes next after PAM4 and 112 gigabit-per-second interfaces) and think about what some of the solutions to those problems might be.

It sounds like in our conversation, data’s come up a little bit. And collecting the data is very key. Is there a certain way to collect the data in order for it to be beneficial to create a machine learning model?

You're asking about data collection and preparation for machine learning. That is a key step. What I'm seeing more and more is that you can't just treat machine learning as a tool you throw at a problem. Data preparation for machine learning tools is critical. Sometimes you need to do feature extraction; sometimes you can use just the raw data. But you've got to think about it and ask: is this raw data the right thing to present to the machine learning tools for training and inference? Often it's not, and you have to do preparation on the data.
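The raw-data-versus-feature-extraction choice he describes can be sketched in a few lines. This is an illustrative example with made-up feature names, not anything specific to CAEML: instead of feeding a model hundreds of raw waveform samples, each waveform is summarized by a handful of engineered features.

```python
# Sketch of feature extraction as data preparation (hypothetical features;
# real signal-integrity work would use domain-specific measurements).
import numpy as np

def extract_features(waveform):
    """Reduce one raw waveform to a small feature vector."""
    return np.array([
        waveform.mean(),                   # DC offset
        waveform.std(),                    # amplitude spread
        waveform.max() - waveform.min(),   # peak-to-peak swing
        np.abs(np.diff(waveform)).max(),   # sharpest sample-to-sample transition
    ])

rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 512))          # 100 raw waveforms, 512 samples each
features = np.vstack([extract_features(w) for w in raw])
# 512 raw inputs per sample become 4 features per sample.
```

Whether to hand the model `raw` or `features` is exactly the judgment call he's describing: the raw samples preserve everything, while the features encode what an engineer believes matters.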

How much data do you need for machine learning to be beneficial?

As I'm sure you know, you need a lot. Quite often what we do is take a combination of real designs and use them to generate synthetic designs to produce the data, which has on the order of a million training samples in it. I must admit it does vary from problem to problem. One thing we have been doing is machine-learning-based optimization with as few as 50 samples. That is doable for the right problem, with the right set of machine learning tools: in that case, what's called surrogate modeling together with Bayesian optimization or a candidate algorithm. But other times, if you want to train a deep network, for example, you need a very large, very rich data set, and you somehow have to generate on the order of a million samples for that training data set.
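The 50-sample regime he mentions can be illustrated with a small surrogate-model optimization loop. This is a generic sketch, assuming scikit-learn's Gaussian process as the surrogate and a toy one-dimensional objective standing in for an expensive circuit simulation; it is not CAEML's actual tooling.

```python
# Sketch of surrogate-based optimization: fit a cheap model to a few
# expensive evaluations, then let an acquisition rule pick the next sample.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_objective(x):
    """Stand-in for a slow simulation; minimum at x = 0.3."""
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))        # a few initial "simulations"
y = expensive_objective(X).ravel()

gp = GaussianProcessRegressor()
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(20):                        # 25 total samples, well under 50
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Lower-confidence-bound acquisition: favor low predictions or high
    # uncertainty, balancing exploitation against exploration.
    acq = mu - 1.0 * sigma
    x_next = candidates[np.argmin(acq)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next).ravel())

best = X[np.argmin(y), 0]                  # should land near 0.3
```

The point of the exercise is budget: every loop iteration costs one "simulation," so the surrogate has to make each of those 25 evaluations count.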

But there are ways of doing that, and those methods of generating the data are themselves an intellectual exercise: how can you generate a useful range of data for training a deep learning model?

Have you seen any application of machine learning in EDA, signal integrity, or the electronics industry in practice today?

Yes, actually. Within our center we produce machine learning models for the drivers, the interconnect structures, and the receivers. We have a project right now at Georgia Tech, within the center (I'm not running the project, but a colleague of mine is), where they're trying to use machine learning to identify the combinatorial worst case for simulation. So rather than simulating every net and every crosstalk scenario in the design, you use machine learning to produce a synthetic example of the worst-case combination. Then all you have to do is solve that problem to solve the design as a whole.

What would you say about DesignCon and how it’s important to the industry?

Obviously, as an academic, I think education is important and I think any engineer needs to continuously educate themselves and improve their knowledge base and improve their ability to execute. The world changes almost totally every five to ten years and engineers have to live with that cycle and be prepared for that cycle and prepare themselves for that cycle. Conferences such as DesignCon can help educate engineers as to what the upcoming changes are, how to prepare for them, and sometimes even train them directly in how to prepare for those changes.

I totally want to know how to apply machine learning to our business and I want to do it now.

Again, for your own case, the issue comes down to: do you have enough data for a data-driven model to identify those relationships? I mean, if you introduce a new material and it catches you off guard because it doesn't adhere well enough to the other laminates, there's no data. You're just going to be caught out.

But for regular work that we do over and over, or things that fit within a universe we have a lot of data on, machine learning should be able to help a lot with, let's say, speeding up the process?

Yes. The way that we think about it today, even any of our automation that we put in place, is the automation will run and then the human will review the results. So the human is not running automation, waiting, running, waiting, running, waiting. The automation runs and returns back all the results and then the human quickly sifts through.

So I’m thinking that machine learning could help reduce the amount of engagement a human has to do?

Yes. But in the beginning it will be the human engaging with the machine learning and creating the data set for the machine learning to learn from.

Human-generated data, capturing the relationships the human can identify, is a good thing for machine learning to learn from.

So say my project, business-wise, is that I want to reduce the time it takes, meaning I want to reduce the human involvement: something very big, a general goal like that. It sounds like the first step is to define the exact problem you are trying to solve, and it might be a list of 20 problems, all related to each other. Then, once you've defined the problem well, you can define how the data has to be captured.

