
Artificial Intelligence and Machine Learning Talk with Google's Laurence Moroney


By Poulomi Ghosh

May 17, 2022



We ran into Laurence Moroney of Google during DesignCon 2022 and chatted about artificial intelligence and machine learning. Watch the video to find out all about it.


Laurence Moroney is the lead artificial intelligence advocate at Google. He has a knack for debugging coding problems and is one of the leading experts in artificial intelligence and machine learning. He has written several books, both programming and fiction, including the best-seller "AI and Machine Learning for Coders". Laurence also teaches online AI courses with Coursera, edX, and Harvard.

What are the realities and misconceptions associated with artificial intelligence (AI)?

Laurence: If you imagine the classic hype cycle, it starts with a big peak, then it falls into a trough and then rises again. That peak is called the peak of inflated expectations. With any new, particularly very disruptive technology, hitting the peak of inflated expectations leads to misconceptions. There is a lot of hyperbole about AI, but with the frameworks we build, our goal is to put resources out there for developers to bust through the hype so that people can get hands-on with these things. Once they do that, they can see what is possible and what is real. This is called falling into the trough of disillusionment. When you bust through the hype, you become disillusioned with what you thought was previously possible, but you begin to understand what actually is possible, and then you can start climbing up to what they call the plateau of productivity and begin building real solutions.

Let me give an example to illustrate it. We did some research with doctors around a condition called diabetic retinopathy, which is a leading cause of blindness globally. There are almost 500 million people with some form of diabetes, and if it is not treated properly it can lead to blindness.

Suppose one retina is healthy and one is diseased. You can build a model that detects the difference with about 97% accuracy, which is higher than most ophthalmologists. The goal is to help ophthalmologists scale better, so they can see patients, understand patients, and screen them in just a few minutes rather than hours. It gets even more interesting when you look at the same retina alongside the records of the person it belongs to, for example their age and gender, and have a computer do the same pattern matching.

We built a model that was 97% accurate at determining somebody's gender by looking at a retina. No human can do that; for a human it is a coin flip. There is something in the data that a computer can spot when it does the pattern matching that a human cannot, and that kind of thing is the promise of AI. There is a lot of hyperbole, but once you understand how the technology works, you can start coming up with those solutions, deliver on those promises, and build entirely new things that people had not previously even thought of or may have thought were infeasible. It has been made possible with this type of technology.


How does machine learning’s new programming strategy make AI possible?

Laurence: There is a lot of confusion between artificial intelligence and machine learning. Artificial intelligence is when you make a computer react the way an intelligent living being would. If I show you a picture of a cat, you think it is a cat, not a bunch of pixels. If I show you an image of a retina, you probably guess it is a retina, but you don't think of it as a bunch of pixels. Artificial intelligence reacts the same way you do. It will see a picture of a cat or a picture of a retina, and it will say that it is diseased or not diseased, male or female, and those kinds of things. Ultimately, artificial intelligence is about getting past the hyperbole and the science fiction; it is about looking at a computer and having it respond the way an intelligent being would.

If you see a road that is curving to the left, you are going to turn left. Machine learning is the technique for creating applications that do that. Going back to the retina example, machine learning starts with thousands of images of healthy retinas and thousands of images of diseased retinas. A computer then tries to figure out which pixel patterns consistently indicate disease and which indicate health by doing a kind of pattern matching. It is a lot of brute force, with a lot of smart stuff behind the brute force and guessing. That is what machine learning is. Artificial intelligence makes a computer respond the way an intelligent being would; machine learning is how you get it there.
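To make that idea concrete, here is a minimal sketch of the "learn the rules from labeled examples" approach in TensorFlow/Keras. The folder name, image size, and network shape are illustrative assumptions, not the actual retinopathy model Laurence describes.

```python
import tensorflow as tf

# Labeled examples: hypothetical folders of "healthy" and "diseased" retina images.
train_data = tf.keras.utils.image_dataset_from_directory(
    "retina_images/train", image_size=(224, 224), batch_size=32
)

# A small convolutional network that learns which pixel patterns separate the two classes.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs. diseased
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_data, epochs=10)  # the "brute force" pattern-matching step
```

Nothing in the code spells out what "diseased" looks like; the rules are inferred from the labeled data, which is exactly the distinction Laurence draws between writing rules and learning them.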

Where is AI at present and how will it evolve in the future?

Laurence: I am about six years into the journey of really bringing AI to the masses. When I started, there was a survey that said there were 300,000 AI practitioners globally. Now there are about 15 million. When this whole thing started at 300,000, the only way it could be measured was by counting people who had their names on a published paper that had something to do with AI. Nowadays, we can measure practitioners who are writing code and using machine learning to build artificial intelligence systems.

So we can see mass adoption by developers of all of these new applications that can improve people's lives and make things more efficient. Let me tell you one story. If you turn on the news, it will tell you today's AQI, the air quality index. But that AQI is measured at the sensor station, which could be miles away from your home, and pollution is highly localized. If you live close to a major highway or a major road, it is probably much more serious for you, but AQI readings are not available everywhere because sensors are not everywhere.

A group of high school students in India came up with an idea. They got a portable sensor to measure the AQI in multiple areas and, at each location, took a photograph of the sky. The students gathered lots of data, had a computer match the photographs to the sensor readings through that same kind of fancy pattern matching, wrapped the result in an application, and put it on a mobile phone, like an iPhone or an Android phone. And now you can walk outside, point your camera at the sky, and determine the level of pollution without any kind of fancy sensors. That is the kind of thing that was built by kids.
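The last step of that story, getting a trained model onto a phone, is typically done with TensorFlow Lite. Below is a hedged sketch of that conversion step; the model name and file paths are hypothetical and not taken from the students' actual app.

```python
import tensorflow as tf

# Hypothetical trained Keras model mapping sky photos to pollution levels.
sky_quality_model = tf.keras.models.load_model("sky_quality_model.keras")

# Convert to a compact TensorFlow Lite model suitable for Android or iOS.
converter = tf.lite.TFLiteConverter.from_keras_model(sky_quality_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink/quantize for mobile
tflite_bytes = converter.convert()

with open("sky_quality.tflite", "wb") as f:
    f.write(tflite_bytes)  # bundle this file in the app and run it with the TFLite interpreter
```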

Previously, to get that kind of insight into air quality, you needed a big company, software engineers, and millions of dollars of investment. Now it is a bunch of kids with a phone and a camera. People can build new solutions and new scenarios with AI. I am particularly excited about that.

Can you tell us some key features of Vertex AI and the goal of implementing this tool?

Laurence: Vertex AI is a cloud service from Google. We like to present AI to end-users in two different ways. Number one is the open Google AI ecosystem, which is TensorFlow. It is entirely open-source and free. There is also backend infrastructure you can run on, called Google Colab. The whole idea is to make access as wide as possible by making things free, easy to use, and lowering the barrier to entry. But we also recognize that when you have an AI model that you want to commercialize and execute at world scale, a hosted cloud-based service becomes necessary for updating, maintaining, and managing your data.

Vertex AI is all about building, managing, maintaining, and deploying an AI-based model, and it includes a service called neural architecture search. To build an AI model, you have to figure out what the neural network architecture of that model should be: how many layers, what types of layers, and what they do. That takes a lot of coding and experimentation. With neural architecture search, you throw a bunch of data at it, and it goes and figures out the neural network architecture that will best execute on that data. It creates that model for you, and then you can deploy it in your infrastructure. Vertex AI is all about what you want to operationalize.
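For readers who want a feel for what operationalizing on Vertex AI can look like, here is a rough sketch using the google-cloud-aiplatform SDK's AutoML image training job, which searches over architectures and hyperparameters for you. The project ID, bucket, and dataset paths are placeholders, exact class and parameter names can vary between SDK versions, and Vertex AI's dedicated Neural Architecture Search service has its own, more involved workflow.

```python
from google.cloud import aiplatform

# Placeholder project, region, and staging bucket.
aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-bucket")

# A managed image dataset created from a labeled CSV in Cloud Storage (hypothetical path).
dataset = aiplatform.ImageDataset.create(
    display_name="retina-images",
    gcs_source="gs://my-bucket/retina_labels.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

# AutoML explores model architectures and training settings automatically.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="retina-classifier", prediction_type="classification"
)
model = job.run(dataset=dataset, model_display_name="retina-classifier",
                budget_milli_node_hours=8000)

# Deploy the resulting model to a managed endpoint for serving at scale.
endpoint = model.deploy(machine_type="n1-standard-4")
```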

Which is the best strategy to implement machine learning and AI for any organization?

Laurence: The best strategy is to start by thinking about a solution you already have that includes lots of rules in it; machine learning figures out the rules instead of the programmers. Number two is to think about the things that are currently infeasible for your organization to do because it would be too hard to write the code.

And then start gathering data, label that data, and start building models for that data. There is a technique in AI called transfer learning. It is like standing on the shoulders of giants: an existing model does something very similar to what you want to do, and you can take 90% of that work and change a little bit of it to make it work for you. There is a very popular model called MobileNet, a computer vision model that can recognize 1,000 different types of things. It was built, as the name suggests, to run on mobile devices, and it is highly optimized to run on a cell phone.

Now, suppose you wanted to build a model to do something like detecting pollution from images of the sky. Instead of trying to come up with the neural architecture all by yourself and optimizing it to run on mobile phones, you could use transfer learning on top of MobileNet. The technique is called cutting the head off. It sounds awful, but there is a classification head at the bottom of the model, and in this case MobileNet's head is designed to recognize a thousand different classes. If you are building an air pollution model to detect five different levels of severity, you cut off the bit at the bottom that does the thousand classes, replace it with your five, and retrain while doing something called freezing the existing MobileNet layers. That way you can take advantage of all of the work and all of the learning that was done there and come up with a model of your own very quickly and cheaply, maybe even for free. We have an open-source tool from Google called TensorFlow Lite Model Maker, free for you to use, which will do all of that work for you.
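Here is a short sketch of that "cut the head off and freeze the base" recipe in Keras, using MobileNetV2. The five severity classes and the data folder are illustrative assumptions; TensorFlow Lite Model Maker automates essentially the same steps.

```python
import tensorflow as tf

# Load MobileNetV2 without its 1000-class classification head ("cutting the head off").
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # "freeze" the pretrained MobileNet layers

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1] as MobileNetV2 expects
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),       # new head: 5 pollution-severity levels
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder of sky photos sorted into five severity-level subfolders.
train_data = tf.keras.utils.image_dataset_from_directory(
    "sky_photos/train", image_size=(224, 224), batch_size=32
)
model.fit(train_data, epochs=5)  # only the new head is trained; the frozen base is reused as-is
```

Because the base is frozen, training touches only the small new head, which is why this approach is fast and cheap even without specialized hardware.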

How can industries shift their paradigm from traditional technologies to AI?

Laurence: In traditional programming, you have a problem that you want to solve. You express how to solve that problem using rules, and you write those rules in code. For example, when I open my front door, I want the light to turn on. You probably have a sensor on the front door, and when the door opens, the sensor detects it. You write a rule that says, if the sensor triggers, then turn on the light. It is a very simple scenario. Now think about a much more complex scenario. You have a camera feed and it spots something. Is that a crime or not? What would the rules be that you would write for that? It is incredibly difficult, probably impossible; writing those rules by hand is simply not feasible.
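A minimal illustration of the contrast Laurence draws: the door-and-light rule is trivial to write by hand, while the camera scenario has no tractable hand-written rules. The object names here are hypothetical.

```python
def on_sensor_event(door_sensor, light):
    # Traditional programming: the rule is explicit and written by the programmer.
    if door_sensor.triggered:
        light.turn_on()

def is_crime_scene(camera_frame):
    # What rules would go here? Pixel thresholds? Object counts?
    # In practice this check is built as a learned model trained on labeled scenes,
    # not as hand-coded rules.
    raise NotImplementedError("no practical hand-written rules for this")
```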

So situations like that are where the paradigm change comes in: instead of thinking in code, you start with the data. You gather thousands of scenes labeled as good scenes and thousands labeled as bad scenes. Maybe there are too many people in the room, maybe there is a crime happening, maybe somebody fell over, or whatever. You start to pivot to thinking in those terms: getting the right data and labeling it. When you write code, there is something engineers always have to worry about called corner cases; I thought of these scenarios, but what about another one? The same thing happens with data. I have data for a room that looks like this, but what about somebody who wants hot pink lighting in their room? That kind of thing would be a corner case. So you have to start pivoting toward those concerns and labeling the data first; then your code is about how you design the neural network, run it, optimize it, get your data into a format that is easy to train on, and make it responsible and fair. There is definitely a big pivot that you need to make.

