Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party.
Imagine you’re going to the grocery store to buy ice cream. If you’re an ice cream lover like me, this probably happens regularly. Normally, I go to the store closest to my home, but every so often I opt to go to a different one, in search of my ice-cream white whale: raspberry chocolate chip.
When you’re in a new store searching for your favorite-but-hard-to-find flavor of ice cream, you might not know exactly where it is, but you’ll probably know that you should head toward the refrigerators, that it’s in the aisle labeled frozen foods, and that it’s probably not in the same section as the frozen pizza.
My ability to find ice cream in a new store is not instinctive, even though it feels like it. It is the result of years of memories navigating the many sections and aisles of different grocery stores, using visual cues like refrigerators or aisle signs to figure out if I am on the right track.
Today, when we hear about “machine learning,” we’re actually talking about how Google teaches computers to use existing information to answer questions like: Where is the ice cream? Or, can you tell me if my package has arrived on my doorstep? For this edition of Ask a Techspert, I spoke with Rosie Buchanan, who is a senior software engineer working on Machine Perception within Google Nest.
She not only helped explain how machine learning works, but also told me that starting today, Nest Aware subscribers can receive a notification when their Nest Hello, using machine learning, detects that a package has been delivered.
What is machine learning?
I’ll admit: Rosie came up with the food metaphor. She told me that when you’re looking for something to eat, you have a model in your head. “You learn what to eat by seeing, smelling, touching and by using your prior experience with similar things,” she says. “With machine learning, we’re teaching the computer how to do something, often with better accuracy than a person, based on past understanding.”
How do you get a machine to learn?
Rosie and her team teach machines through supervised learning. To help Nest cameras identify packages, they use data that they know contains the “right answers,” which in this case are photos of packages. They then feed these data sets into the computer so that it can create an algorithmic model based on the images they provided. This is called a training job, and it requires hundreds of thousands of images. “Over time, the computer is able to independently identify a delivered package without assistance,” Rosie says.
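To make the idea of supervised learning concrete, here's a toy sketch in Python. It is not Nest's actual system: each "image" is reduced to two made-up features ("brownness" and "rectangularity") with a label saying whether it's a package. Training just averages the feature vectors for each label, and prediction assigns a new example to the nearest average.

```python
# Toy supervised learning: learn "package vs. not" from labeled examples.
# The features and numbers here are illustrative assumptions, not real data.

def train(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    # A centroid is the average feature vector seen for each label.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose centroid is closest to the new example."""
    def squared_distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: squared_distance(centroids[label]))

# Labeled "right answers": (brownness, rectangularity) -> package or not.
training_data = [
    ((0.9, 0.80), 1), ((0.8, 0.9), 1), ((0.7, 0.95), 1),  # packages
    ((0.2, 0.10), 0), ((0.3, 0.2), 0), ((0.1, 0.30), 0),  # trees, dogs...
]
model = train(training_data)
print(predict(model, (0.85, 0.9)))  # a brown, rectangular object -> 1
```

A real training job does the same thing in spirit, just with hundreds of thousands of images and a far richer model than two averaged numbers.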
How do you figure out what to make a machine learn?
Rosie told me that package detection was one of the most requested features from Nest Hello users. “In particular, we’re trying to solve problems based on what users want,” she says. “Home safety and security is a huge area for our users.” By bringing package delivery notifications to Nest Aware, Rosie and her team have found a use for machine learning that eliminates the tedious task of waiting around for your delivery.
Do you need a massive supercomputer to do machine learning?
That depends on whether you’re creating a machine learning model or using it. If you’re a developer like Rosie, you’ll need some powerful computers. But if you want to see whether there’s a package on your doorstep, you don’t need more than a video doorbell. “When engineers develop a machine learning model, it can take a ton of computing power to teach it what it needs to know,” Rosie says. “But once it’s trained, a machine learning model doesn’t necessarily take up a lot of space, so it can run basically anywhere, like in your smart doorbell.”
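Rosie's point about trained models being small can be sketched in a few lines. In this hypothetical example, the "model" a doorbell would receive is just a handful of numbers produced by an expensive offline training job; running it is only a few arithmetic operations.

```python
import json

# Hypothetical weights produced by a heavy offline training job.
# Real models are larger, but the principle is the same: training is
# expensive, while the trained artifact can be compact.
trained_model = {"weights": [2.1, 1.7], "bias": -2.5}

# The whole model serializes to a few dozen bytes -- easy to ship to a device.
payload = json.dumps(trained_model)
print(len(payload), "bytes")

def detect_package(model, features):
    """Cheap on-device inference: a weighted sum and a threshold."""
    score = sum(w * f for w, f in zip(model["weights"], features)) + model["bias"]
    return score > 0  # True means "package detected"

print(detect_package(trained_model, (0.9, 0.8)))  # brown & rectangular -> True
```

This is why a video doorbell can run a model that took racks of powerful computers to train: the doorbell only does the cheap last step.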
Can machines understand some things that we humans can’t?
According to Rosie, yes. “We can often describe the things we’re learning,” she says, “but there are things we can’t describe, and machines are good at understanding these observations.” It’s called black box learning: We can tell the model is learning something but we can’t quite tell what it is.
A great example of this is when a package arrives at your doorstep. Rosie’s team shows the network lots of pictures of packages, and lots of pictures of other things (trees, dogs, bananas, you name it). They tell the network which images are packages and which ones are not. The network is made up of different nodes, each trying to learn how to identify a package on its own. One node might learn that many packages are brown, and another might notice that many are rectangular.
“These nodes work together to start putting together a concept of what a package is, eventually coming up with a concept of ‘packageness’ that we as humans might not even understand,” Rosie says. “At the end, we don’t actually know exactly what the network learned as its definition of ‘packageness,’ whether it’s looking for a brown box, a white bag or something else.” With machine learning, teams can show a network a new picture and it may tell us there’s a package in it, but we can’t fully know exactly how it made that decision.
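The idea of nodes each picking up one cue and then combining can be sketched like this. The cue names below are illustrative assumptions for the sake of the example; as Rosie notes, a real network's internal definition of "packageness" stays opaque.

```python
# Toy sketch: each "node" has learned one cue, and the nodes vote together.

def brown_node(pixel_avg):
    """Learned: many packages are brown (a mid-range pixel average here)."""
    return 1 if 0.5 < pixel_avg < 0.8 else 0

def rectangle_node(aspect_ratio):
    """Learned: many packages are roughly rectangular."""
    return 1 if 1.0 <= aspect_ratio <= 2.0 else 0

def edge_node(edge_density):
    """Learned: packages tend to have crisp, straight edges."""
    return 1 if edge_density > 0.4 else 0

def is_package(pixel_avg, aspect_ratio, edge_density):
    votes = (brown_node(pixel_avg)
             + rectangle_node(aspect_ratio)
             + edge_node(edge_density))
    return votes >= 2  # majority vote across the nodes

print(is_package(0.65, 1.4, 0.6))  # a cardboard-box-like input -> True
```

In a real network the nodes aren't hand-written rules like these; they emerge from training, which is exactly why we can't always say what each one learned.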
What’s the best part about working on machine learning?
Rosie, who’s been at Google for over five years, says it’s all about working on the unknown. “We get to work on problems that we don’t know are actually solvable,” she says. “It’s exciting to get started on something while knowing that it might not be feasible.”
So will machine learning be able to identify that raspberry chocolate chip is the best flavor of ice cream ever created? Probably not. We’ll still need human knowledge to confirm that. But machine learning will help us in other ways, like watching for a package to be delivered so you can spend that precious time perusing the frozen foods section.