Only Google could bake a mathematical construct into the name of a phone chip and get away with it.
We now know that the Pixel 6 is real, and it will have Google’s own SoC named “Tensor” onboard powering all the things that make a phone more Pixel-ish. The first part is pretty cool — the Pixel 6 certainly looks like that flagship Pixel phone that so many people are intrigued by, and yeah, I want one, too. But the part about it having Google’s own chip onboard is an even bigger deal.
The Pixel 6 could be a flop, or it could turn out to be one of the best Android phones available. We won’t know much about that until we can get our hands on one. But the Tensor chip has a lot more riding on it because Google has such high hopes for it. If successful at doing all the things a smartphone demands, Tensor will be a huge success that very few companies can imitate.
But enough about the Pixel 6. Let’s talk about the Tensor name because it’s about the most Google-y name for phone hardware ever. For someone unfamiliar with Google as a company and all the other stuff it does, calling a chip Tensor seems pretty dumb. A tensor can be a muscle that tenses to stretch another body part out — we all have them in places like our ears and mouths. Interestingly enough, Tensor is also a company that makes skateboard parts. And it’s an abstract mathematical object used to express multilinear relationships — the kind of very nerdy, Good Will Hunting-level mathematics. Guess which definition Google really loves?
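If the mathematical definition sounds intimidating, here’s a minimal sketch of how programmers usually meet tensors in practice: as plain multi-dimensional arrays, where a scalar, a vector, and a matrix are just tensors of increasing rank. (The shapes below are made up for illustration.)

```python
import numpy as np

# A rank-0 tensor is a scalar, rank-1 a vector, rank-2 a matrix, and so on.
scalar = np.array(3.0)                     # rank 0
vector = np.array([1.0, 2.0, 3.0])         # rank 1
matrix = np.array([[1.0, 0.0],
                   [0.0, 1.0]])            # rank 2
image_batch = np.zeros((32, 224, 224, 3))  # rank 4: batch, height, width, color channels

for t in (scalar, vector, matrix, image_batch):
    print(t.ndim, t.shape)
```

That last one is roughly the shape a batch of photos takes on its way into an image-recognition model, which is why the word keeps showing up in AI.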
That’s because this particularly complicated mathematics can also be used for artificial intelligence, and Google loves every single thing about AI — even the bad parts. Google even named the first chip it designed the Tensor Processing Unit. So no, this isn’t the first Google chip to power a computer, and it’s not even the first one named Tensor.
This isn’t the first Google Tensor chip.
Most companies do AI calculations and machine learning through what is known as neural networks. I have only a rudimentary grasp of the neural networks inside our bodies: I know they exist and that they sit between an input and an output. So, for example, if you want your hand to move and grasp something, a neural network is part of what turns that intent into the actual movement.
Computing neural networks do the same thing: they take one or more inputs, drive them through a set of processes that sit in the middle and are hidden (meaning nothing outside the network touches them directly — only the input feeds into them and only the output comes out), then push a result through some sort of output. So when you hear about programmers feeding photos of cats or chain-link fences into a neural network, those processes in the middle are the ones doing the “learning” and will “remember” what they have learned.
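The input → hidden → output idea can be sketched in a few lines of code. This is a toy forward pass, not anything Google ships — the weights here are random stand-ins, where a real network would have learned them from training data.

```python
import numpy as np

# Random stand-in weights for illustration; a real network learns these.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))  # input (4 values) -> hidden (3 units)
W_output = rng.normal(size=(3, 2))  # hidden (3 units) -> output (2 scores)

def forward(x):
    # The "hidden" middle layer: nothing outside sees these values directly.
    hidden = np.maximum(0, x @ W_hidden)
    # The output the rest of the system actually uses.
    return hidden @ W_output

x = np.array([0.2, 0.5, 0.1, 0.9])  # an input: say, a handful of pixel values
scores = forward(x)                 # two scores: maybe "cat" vs. "chain-link fence"
print(scores)
```

Everything interesting happens in that hidden middle step; training a network is just the process of nudging those weight arrays until the output scores come out right.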
To do the whole neural network thing efficiently, the processors doing the “learning” (enough of the quotes — we all know computers can’t actually learn) need some sort of AI acceleration if you want predictable and accurate results. NVIDIA, Intel, and AMD have spent a great deal of time and money building out GPUs (Graphics Processing Units) that act as excellent AI accelerators, and we can see that alongside the huge leap in video game graphics over the past few years. But, of course, those GPUs can also drive cars or identify cats or lines of chain-link fencing.
Google’s TPU only exists to act as an AI accelerator.
Google likes to Google, so it takes things a step further with its TPU. A TPU has one function: act as an AI accelerator. You’ll find huge banks of them inside server racks at Google cloud sites, where they live only to crunch numbers the right way so that the input (an abstract data array, since we’re getting all nerdy up in here) can produce a tangible output. That output can be wrong or right, but the only thing that really matters is that it’s the same every time the same data is fed into the neural network where the TPU cluster can process it. You make the result right by feeding in more data until the output matches what you expect.
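That “same data in, same answer out” point is easy to show. Below is a hedged sketch, not real TPU code: the computation is just a matrix multiply (the bread-and-butter operation these accelerators are built for), and running it twice on the same data array gives a bit-identical result.

```python
import numpy as np

# Made-up weights for illustration only.
weights = np.array([[0.5, -0.2],
                    [0.1,  0.8]])

def crunch(data):
    # A matrix multiply: the core operation an AI accelerator speeds up.
    return data @ weights

data = np.array([1.0, 2.0])   # the "abstract data array" going in
first = crunch(data)
second = crunch(data)
print(np.array_equal(first, second))  # True: same input, same output, every time
```

Whether that output is *right* is a separate question — that’s what all the training data is for.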
Yes, this is super geeky, and even the nerdiest of us probably don’t understand it all. I know I don’t, but I do understand how it works — in theory, anyway. What’s important is that we all understand that to Google, the word Tensor means a piece of hardware used for AI and machine learning, and none of the other definitions matter. That’s why the company named its smartphone chip Tensor.
We don’t know most of the details surrounding the Google Tensor chip, but we know one thing — it’s designed to be really good at the computations that make things like live caption, photo enhancement, and real-time translation work. Google actually announced these things in a recent post on Twitter. So it really was designed to make the Pixel 6 more Pixel-ish; I wasn’t joking around when I said that earlier. We hope it also does other stuff like play games or surf the web really well, but on-device AI is the one thing we can count on it for.
I’ll take a side order of fries with my abstract data array neural network accelerator, thanks.
If you’re Google and you make a smartphone chip designed primarily to be good at AI above all else, you name it Tensor. But, hey — it’s better than the Google Hidden Abstract Data Array Neural Network Accelerator, right?