Neuroscientist and tech entrepreneur Jeff Hawkins claims he’s figured out how intelligence works—and he wants every AI lab in the world to know about it.

TruckElectric

“We’ll never have true AI without first understanding the brain”
Neuroscientist and tech entrepreneur Jeff Hawkins claims he’s figured out how intelligence works—and he wants every AI lab in the world to know about it.
March 3, 2021
Photo: Patrick T Powers


The search for AI has always been about trying to build machines that think—at least in some sense. But the question of how alike artificial and biological intelligence should be has divided opinion for decades. Early efforts to build AI involved decision-making processes and information storage systems that were loosely inspired by the way humans seemed to think. And today’s deep neural networks are loosely inspired by the way interconnected neurons fire in the brain. But loose inspiration is typically as far as it goes.

Most people in AI don’t care too much about the details, says Jeff Hawkins, a neuroscientist and tech entrepreneur. He wants to change that. Hawkins has straddled the two worlds of neuroscience and AI for nearly 40 years. In 1986, after a few years as a software engineer at Intel, he turned up at the University of California, Berkeley, to start a PhD in neuroscience, hoping to figure out how intelligence worked. But his ambition hit a wall when he was told there was nobody there to help him with such a big-picture project. Frustrated, he swapped Berkeley for Silicon Valley and in 1992 founded Palm Computing, which developed the PalmPilot—a precursor to today’s smartphones.

But his fascination with brains never went away. Fifteen years later, he returned to neuroscience and set up the Redwood Center for Theoretical Neuroscience (now at Berkeley). Today he runs Numenta, a neuroscience research company based in Silicon Valley. There he and his team study the neocortex, the part of the brain responsible for everything we associate with intelligence. After a string of breakthroughs in the last few years, Numenta changed its focus from brains to AI, applying what it has learned about biological intelligence to machines.

Hawkins’s ideas have inspired big names in AI, including Andrew Ng, and drawn accolades from the likes of Richard Dawkins, who wrote an enthusiastic foreword to Hawkins’s new book A Thousand Brains: A New Theory of Intelligence, published March 2.

I had a long chat with Hawkins on Zoom about what his research into human brains means for machine intelligence. He’s not the first Silicon Valley entrepreneur to think he has all the answers—and not everyone is likely to agree with his conclusions. But his ideas could shake up AI.

Our conversation has been edited for length and clarity.

Why do you think AI is heading in the wrong direction at the moment?

That’s a complicated question. Hey, I’m not a critic of today’s AI. I think it’s great; it’s useful. I just don’t think it’s intelligent.

My main interest is brains. I fell in love with brains decades ago. I’ve had this attitude for a long time that before making AI, we first have to figure out what intelligence actually is, and the best way to do that is to study brains.

Back in 1980, or something like that, I felt the approaches to AI were not going to lead to true intelligence. And I’ve felt the same through all the different phases of AI—it’s not a new thing for me.

I look at the progress that has been made recently with deep learning and it’s dramatic, it’s pretty impressive—but that doesn’t take away from the fact that it’s fundamentally lacking. I think I know what intelligence is; I think I know how brains do it. And AI is not doing what brains do.

Are you saying that to build an AI we somehow need to re-create a brain?

No, I don’t think we’re going to build direct copies of brains. I’m not into brain emulation at all. But we’re going to need to build machines that work along similar principles. The only examples we have of intelligent systems are biological systems. Why wouldn’t you study that?

It’s like I showed you a computer for the first time and you say, “That’s amazing! I’m going to build something like it.” But instead of looking at it, trying to figure out how it works, you just go away and start trying to make something from scratch.

So what is it brains do that’s crucial to intelligence that you think AI needs to do too?

There are four minimum attributes of intelligence, a kind of baseline. The first is learning by moving: we cannot sense everything around us at once. We have to move to build up a mental model of things, even if it’s only moving our eyes or hands. This is called embodiment.

Next, this sensory input gets taken up by tens of thousands of cortical columns, each with a partial picture of the world. They compete and combine via a sort of voting system to build up an overall viewpoint. That’s the thousand brains idea. In an AI system, this could involve a machine controlling different sensors—vision, touch, radar and so on—to get a more complete model of the world, although there will typically be many cortical columns for each sense, such as vision.

Then there’s continuous learning, where you learn new things without forgetting previous stuff. Today’s AI systems can’t do this. And finally, we structure knowledge using reference frames, which means that our knowledge of the world is relative to our point of view. If I slide my finger up the edge of my coffee cup, I can predict that I’ll feel its rim, because I know where my hand is in relation to the cup.
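To make the voting and reference-frame ideas above a little more concrete, here is a toy Python sketch. It is not Numenta’s algorithm, just an illustration under made-up assumptions: each “column” holds a tiny model that maps locations on an object to the feature expected there, and columns that sense different patches of the object vote on what they are touching.

# Toy illustration of the "thousand brains" voting idea; not Numenta's real code.
# Each column knows every object as a {location: feature} reference frame and
# votes for the objects consistent with what it senses at its own location.

from collections import Counter

# Hypothetical object models: the feature expected at each location on the object.
OBJECT_MODELS = {
    "coffee_cup": {"rim": "smooth_edge", "handle": "curved_loop"},
    "soda_can": {"rim": "sharp_edge", "body": "smooth_metal"},
}

def column_vote(location, feature):
    """One column's vote: every object whose model predicts this feature at this location."""
    return [obj for obj, model in OBJECT_MODELS.items() if model.get(location) == feature]

def recognize(observations):
    """Sum the votes from many columns, each sensing a different patch of the object."""
    votes = Counter()
    for location, feature in observations:
        for obj in column_vote(location, feature):
            votes[obj] += 1
    return votes.most_common(1)[0][0] if votes else None

# Two "columns" touch different parts of the same object and reach a consensus.
print(recognize([("rim", "smooth_edge"), ("handle", "curved_loop")]))  # coffee_cup

Predicting what a finger will feel at the rim, as in the coffee-cup example above, would then just be a lookup into the same location-to-feature model.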

Your lab has recently shifted from neuroscience to AI. Does that correspond to your thousand brains theory coming together?

Pretty much. Up until two years ago, if you walked into our office, it was all neuroscience. Then we made the transition. We felt we’d learned enough about the brain to start applying it to AI.

What kinds of AI work are you doing?

One of the first things we looked at was sparsity. At any one time, only 2% of our neurons are firing; the activity is sparse. We’ve been applying this idea to deep-learning networks and we’re getting dramatic results, like 50 times speed-ups on existing networks. Sparsity also gives you more robust networks, lower power consumption. Now we’re working on continuous learning.
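The sparsity idea can be sketched with a simple k-winners-take-all activation: keep only the strongest few units in a layer active and zero out the rest. This is a minimal NumPy illustration, not Numenta’s implementation; the 2% figure comes from the interview, and everything else is assumed for the example.

import numpy as np

def k_winners_take_all(activations, sparsity=0.02):
    """Keep only the top `sparsity` fraction of units active; zero out the rest."""
    k = max(1, int(round(sparsity * activations.size)))
    threshold = np.partition(activations, -k)[-k]  # value of the k-th largest unit
    return np.where(activations >= threshold, activations, 0.0)

rng = np.random.default_rng(0)
layer_output = rng.standard_normal(1000)        # dense pre-activations from some layer
sparse_output = k_winners_take_all(layer_output)
print(np.count_nonzero(sparse_output))          # ~20 of 1000 units remain active

The speed and power benefits described above come from skipping computation for all the zeroed units, which this toy version does not attempt to do.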

It’s interesting that you include movement as a baseline for intelligence. Does that mean an AI needs a body? Does it need to be a robot?

In the future I think the distinction between AI and robotics will disappear. But right now I prefer the word “embodiment,” because when you talk about robots it conjures up images of humanlike robots, which isn’t what I’m talking about. The key thing is that the AI will have to have sensors and be able to move them relative to itself and the things it’s modeling. But you could also have a virtual AI that moves in the internet.

This idea is quite different from a lot of popular conceptions of intelligence as a disembodied brain.

Movement is really interesting. The brain uses the same mechanisms to move my finger over a coffee cup, or move my eyes, or even when you’re thinking about a conceptual problem. Your brain moves through reference frames to recall facts that it has stored in different locations.

The key thing is that any intelligent system, no matter what its physical form, learns a model of the world by sensing different parts of it, by moving in it. That’s bedrock; you can’t get away from that. Whether it looks like a humanoid robot, a snake robot, a car, an airplane, or, you know, just a computer sitting on your desk scooting around the internet—they’re all the same.

How do most AI researchers feel about these ideas?

The vast majority of AI researchers don’t really embrace the idea that the brain is important. I mean, yes, people figured out neural networks a while ago, and they’re kind of inspired by the brain. But most people aren’t trying to replicate the brain. It’s just whatever works, works. And today’s neural networks are working well enough.

And most people in AI have very little understanding of neuroscience. It’s not surprising, because it’s really hard. It’s not something you just sit down and spend a couple of days reading about. Neuroscience itself has been struggling to understand what the hell’s going on in the brain.

But one of the big goals of writing this book was to start a conversation about intelligence that we’re not having. I mean, my ideal dream is that every AI lab in the world reads this book and starts discussing these ideas. Do we accept them? Do we disagree? That hasn’t really been possible before. I mean, this brain research is less than five years old. I’m hoping it’ll be a real turning point.

How do you see these conversations changing AI research?

As a field, AI has lacked a definition of what intelligence is. You know, the Turing test is one of the worst things that ever happened, in my opinion. Even today, we still focus so much on benchmarks and clever tricks. I’m not trying to say it’s not useful. An AI that can detect cancer cells is great. But is that intelligence? No. In the book I use the example of robots on Mars building a habitat for humans. Try to imagine what kind of AI is required to do that. Is that possible? It’s totally possible. I think at the end of the century, we will have machines like that. The question is how do we get away from, like, “Here’s another trick” to the fundamentals needed to build the future.

What did Turing get wrong when he started the conversation about machine intelligence?

I just mean that if you go back and read his original work, he was basically trying to get people to stop arguing with him about whether you could build an intelligent machine. He was like, “Here’s some stuff to think about—stop bothering me.” But the problem is that it’s focused on a task. Can a machine do something a human can do? And that has been extended to all the goals we set for AI. So playing Go was a great achievement for AI. Really? [laughs] I mean, okay.

The problem with all performance-based metrics, and the Turing test is one of them, is that it just avoids the conversation or the big question about what an intelligent system is. If you can trick somebody, if you can solve a task with some sort of clever engineering, then you’ve achieved that benchmark, but you haven’t necessarily made any progress toward a deeper understanding of what it means to be intelligent.

Is the focus on humanlike achievement a problem too?

I think in the future, many intelligent machines will not do anything that humans do. Many will be very simple and small—you know, just like a mouse or a cat. So focusing on language and human experience and all this stuff to pass the Turing test is kind of irrelevant to building an intelligent machine. It’s relevant if you want to build a humanlike machine, but I don’t think we always want to do that.

You tell a story in the book about pitching handheld computers to a boss at Intel who couldn’t see what they were for. So what will these future AIs do?

I don’t know. No one knows. But I have no doubt that we will find a gazillion useful things for intelligent machines to do, just like we’ve done for phones and computers. No one anticipated in the 1940s or 50s what computers would do. It’ll be the same with AI. It’ll be good. Some bad, but mostly good.

But I prefer to think of this in the long term. Instead of asking “What’s the use of building intelligent machines?” I ask “What’s the purpose of life?” We live in a huge universe in which we are little dots of nothing. I’ve had this question mark in my head since I was a little kid. Why do we care about anything? Why are we doing all this? What should our goal be as a species?

I think it’s not about preserving the gene pool: it’s about preserving knowledge. And if you think about it that way, intelligent machines are essential for that. We’re not going to be around forever, but our machines could be.

I find it inspirational. I want a purpose to my life. I think AI—AI as I envision it, not today’s AI—is a way of essentially preserving ourselves for a time and a place we don’t yet know.


SOURCE: MIT Technology Review


AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything”

Thirty years ago, Hinton’s belief in neural networks was contrarian. Now it’s hard to find anyone who disagrees, he says.
November 3, 2020
Photo: Noah Berger / AP

  • On the AI field’s gaps: "There’s going to have to be quite a few conceptual breakthroughs...we also need a massive increase in scale."
  • On neural networks’ weaknesses: "Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better."
  • On how our brains work: "What’s inside the brain is these big vectors of neural activity."


The modern AI revolution began during an obscure research contest. It was 2012, the third year of the annual ImageNet competition, which challenged teams to build computer vision systems that would recognize 1,000 objects, from animals to landscapes to people.

In the first two years, the best teams had failed to reach even 75% accuracy. But in the third, a band of three researchers—a professor and his students—suddenly blew past this ceiling. They won the competition by a staggering 10.8 percentage points. That professor was Geoffrey Hinton, and the technique they used was called deep learning.

Hinton had actually been working with deep learning since the 1980s, but its effectiveness had been limited by a lack of data and computational power. His steadfast belief in the technique ultimately paid massive dividends. By the fourth year of the ImageNet competition, nearly every team was using deep learning and achieving miraculous accuracy gains. Soon enough deep learning was being applied to tasks beyond image recognition, and within a broad range of industries as well.

Last year, for his foundational contributions to the field, Hinton was awarded the Turing Award, alongside other AI pioneers Yann LeCun and Yoshua Bengio. On October 20, I spoke with him at MIT Technology Review’s annual EmTech MIT conference about the state of the field and where he thinks it should be headed next.
The following has been edited and condensed for clarity.

You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?
I do believe deep learning is going to be able to do everything, but I do think there’s going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It’s now used in almost all the very best natural-language processing. We’re going to need a bunch more breakthroughs like that.
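For readers who have not seen the mechanism Hinton is pointing to, here is a bare-bones NumPy sketch of the scaled dot-product attention at the heart of transformers: each word vector is rebuilt as a weighted mix of the other word vectors, which is how the model derives context-dependent representations of word meaning. It is an illustrative fragment, not a full transformer.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: mix value vectors by query-key similarity."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores, axis=-1)   # one attention distribution per query
    return weights @ values

# Three toy token embeddings attending to each other (single-head self-attention).
rng = np.random.default_rng(0)
tokens = rng.standard_normal((3, 4))            # 3 tokens, 4-dimensional vectors
print(attention(tokens, tokens, tokens).shape)  # (3, 4): one mixed vector per token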

And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it’s still tiny compared to the brain.

When you say scale, do you mean bigger neural networks, more data, or both?
Both. There’s a sort of discrepancy between what happens in computer science and what happens with people. People have a huge number of parameters compared with the amount of data they’re getting. Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better.

A lot of the people in the field believe that common sense is the next big capability to tackle. Do you agree?
I agree that that’s one of the very important things. I also think motor control is very important, and deep neural nets are now getting good at that. In particular, some recent work at Google has shown that you can do fine motor control and combine that with language, so that you can open a drawer and take out a block, and the system can tell you in natural language what it’s doing.

For things like GPT-3, which generates this wonderful text, it’s clear it must understand a lot to generate that text, but it’s not quite clear how much it understands. But if something opens the drawer and takes out a block and says, “I just opened a drawer and took out a block,” it’s hard to say it doesn’t understand what it’s doing.

The AI field has always looked to the human brain as its biggest source of inspiration, and different approaches to AI have stemmed from different theories in cognitive science. Do you believe the brain actually builds representations of the external world to understand it, or is that just a useful way of thinking about it?
A long time ago in cognitive science, there was a debate between two schools of thought. One was led by Stephen Kosslyn, and he believed that when you manipulate visual images in your mind, what you have is an array of pixels and you’re moving them around. The other school of thought was more in line with conventional AI. It said, “No, no, that’s nonsense. It’s hierarchical, structural descriptions. You have a symbolic structure in your mind, and that’s what you’re manipulating.”

I think they were both making the same mistake. Kosslyn thought we manipulated pixels because external images are made of pixels, and that’s a representation we understand. The symbol people thought we manipulated symbols because we also represent things in symbols, and that’s a representation we understand. I think that’s equally wrong. What’s inside the brain is these big vectors of neural activity.

There are some people who still believe that symbolic representation is one of the approaches for AI.
Absolutely. I have good friends like Hector Levesque, who really believes in the symbolic approach and has done great work in that. I disagree with him, but the symbolic approach is a perfectly reasonable thing to try. But my guess is in the end, we’ll realize that symbols just exist out there in the external world, and we do internal operations on big vectors.

What do you believe to be your most contrarian view on the future of AI?
Well, my problem is I have these contrarian views and then five years later, they’re mainstream. Most of my contrarian views from the 1980s are now kind of broadly accepted. It’s quite hard now to find people who disagree with them. So yeah, I’ve been sort of undermined in my contrarian views.


SOURCE: MIT Technology Review
TruckElectric
This AI has reconstructed actual Rachmaninov playing his own piano piece
2 March 2021, 15:44


By Rosie Pentreath



In 1919, Rachmaninov’s performance of his own devilish Prelude in C minor was punched into piano roll for posterity. Now you can see him play it. No, really.

In 1919, legendary Russian composer Sergei Rachmaninov recorded his own performance of his Prelude in C minor onto piano roll, so that it could be preserved for posterity.

The Prelude in question is a very difficult piece, and Rachmaninov was able to play its hardest passages up to speed, and with exhilarating energy.

And now, as well as hearing the performance through the historic piano roll, you can see it too (watch above).

Through clever artificial intelligence (AI), Canadian technology company Massive Technologies has produced piano-playing hands that allow you to all but see Rachmaninov in action.


The hands are shown in ‘first person’ – i.e. roughly where your own would be if you sat down at the keyboard to give the mighty Prelude a go – so it feels like you could even be Rachmaninov.

The technology that has allowed Rachmaninov’s performance to come to life is a type of machine learning that can internalise and replicate music virtually. It’s visualised as a piano player – or, more accurately, a piano player’s arms and hands – and has been devised as a learning tool for pianists keen to see a demo from their teacher.

For the Rachmaninov animation (watch above), the AI has extracted the notes from an audio recording of the original 1919 Rachmaninov piano roll, and generated the appropriate hand and body animation based on what it has ‘seen’ piano players do before.

And although the video shows just the arms and hands of the AI pianist, the programmers ‘attached’ a virtual camera to the virtual pianist’s head to ‘simulate eye gaze and anticipation’, according to the original YouTube post.

Rachmaninov was known for having very big hands and a prodigious technique at the piano, hence the hugely complicated and expansive melodies he wrote for the instrument.

SOURCE: Classic FM
 

Crissa

Then there’s continuous learning, where you learn new things without forgetting previous stuff. Today’s AI systems can’t do this.
This Hawkins guy or the interview is so ten years ago. Because this isn't true.

-Crissa