This segment from Naval's 2/19 podcast, discussing the definition and meaning of intelligence, is so well put that I think it's worth reading in the original. Here is the transcript:
What is the definition of intelligence? There’s the G factor, which predicts a lot of human outcomes, but the best evidence for the G factor is its predictive power. It’s that you measure this one thing and then you see people get much better life outcomes along the way in things that seem even somewhat unrelated to G.
So I would argue, and I think it’s one of my more popular tweets: the only true test of intelligence is if you get what you want out of life.
This triggers a lot of people because they go to school, they get their master’s degrees, they think they’re super smart. And then they don’t have great lives. They aren’t super happy, or they have relationship problems, or they don’t make the money that they want, or they become unhealthy and this sort of triggers them.
But that really is the purpose of intelligence: for you as a biological creature to get what you want out of life.
Whether it’s a good relationship or a mate, or money or success or wealth or health or whatever it is. So there are people who I think are quite intelligent because you can tell they have high-quality, functioning lives and minds and bodies, and they’ve just managed to navigate themselves into that situation.
It doesn’t matter what your starting point is, because the world is so large now, and you can navigate it in so many different ways that every little choice you make compounds and demonstrates your ability to understand how the world works until you finally get to the place that you want.
Now the interesting thing about this definition—that the only true test of intelligence is if you get what you want out of life—is that an AI fails it instantly, because an AI doesn’t want anything out of life.
The AI doesn’t even have a life, let alone want anything out of one. An AI’s desires are programmed by the human controlling it.
But let’s give it that for a second. Let’s say the human wants something and programs the AI to go get it; then the AI is acting as a proxy for the human and the intelligence of the AI can be measured as: did it get that person that thing?
Most of the things that we want in life are adversarial or zero-sum games.
So, for example, if you want to seduce a girl or get a husband, you’re competing with all the other people who are out there seducing girls or trying to get husbands. So now you’re in a competitive situation. The AI has to outmaneuver the other people.
Or if you say, “Hey, AI, go trade on the stock market for me and make me a bunch of money.” That AI is trading against other humans and other trading bots. It’s an adversarial situation. It has to outmaneuver them.
Or if you say, “Hey, AI, make me famous. Write me incredible tweets. Write me great blog posts. Record me great podcasts in my own voice and make me famous,” now it’s competing against all the other AIs.
So in that sense, intelligence is measured on a battlefield, in an arena. It’s a relative construct. I think the AIs are mostly going to fail in those regards. And to the extent that they do succeed, because they’re freely available, their edge will get competed away, and the alpha that remains will be entirely human.