Thinking, Fast and Slow – Daniel Kahneman on the Artificial Intelligence Podcast, Hosted by Lex Fridman

Check out The Artificial Intelligence Podcast Episode Page & Show Notes

Key Takeaways

  • System 1 thinking is fast, relies on learned behavior, and is mostly emotional; system 2 thinking is slow, deliberate, and more logical
  • “In most cases, your reasons have very little to do with why you believe what you believe. The reasons are a story that come to your mind when you need to explain yourself.” – Daniel Kahneman
  • We’re still very far from true artificial general intelligence (AGI)
  • Our decisions and actions are governed by our memories of an experience, not the experience itself
  • Once people form an opinion, they’re incredibly reluctant to change it
  • “We have the opinions we have not because we know why we have them, but because we trust some people and not others. It’s much less about evidence than it is about stories.” – Daniel Kahneman

Books Mentioned

  • Daniel and Lex discuss Man’s Search for Meaning by Viktor Frankl, particularly Viktor’s thesis that identifying a positive purpose in life can save one from suffering

Intro

System 1 and System 2 Thinking

  • This is described more in Daniel’s book
  • System 1 thinking is fast, relies on learned behavior, and is mostly emotional (an idea that comes to mind very quickly)
    • An easy example: When answering the question, “What’s 2+2?” you’d use system 1 thinking 
    • System 1 thinking is mostly effortless
    • When utilizing system 1 thinking, you’re largely matching a new experience against a previously discovered pattern
  • System 2 thinking is slow, deliberate, and more logical (an idea that requires deep thought)
    • For example, when answering the question, “What’s 27×14?” you’d use system 2 thinking
    • With system 2 thinking, there’s some serious effort involved

System 1 & 2 Thinking Applied to Deep Learning

  • “What’s happening in deep learning today is more like the system 1 product than the system 2 product. Deep learning matches patterns and anticipates what’s going to happen, so it’s highly predictive. What deep learning doesn’t have—and many people think this is critical—is the ability to reason, so there’s no system 2 there.” – Daniel Kahneman
  • How far can deep learning get with just system 1 advances?
    • “It’s very clear that DeepMind has gone way beyond what people thought was possible. I think what’s surprised me most about the developments in AI is the speed – things, at least in the context of deep learning, moved a lot faster than anticipated. The transition from solving chess to solving Go – that’s bewildering how quickly it went.” – Daniel Kahneman
      • That said, we may be approaching the limits, causing progress to begin slowing

Autonomous Driving

  • In the realm of autonomous vehicles, how difficult is it to create a system that can model human beings well enough to predict whether a pedestrian will cross the road or not?
    • “I’m fairly optimistic about that. What we’re talking about is a huge amount of information every vehicle has that feeds into one gigantic system. Anything that any vehicle learns becomes part of what the whole system knows. When a system multiplies like that, there’s a lot you can do. Human beings are very complicated, so the system will make mistakes, but humans also make mistakes.” – Daniel Kahneman
      • This is similar to the decisions autonomous vehicles make when driving through a roundabout

Can robots and humans collaborate successfully?

  • For example, can system 1 neural networks “borrow” humans for system 2-type tasks?
    • “In any system where humans and the machines interact, the human would be superfluous within a fairly short time.” – Daniel Kahneman

On Explaining Reasoning

  • “In the judicial system, you have systems that are clearly better at predicting parole violations than judges, but they can’t explain their reasoning, so people don’t want to trust them.” – Daniel Kahneman
  • 🎧 But then again, humans can’t explain their reasoning either, although they think they can
    • “In most cases, your reasons have very little to do with why you believe what you believe. The reasons are a story that come to your mind when you need to explain yourself.” – Daniel Kahneman
      • So, extrapolating this to AI: AI doesn’t really need to explain itself, it just has to be able to tell a convincing story

The Experiencing Self vs. the Remembering Self

  • The “experiencing self” is the “you” that goes about life, experiencing the day-to-day
    • (The self that lives)
  • Occasionally, you form a memory/story about a past experience – this is the “remembering self”
    • (The self that evaluates life)
    • A point to think on: In our memories, time doesn’t matter in the slightest
  • Here’s why the above matters: Our decisions and actions are governed by our memories of an experience, not the experience itself
  • 🎧 A question to ponder: Suppose you’re planning a vacation, but you’re told that at the vacation’s conclusion, you’ll be given an amnesic drug that wipes away all memories of the experience. Your photos would also be destroyed. Would you still take the vacation?
    • The research indicates most people wouldn’t
      • “It turns out, we go on vacations, in large part, to construct memories, not to have experiences.” – Daniel Kahneman
        • (People primarily value the story they tell about an experience over the experience itself)

What makes for a happy life?

  • “I abandoned happiness research because I couldn’t solve that problem.” – Daniel Kahneman
  • “One could imagine a life in which people don’t score themselves. It feels as if that would be a better life.” – Daniel Kahneman
    • By “scoring,” Daniel is referring to the constant asking of, “How am I doing?”

Man’s Search for Meaning by Viktor Frankl

  • In the book, Viktor theorizes that identifying a positive purpose in life can save one from suffering – does Daniel agree?
    • “Not really. I can see that someone who has that feeling of purpose, meaning, and so on could be sustained by it, but I, in general, don’t have that feeling. I’m pretty sure that if I were in a concentration camp, I’d give up and die.” – Daniel Kahneman
      • Further: “I’m not sure how essential to survival a sense of purpose is, but I do know, when I think about myself, I would have given up.”
      • Also: We have no idea if people like Viktor, who survived a concentration camp, actually survived because they discovered a sense of purpose (it could just be the story they’re telling)

🎧 How to Change Your Mind

  • “One of the things that’s interesting is how difficult it is for people to change their minds. Essentially, once they’re committed, people just don’t change their mind about anything that matters. That’s surprising, but it’s true even with scientists… On things that matter, political or religious, people just don’t change their mind, by and large, and there’s very little you can do about it.” – Daniel Kahneman
    • That said, if a particular political or religious leader changed their mind on a relevant topic (like climate change), that would have a significant effect – why?
      • “We have the opinions we have not because we know why we have them, but because we trust some people and not others. It’s much less about evidence than it is about stories.”

What’s the meaning of life?

  • “There’s no answer that I can understand, and I’m not actively looking for one.” – Daniel Kahneman
    • Does an answer exist? – “No”

Additional Notes

  • “I’m too old for social networks. I’ve never seen Instagram… I have no idea what Instagram does.” – Daniel Kahneman
  • “There’s a thing I’ve written about called the focusing illusion – that is, when you think about something, it looks very important, more important than it really is.” – Daniel Kahneman
  • Daniel thinks we’re still quite far from seeing a true artificial general intelligence (AGI)
    • “We’re quite far from something that can, in every domain, think like a human, except better.” – Daniel Kahneman
