Dan Dennett explained that the intentional stance began as a survival mechanism. It's important to predict how someone else is going to behave. That tiger might be a threat, that person from the next village might have something to offer.
If we simply wait and see, we might encounter an unwelcome or even fatal surprise. The shortcut that the intentional stance offers us is, "if I were them, I might have this in mind." Assuming intent doesn't always work, but it works often enough that all humans embrace it.
There's the physical stance (a rock headed toward a window is probably going to break it) and the design stance (this ATM is supposed to dispense money, let's look for the slot). But the most useful and now problematic shortcut is imagining that others are imagining.
There used to be a chicken in an arcade in New York that played tic-tac-toe. The best way to engage with the chicken game was to imagine that the chicken had goals and strategies, and that he was "hoping" you would go there, not there.
Of course, chickens don't do any hoping, any more than a chess computer is trying to lure you into a trap when it sets up an en passant capture. But we take the stance because it's useful. It's not an accurate portrayal of the physical entity's internal state, but it might be a useful way to make predictions.
There's a certain sort of empathy here, extending ourselves to another entity and imagining that it has intent. But there's also a lack of empathy, because we assume that the entity is just like us… but also a chicken.
The challenge kicks in when our predictions of agency and intent don't match up with what happens next.
AI certainly seems to have earned both a design stance and an intentional stance from us. Even AI researchers treat their interactions with a working LLM as if they're talking to a real person; perhaps a slightly unbalanced one, but a person nonetheless.
The intentional stance brings rights and responsibilities, though. We don't treat infants as though they want things the way an adult might, which makes it easier to live with their crying. Successful dog trainers don't imagine that dogs are humans with four legs; they boil down behavior to inputs and outputs, and use operant conditioning, not reasoning, to change it.
Every day, millions of people are joining the early adopters who are giving AI systems the benefit of the doubt: a stance of intent and agency. But it's an illusion, and the AI isn't ready for rights and can't take responsibility.
The collision between what we believe and what will happen is going to be significant, and we're not even sure how to talk about it.
The intentional stance is often useful, but it's not always accurate. When it stops being useful, we need a different model for understanding what we're dealing with and what to expect from it.
