GPT and other-minds cognition
GPT is already at the point where the easiest way to reason about it is to use our other-minds cognition.
Sure, I know that it’s actually doing linear algebra in a high-dimensional space. I sometimes make use of this knowledge, e.g. when I’m trying to construct a test case where it will do something very different from a person.
But in the majority of cases … the easiest approach is to ask: if the character that GPT is currently simulating were a person, what would they be feeling? That is, assume that its emulation of human beings is reasonably accurate, so you can use your existing methods for dealing with people.