Higher Seminar in Theoretical Philosophy: Asger Kirkeby-Hinrup "What non-conscious AI can tell us about human consciousness" (joint work with Jakob Stenseke)
Lost amid expectations of possible future AI consciousness is the fact that we still do not understand consciousness in humans. There is no objective empirical way of measuring the presence or absence of subjective experience in humans. Likewise, we have no theoretical way to determine the presence or absence of subjective experience in humans, because our theories make different predictions. Yet the prevalent approach has been to use human consciousness as the starting point for measuring consciousness in AI. But how do we justify what we are measuring against? If we do not know how to measure consciousness in humans, how can we know what to look for in AI? There is no solid foundation for this direction of inference. Here, I suggest a different approach: to pursue what the data allow us to infer rather than focusing on what we want to infer. Our most solid foundation is the widely shared assumption that AI is not yet conscious. This allows inference in the other direction and can tell us something about human consciousness. By evaluating AI properties and abilities together with the assumption that AI is not conscious, we can identify which properties and abilities are insufficient for consciousness (call this approach Insufficiency Inference, or I-I). This way of arguing is nothing new in itself. However, our unique historical situation with respect to (the possibility of) AI consciousness in the near future provides ideal conditions under which an approach that systematically monitors and rules out abilities and properties may be highly informative. The talk will explore whether I-I is a viable approach by considering whether it is A) possible, B) applicable, and C) relevant.