Namit Arora considers the complexity of consciousness and its implications for artificial intelligence.
A conceptual advance for AI came when some researchers recognized that a computer’s model of the world was only a representation, not the world itself. The human ‘model’ of the world, by contrast, was the world itself, not a static description of it. What if a robot, too, used the world as its model, “continually referring to its sensors rather than to an internal world model”? (Hubert L. Dreyfus, What Computers Still Can’t Do). However, this approach worked only in micro-environments with a limited set of features that the robot’s sensors could recognize. The robots did nothing more sophisticated than what ants do. As before, no one knew how to make the robots learn, or respond to a change in context or significance. This was the backdrop against which AI researchers began turning away from symbolic AI to simulated neural networks, with their promise of self-learning and of establishing relevance. Slowly but surely, the AI community began embracing Heideggerian insights about consciousness.