Don’t Handicap AI without Explicit Knowledge
Thursday September 9, 15:00 - 16:20 UTC
Knowledge representation, whether as expert system rules or using frames and a variety of logics, played a key role in capturing explicit knowledge during the heyday of AI in the past century. Such knowledge, combined with planning and reasoning, is part of what we refer to as Symbolic AI. The resurgent AI of this century, in the form of Statistical AI, has benefited from massive data and computing. On some tasks, deep learning methods have even exceeded human performance levels. This gave the false sense that data alone is enough and explicit knowledge is not needed. But as we start chasing machine intelligence that is comparable with human intelligence, there is an increasing realization that we cannot do without explicit knowledge. Neuroscience (the role of long-term memory, strong interactions between different specialized regions of the brain on tasks such as multimodal sensing), cognitive science (bottom brain versus top brain, perception versus cognition), brain-inspired computing, behavioral economics (System 1 versus System 2), and other disciplines point to the need for advancing AI toward neuro-symbolic AI (i.e., a hybrid of Statistical AI and Symbolic AI, also referred to as the third wave of AI). As we make this progress, the role of explicit knowledge becomes more evident. I will specifically look at our endeavor to support human-like intelligence, our desire for AI systems to interact with humans naturally, and our need to explain the path and reasons for AI systems’ workings. Nevertheless, the knowledge needed to support understanding and intelligence is varied and complex. Using the example of progressing from NLP to NLU, I will demonstrate the dimensions of explicit knowledge, which may include linguistic (language syntax), common sense, general (world model), specialized (e.g., geographic), and domain-specific (e.g., mental health) knowledge.
I will also argue that despite this complexity, such knowledge can be scalably created and maintained (even dynamically or continually). Finally, I will describe our work on knowledge-infused learning as an example strategy for fusing statistical and symbolic AI in a variety of ways.
Prof. Amit Sheth (Home Page, LinkedIn) is an educator, researcher, and entrepreneur. He is the founding director of the AI Institute (#AIISC) at the University of South Carolina. Current areas of his research include knowledge-infused learning and explainable AI, and applications to personalized and public health, social good and preventing social harm, future manufacturing, and disaster management. He is a fellow of the IEEE, AAAI, AAAS, and ACM. His awards include the IEEE TCSVC Research Innovation Award, University Trustee Award, 10-year award (Intl Semantic Web Conf), OSU Franklin College Alumni award, and Ohio Faculty Commercialization Award (runner-up). For several years through 2018, he was listed among the top 100 most cited computer scientists. Three of the four companies he has (co)founded involve licensing his university research outcomes, including the first Semantic Web company in 1999, which pioneered technology similar to what is found today in Google Semantic Search and Knowledge Graph, and the fourth company (http://cognovilabs.com), at the intersection of emotion and AI.