Conceptualization as a Basis for Cognition — Human and Machine




A missing link to machine understanding and Cognitive AI


While most contemporary discussions and classifications of AI capabilities center around what a system can do, I believe the path to higher intelligence and machine cognition relies on what a system can know and understand. Using rich AI knowledge representation frameworks and comprehensive models of the world can increase an AI system’s ability to transform information into deep knowledge, understanding, and functionality. To pursue this path to better AI, it is essential to understand what “understanding” really means for the human brain. Doing so allows for implementing frameworks that enable machine learning to parallel human understanding by integrating modeling and conceptualization with data and task generalization.

Conceptualization: The Basis for Human Thought

“Concepts” are the most basic building blocks in human thinking. Concepts serve as ontological roots for the objects we think about. A concept represents a persistent set of essential attributes of an object class, which can change and expand with experience. Existing concepts can be abstracted or linked through analogy to additional domains and object classes. Examples of concepts include |dog|, |democracy|, |white|, and |uncle|. Physical or mental objects can be stored as concepts and accrue more data and attributes over time (e.g., |my dog Lucky|, and |snow white| versus |off-white|). Even if the referent is invisible or abstract, like |love|, it can still be stored as a concept. Our understanding of the world relies on concepts, their attributes, and the relationships between them. We use concepts, and facts composed of concepts and the relations between them, to construct our world model.

Concepts are unique in that they can contain any type of information available to an agent and be formed without prior knowledge. Imagine walking into a classroom, and the teacher says, “today, we will learn about the Quetzal.” A placeholder for a new concept is already formed in the mind of the students without any information other than a name. “It is a small tropical bird,” continues the teacher. A wealth of probable information is now added to the concept — it is probably very colorful, likely lives in forests, and makes interesting sounds. Concepts have elasticity and persistence, and there are boundaries to elasticity that don’t permit the concept to change beyond recognition.
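The placeholder-then-enrich behavior of the Quetzal example can be sketched as a minimal data structure. The `Concept` class and its attributes below are illustrative assumptions, not a proposed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A minimal placeholder concept: a name plus an open-ended set of attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

    def learn(self, **facts):
        """Absorb new (possibly probabilistic) attributes without replacing the concept."""
        self.attributes.update(facts)

# A placeholder concept is formed from nothing but a name...
quetzal = Concept("Quetzal")

# ...then fleshed out as information arrives.
quetzal.learn(kind="bird", habitat="tropical forest", size="small")
quetzal.learn(colorful=True)  # probable attribute inferred from "tropical bird"
```

The same object persists throughout: new facts stretch the concept's attributes without replacing its identity, a toy analogue of the elasticity and persistence described above.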

Concepts exist in relation to but independently of the language used to describe them. For example, the word “dog” is an attribute (sometimes called a tag) of the concept |dog|. Even if you use a different word to describe the fluffy barky thing in your neighbor’s yard that keeps you awake, it’s still a |dog|.

As another example, the concept |banana| can start with an abstract characteristic of the various images that represent a banana — green or ripe, whole or sliced, by itself or interacting with other objects. The concept |banana| can also include history and stories (banana leaf skirts, banana boats), values (nutrition), its maturation over a few days of sitting on the dinner table, or its connotations (goes bananas). The concept |banana| is represented by a name in most languages on earth, but it is not any of those names in particular.

In the proposed model of six dimensions of knowledge, a concept is captured as an abstract concept-reference (the 6th dimension) that links all the instantiations and references of the concept. It refers to multimodal aspects such as visual representations, the smell of the ripe fruit, the feel of peeling it (for systems with haptic capabilities), the sound and taste of biting into it, and more. It will appear in multiple dimensions of knowledge: descriptive, including taxonomies and facts; procedural, including how to peel and eat it; stories about its heritage; a value associated with having or losing it; etc.

Figure 1. Concept of a banana.

The process of creating concepts — conceptualization — occurs in one of three ways:

· Instructor-guided. For humans, this involves learning from a teacher — piano instructor, biology professor, or football coach, for example. In machine-learning terms, this would be supervised learning with labeled information or direct input from the system designer.

· Self-guided. Humans practice passive conceptualization by observing well-formed concepts and internalizing them. Examples include reading a book that introduces and describes new terms, or watching two people play volleyball and deducing the rules and objectives through observation. For a machine-learning system, this would mean taking in broadly available text, or viewing video in which a concept is described and named, in order to populate the conceptual data structure.

· Internal. These are concepts that are formed by analyzing your current situation, either implicitly or explicitly. For example, a rock climber assesses which paths are navigable and which are too difficult. From this assessment, the concept of |traversability| is formed (see example in Figure 2). Similarly, a scientist carefully observes to distill experience into a theoretical, conceptual framework. For a machine-learning system, this would be self-discovery (identifying an abstraction that can effectively model the world as the AI system sees it).

Figure 2. Abstraction and analogy allow concepts to be re-applied in new domains.

There are many, often conflicting, definitions and theories about what it means to conceptualize. For future AI systems, the following definition of conceptualization can be offered: The ability to abstract and evolve rich concept constructs within a world-view knowledge framework to facilitate broad deduction and generate new knowledge and skills.

Generalization is a Necessary but Insufficient Attribute of Cognitive AI

It can be said that today’s deep learning ignores concepts and treats generalization as the ultimate goal of AI. This approach risks producing machine learning whose cognitive capacities are limited in scope. To reach the goal of creating machines with higher intelligence, machine-learning systems must learn to conceptualize.

To substantiate this claim, let’s first examine what generalization means specifically in the context of artificial intelligence and machine learning (as opposed to the layman’s use of the term), and then explore how it differs from conceptualization.

An overview of generalization: In machine learning, generalization refers to the capability of a trained model to classify or forecast unseen data. A well-generalized model performs well on unseen data drawn from the same distribution as its training data.

Goodfellow, Bengio, and Courville discuss the concepts of overfitting and underfitting. They identify the challenge of performing well on new, previously unseen inputs as central to the generalization problem. Adding new dimensions or abstractions is out of scope within this view of generalization. For example, a machine-learning model can learn to correctly classify prime vs. non-prime numbers, yet it is unlikely to arrive at an abstract definition of a prime number similar to that of a human mathematician.

Figure 3. A well-generalized algorithm learns a manifold that fits both training and validation data.
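The generalization gap can be illustrated with a toy experiment: fit a simple model on part of the data and compare its error on the training set with its error on held-out, unseen data. The synthetic task and split below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: learn y = 2x + 1 from noisy samples.
x = rng.uniform(0, 10, 200)
y = 2 * x + 1 + rng.normal(0, 0.5, 200)

# Hold out unseen data to estimate how well the model generalizes.
x_train, x_test = x[:150], x[150:]
y_train, y_test = y[:150], y[150:]

# Fit a degree-1 model on the training set only.
w = np.polyfit(x_train, y_train, deg=1)

def mse(xs, ys):
    """Mean squared error of the fitted model on (xs, ys)."""
    return float(np.mean((np.polyval(w, xs) - ys) ** 2))

train_err, test_err = mse(x_train, y_train), mse(x_test, y_test)
# A well-generalized model shows a small gap between training and test error;
# a large gap would indicate overfitting.
```

Here both errors stay close to the noise floor, which is what "working for unseen data" means in practice.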

Two important aspects of generalization are interpolation and extrapolation. Interpolation, in mathematics, is a type of estimation: a method of constructing new data points within the range of a discrete set of known data points. Extrapolation is a type of estimation of the value of a variable beyond the original observation range (the training set), based on its relationship with another variable. In machine learning, extrapolation describes a system trained on a specific range of data that can then make predictions over a different range.

Figure 4. Extrapolation and interpolation are important aspects of generalization.
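A small sketch makes the contrast concrete: a polynomial fitted to sin(x) on [0, 2π] interpolates well inside that range but fails badly outside it. The polynomial degree and the test points below are arbitrary choices for illustration:

```python
import numpy as np

# Fit a degree-7 polynomial to sin(x) using samples drawn only from [0, 2π].
x_train = np.linspace(0, 2 * np.pi, 100)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=7)

# Interpolation: predicting at a new point *inside* the training range.
interp_err = abs(np.polyval(coeffs, np.pi / 3) - np.sin(np.pi / 3))

# Extrapolation: predicting far *outside* the training range.
extrap_err = abs(np.polyval(coeffs, 3 * np.pi) - np.sin(3 * np.pi))
# The fit is accurate inside the training range and diverges outside it.
```

This is the sense in which extrapolation is the harder of the two: nothing in the training data constrains the model beyond the observed range.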

Francois Chollet extends the notion of generalization from data to tasks. In his view, the intelligence of a system is measured by its ability to acquire skills over tasks based on their generalization difficulty:

The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.

In his model, Chollet represents all levels of generalization in terms of the ability to perform tasks.

Unlike generalization, concepts are not necessarily directly related to tasks. A human can form a concept without associating it with a task. Thus, latent aspects of intelligence can be represented by internal concepts that are applicable to tasks that are not currently known. Concepts cannot necessarily be viewed solely through behavior demonstrated in a task.

Today’s AI cannot be said to perform conceptualization: Firstly, it is important to note that conceptualization is not the same as classification. In machine learning, classification refers to a predictive modeling problem where a class label is predicted for a given example of input data. In contrast, a concept is a rich, multifaceted set of related knowledge that can continuously expand.
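The contrast can be sketched as follows: a classifier maps an input to one label from a set that is closed at training time, while a concept is an open-ended container that keeps absorbing new dimensions of knowledge. All names and structures below are hypothetical:

```python
# A trained classifier predicts one label from a set fixed at training time.
FIXED_LABELS = ["banana", "apple", "orange"]

def classify(features):
    # Stand-in for a trained model: the output is always one of the fixed labels.
    return FIXED_LABELS[0]

# A concept, by contrast, is an open container that keeps absorbing knowledge
# across dimensions (descriptive, procedural, cultural, value, ...).
banana_concept = {"name": "banana"}
banana_concept["taxonomy"] = {"kingdom": "Plantae", "genus": "Musa"}   # descriptive
banana_concept["procedure"] = ["peel from the stem", "slice", "eat"]   # procedural
banana_concept["connotation"] = "goes bananas"                         # cultural
banana_concept["value"] = "nutrition"                                  # value
# New dimensions can be added indefinitely, with no retraining step.
```

The classifier's output space cannot grow without retraining; the concept's knowledge can grow at any time.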

The following criteria define a paradigm of concept formation according to which current machine-learning algorithms do not conceptualize:

· Capacity and diversity. A concept is defined primarily by its essence and then further embellished by its particulars. A concept is not inherently bounded to a particular set of descriptors or values and can accrue almost unlimited dimensions — think of it as a sponge that absorbs relevant knowledge over time and experiences. For example, biology students signing up for their first class on epigenetics may know nothing about the field beyond vaguely recognizing that it sounds similar to “genetics.” As time goes on, the once very sparse concept will become a lot more multifaceted as the students learn about prions, nucleosome positioning, effects of diabetes on macrophage behavior, antibiotics altering glutamate receptor activity, and so on. This example contrasts with deep learning, where a token or object has a fixed number of dimensions.

· Persistence. A concept remains the same even if some or most of its attributes change. In contrast, the embedding of a token or object in machine learning is defined by the object’s dimensions (properties). If the properties change through additional training, the latent-space embedding changes, and potentially the distances to other embedding vectors change as well. In concept-driven knowledge representation, properties can change substantially without changing the inherent concept as it is reflected in the knowledge base (e.g., its position in ontologies or its association with its history). For example, a lawyer can sell his Ferrari and become a monk, changing most of his external attributes, yet continuing to be the same individual. When mapped to a feature-based embedding space, a concept might move due to changes in the features of the dimensions observed by an AI system. However, the essence that makes it a particular concept most likely will not change.

· Abstraction. This includes the ability to provide an abstracted organization of information, and its implications, that can be applied to completely different domains, unrelated to the data domain from which it emerged. The concept of |traversability|, introduced earlier in this blog, could have been learned during a rock-climbing experience. Still, the abstraction and attributes of this concept allow it to be applied in a substantially different domain or space, such as playing a game of Risk or thinking about reaching the person in IT who can finally fix your laptop. This abstraction to a much higher level than the fitting function of deep learning, together with the power to identify analogy and conceptual similarity across different spaces, is a major differentiator of “concept” that is not enabled by deep-learning generalization practices.
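The persistence property above can be sketched in code: the concept's identity is fixed at creation, while a feature-derived embedding moves whenever the attributes change. The identity scheme and the toy embedding below are illustrative assumptions:

```python
import hashlib

class PersistentConcept:
    """Identity is fixed at creation; attributes (and any derived embedding) may drift."""

    def __init__(self, name):
        # A stable identifier that never changes, regardless of attribute updates.
        self.concept_id = hashlib.sha1(name.encode()).hexdigest()[:8]
        self.attributes = {}

    def embedding(self):
        # Toy stand-in for a feature embedding: derived from the current attributes,
        # so it "moves" whenever the attributes change.
        return tuple(sorted(self.attributes.items()))

person = PersistentConcept("the lawyer next door")
person.attributes = {"profession": "lawyer", "car": "Ferrari"}
before = (person.concept_id, person.embedding())

# Most external attributes change (the lawyer sells his Ferrari and becomes a monk)...
person.attributes = {"profession": "monk", "car": None}
after = (person.concept_id, person.embedding())
# ...the embedding moved, but the concept's identity did not.
```

A purely feature-based representation has only the moving part; the stable `concept_id` is what the persistence criterion adds.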

Conclusion

A system’s ability to absorb data, abstract it, expand its concepts, and enhance its internal modeling of the world and its reasoning capabilities is a leading measure of intelligence; measured performance on learned tasks is a lagging measure. An approach that integrates underlying modeling and knowledge representation (including conceptualization) with data and task generalization will likely offer a better path to higher machine intelligence overall.

References

Singer, G. (2021a, April 6). The Rise of Cognitive AI — Towards Data Science. Medium. https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc

Singer, G. (2021, May 6). Understanding of and by Deep Knowledge — Towards Data Science. Medium. https://towardsdatascience.com/understanding-of-and-by-deep-knowledge-aac5ede75169

Wikipedia contributors. (2020, December 9). Conceptualization (information science). Wikipedia. https://en.wikipedia.org/wiki/Conceptualization_(information_science)

Murphy, G. L. (2004). The Big Book of Concepts. Bradford Book.

What is Generalization in Machine Learning? (2021, February 25). DeepAI.Space. https://deepai.space/what-is-generalization-in-machine-learning/

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.

Chollet, F. (2019). On the Measure of Intelligence. ArXiv, abs/1911.01547.

Wikipedia contributors. (2021, August 31). Interpolation. Wikipedia. https://en.wikipedia.org/wiki/Interpolation

Wikipedia contributors. (2021a, August 19). Extrapolation. Wikipedia. https://en.wikipedia.org/wiki/Extrapolation

Brownlee, J. (2020, August 19). 4 Types of Classification Tasks in Machine Learning. Machine Learning Mastery. https://machinelearningmastery.com/types-of-classification-in-machine-learning/

Ye, A. (2020, June 26). Real Artificial Intelligence: Understanding Extrapolation vs Generalization. Medium. https://towardsdatascience.com/real-artificial-intelligence-understanding-extrapolation-vs-generalization-b8e8dcf5fd4b

Sharma, R. (1999). The Monk Who Sold His Ferrari. HarperCollins.

Gadi Singer is Vice President at Intel Labs and Director of Cognitive Computing Research.
