AI Methods: Taxonomy and Classifications

Artificial Intelligence (AI) methods can be categorized according to different features: their learning style, functionality, underlying techniques, and source of knowledge. This article provides a comprehensive overview of these approaches to classifying AI methods.

A video summary of this article is provided at: https://youtu.be/sC7KdAi8qo0


I. AI Methods classification based on Learning Style

This classification groups AI methods by how they learn from data, that is, by the type of supervision or feedback they receive during the learning process. It answers the question: "What kind of data or environment does the AI use to learn?"

Briefly, this classification means understanding how an AI algorithm learns from its environment or data:

·        Supervised: Learns from examples with answers.

·        Unsupervised: Learns from unlabeled data.

·        Semi-Supervised: Learns with limited labeled data.

·        Reinforcement: Learns through interaction and feedback.


Figure 1. AI Methods classification based on Learning Style



I.1. Supervised Learning

The algorithm learns from labeled training data (i.e., data consisting of input-output pairs), where each training example includes both the input and the correct output (i.e., a "supervisor" provides the answers). Use cases include email spam detection (spam vs. not spam), image classification (cat vs. dog), and predicting house prices.

It aims to predict outcomes for new, unseen data. Examples include the following:

    • Linear Regression (for prediction)
    • Logistic Regression (for classification)
    • Support Vector Machines (SVM)
    • Decision Trees
    • Neural Networks (when used with labeled data)
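As a concrete illustration, here is a minimal supervised-learning sketch (it assumes scikit-learn is available and uses a synthetic dataset): the model is trained on labeled input-output pairs and then evaluated on unseen data.

```python
# Minimal supervised learning sketch: learn from labeled examples, predict on new data.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic labeled data: 200 samples, 4 features, 2 classes.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                            # learn from labeled examples
print("Test accuracy:", model.score(X_test, y_test))   # evaluate on unseen data
```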

I.2. Unsupervised Learning

The algorithm works with unlabeled data and identifies patterns or structures.

It aims to discover hidden patterns, groupings, or features (e.g., group similar items or reduce dimensionality). Use cases include customer segmentation, topic modeling of documents, and anomaly detection. Examples of methods include the following:

    • K-Means Clustering
    • Principal Component Analysis (PCA)
    • Autoencoders
    • DBSCAN
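A minimal unsupervised-learning sketch (assuming scikit-learn and NumPy): K-Means receives only unlabeled points and groups them into clusters by similarity.

```python
# Minimal unsupervised learning sketch: cluster unlabeled 2-D points with K-Means.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled points drawn around two different centres.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster labels (first 10 points):", kmeans.labels_[:10])
print("Cluster centres:\n", kmeans.cluster_centers_)
```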

I.3. Semi-Supervised Learning

Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data, allowing the model to generalize better than it would from the labeled data alone.

It aims to improve learning accuracy without the need for extensive labeling, which is costly to obtain. Use cases include improving a text classifier with a few labeled examples and many unlabeled documents, and medical diagnosis systems with limited annotated data.

Examples of methods include the following:

    • Semi-supervised SVM
    • Label propagation algorithms
    • Self-training
    • Graph-based methods
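The self-training idea can be sketched in a few lines (an illustrative loop assuming scikit-learn; the dataset and the 0.95 confidence threshold are arbitrary choices): a classifier trained on the few labeled points pseudo-labels the unlabeled points it is most confident about, then retrains.

```python
# Minimal self-training sketch: grow the labeled set with confident pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=5, random_state=1)
labeled = np.zeros(len(y), dtype=bool)
labeled[:30] = True                      # only 10% of the points start labeled

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
for _ in range(5):                       # a few self-training rounds
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95 # keep only very confident predictions
    if not confident.any():
        break
    idx = np.where(~labeled)[0][confident]
    y[idx] = proba[confident].argmax(axis=1)   # pseudo-label (in practice, keep true labels aside)
    labeled[idx] = True
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

print("Points used for training after self-training:", labeled.sum())
```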

I.4. Reinforcement Learning

The algorithm learns through interactions with an environment, receiving rewards or penalties.

It aims to learn a policy for decision-making that maximizes cumulative rewards. Use cases include training an AI to play games (like Chess or Go), robotics (navigating a maze or walking), and self-driving cars.

Examples include the following:

    • Q-Learning
    • Deep Q-Networks (DQNs)
    • Policy Gradient Methods
    • Proximal Policy Optimization (PPO) 
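A minimal tabular Q-learning sketch on a hypothetical 5-state corridor environment (defined here only for illustration): the agent receives a reward of +1 when it reaches the rightmost state and gradually learns which action to prefer in each state.

```python
# Minimal tabular Q-learning sketch on a toy 5-state corridor (illustrative environment).
import random

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != 4:                   # state 4 is the goal
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("Learned Q-values (the 'right' action should dominate in every state):")
for s, row in enumerate(Q):
    print(s, [round(v, 2) for v in row])
```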

Table 1. AI Methods classification based on Learning Style

| Learning Style  | Supervision    | Data Type                       | Goal                                | Examples                   |
|-----------------|----------------|---------------------------------|-------------------------------------|----------------------------|
| Supervised      | Labeled data   | Input + output pairs            | Predict labels or values            | Classification, regression |
| Unsupervised    | No labels      | Raw inputs                      | Find patterns or structure          | Clustering, topic modeling |
| Semi-Supervised | Some labels    | Small labeled + large unlabeled | Improve learning with less labeling | Text/image classification  |
| Reinforcement   | Reward signals | States + actions                | Learn optimal behavior              | Game playing, robotics     |

II. AI Methods classification based on Functionality

This classification groups the AI methods according to the type of task they are designed to perform (i.e., what the algorithm does), regardless of how it works internally. In simple terms, it looks at the AI algorithm's purpose or output.

This classification helps to understand what problem the algorithm is solving (but not how it solves it). It’s focused on the task or goal: classifying things, predicting values, grouping items, detecting anomalies, etc.

Figure 2. AI Methods classification based on Functionality


II.1. Classification Algorithms

These algorithms aim to assign inputs to predefined labels. Inputs are features or observations; outputs are class labels (e.g., spam or not spam). Use cases include email filtering, medical diagnosis, and sentiment analysis. Examples of methods include the following:

    • Logistic Regression
    • k-Nearest Neighbors (KNN)
    • Random Forest
    • Naive Bayes

II.2. Regression Algorithms

These algorithms aim to predict continuous numeric values. Inputs are features or independent variables; the output is a real number (e.g., a price or a temperature). Use cases include stock price prediction, real estate pricing, and sales forecasting. Examples of methods include the following:

    • Linear Regression
    • Ridge and Lasso Regression
    • Support Vector Regression (SVR)
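A minimal regression sketch in plain NumPy: an ordinary least-squares fit of a line to noisy synthetic data.

```python
# Minimal regression sketch: fit y ≈ w*x + b by ordinary least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)       # noisy linear relationship

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution

print(f"Estimated slope {w:.2f}, intercept {b:.2f}")
print("Prediction for x = 7:", w * 7 + b)
```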

II.3. Clustering Algorithms

These algorithms aim to group data by similarity without predefined labels. Inputs are raw, unlabeled data; outputs are clusters of similar items. Use cases include market segmentation, anomaly detection, and social network analysis. Examples of methods include the following:

    • K-Means
    • DBSCAN
    • Hierarchical Clustering
    • Gaussian Mixture Models

II.4. Dimensionality Reduction Algorithms

These algorithms aim to reduce the number of input features while preserving essential information. Inputs are high-dimensional data; the output is a lower-dimensional representation. Use cases include data visualization, noise reduction, and speeding up other algorithms. Examples of methods include the following:

    • Principal Component Analysis (PCA)
    • t-distributed Stochastic Neighbor Embedding (t-SNE)
    • Linear Discriminant Analysis (LDA)
    • Autoencoders (in Deep Learning)
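A minimal dimensionality-reduction sketch (assuming scikit-learn): PCA projects the 4-dimensional Iris data onto the two directions that retain the most variance.

```python
# Minimal PCA sketch: reduce 4-D data to 2-D while keeping most of the variance.
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

X = load_iris().data                     # 150 samples, 4 features
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)              # 2 features per sample after projection

print("Original shape:", X.shape, "-> reduced shape:", X_2d.shape)
print("Variance explained by the 2 components:", pca.explained_variance_ratio_.sum())
```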

II.5. Anomaly Detection

Also called outlier detection, these methods aim to identify rare or unusual patterns in data. Inputs are mostly normal data that may contain anomalies; outputs are anomaly flags or scores. Use cases include fraud detection, network security, and equipment-failure prediction. Examples of methods include the following:

    • One-Class SVM
    • Isolation Forest
    • Autoencoders
    • Statistical methods (e.g., Z-scores)
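A minimal statistical anomaly-detection sketch using z-scores (the sensor readings and the threshold of 2 are illustrative choices).

```python
# Minimal z-score anomaly detection: flag values far from the mean in std-dev units.
import numpy as np

readings = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2, 9.7])  # 25.0 is the outlier
z_scores = (readings - readings.mean()) / readings.std()
anomalies = np.abs(z_scores) > 2.0        # common rule of thumb: |z| > 2 or 3

print("z-scores:", np.round(z_scores, 2))
print("Anomalous readings:", readings[anomalies])
```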

II.6. Recommendation Systems

These systems aim to suggest items to users based on their preferences or behavior. Inputs are user-item interaction data; outputs are personalized recommendations. Use cases include e-commerce (Amazon) and streaming services (Netflix, Spotify). Examples of methods include the following:

    • Collaborative Filtering
    • Content-Based Filtering
    • Matrix Factorization
    • Deep Learning-based Recommenders
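A minimal collaborative-filtering sketch in NumPy: items are scored for a user from the ratings of users with similar taste (the rating matrix is a tiny hypothetical example, with 0 meaning "not rated").

```python
# Minimal user-based collaborative filtering: score unrated items for one user.
import numpy as np

# rows = users, columns = items (hypothetical ratings, 0 = unrated)
R = np.array([[5, 4, 0, 1],
              [4, 5, 0, 1],
              [1, 1, 5, 4],
              [0, 1, 5, 4]], dtype=float)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

target = 0                                          # recommend for user 0
sims = np.array([cosine(R[target], R[u]) for u in range(len(R))])
sims[target] = 0                                    # ignore self-similarity

# Predict each unrated item as a similarity-weighted average of other users' ratings.
for item in np.where(R[target] == 0)[0]:
    score = np.dot(sims, R[:, item]) / sims.sum()
    print(f"Predicted rating of user {target} for item {item}: {score:.2f}")
```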

II.7. Ranking

These algorithms aim to sort items by relevance or importance. The input is a set of items with features; the output is a ranked list. Use cases include search engines, job listings, and product-relevance ordering. Examples of methods include the following:

    • Learning to Rank (e.g., RankNet, LambdaRank)
    • Gradient Boosted Trees (for ranking)
    • PageRank (Google's algorithm)
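A minimal PageRank sketch: power iteration over a small hypothetical link graph, using the standard damping factor of 0.85.

```python
# Minimal PageRank sketch: rank 4 pages by iterating the "random surfer" update.
import numpy as np

# links[i] = pages that page i links to (hypothetical graph)
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85                               # number of pages, damping factor

rank = np.full(n, 1.0 / n)
for _ in range(50):                          # power iteration
    new_rank = np.full(n, (1 - d) / n)
    for page, outgoing in links.items():
        for target in outgoing:
            new_rank[target] += d * rank[page] / len(outgoing)
    rank = new_rank

print("PageRank scores:", np.round(rank, 3))  # page 2 should rank highest
```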

II.8. Natural Language Processing (NLP) Tasks

These methods aim to understand and generate human language. The input is text or speech; the output can be text, labels, or actions. Use cases include translation, chatbots, summarization, and sentiment analysis. Examples of tasks and methods include the following:

    • Named Entity Recognition (NER)
    • Text Classification
    • Machine Translation
    • Question Answering
    • Transformers (BERT, GPT)
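A minimal NLP sketch, assuming the Hugging Face `transformers` library is installed (the first call downloads a default pretrained sentiment-analysis model).

```python
# Minimal NLP sketch: off-the-shelf sentiment classification with a Transformer model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This article explains AI taxonomies very clearly."))
# Expected output shape: [{'label': 'POSITIVE', 'score': 0.99...}]
```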

Table 2. AI Methods classification based on Functionality

| Functionality            | Purpose             | Output       | Common Algorithms       |
|--------------------------|---------------------|--------------|-------------------------|
| Classification           | Assign labels       | Class        | SVM, Decision Trees     |
| Regression               | Predict values      | Number       | Linear Regression       |
| Clustering               | Group data          | Clusters     | K-Means, DBSCAN         |
| Dimensionality Reduction | Simplify data       | Features     | PCA, t-SNE              |
| Anomaly Detection        | Spot outliers       | Flags/scores | Isolation Forest        |
| Recommendation           | Suggest items       | Items        | Collaborative Filtering |
| Ranking                  | Order items         | Sorted list  | RankNet, PageRank       |
| NLP Tasks                | Understand language | Text/Labels  | Transformers, RNNs      |

III. AI Methods classification based on Techniques

This classification refers to the underlying methods, frameworks, and inspirations used to design and build AI systems: not just how they learn (as in supervised or unsupervised learning), but how they operate and what principles they rely on. It focuses on how AI works under the hood, i.e., the computational strategy or inspiration behind the algorithm.

This is a classification of AI algorithms that groups them by their core operating principles or inspirations — whether it's statistics (ML), neurons (DL), logic (symbolic AI), nature (evolution/swarm), or probability (Bayesian methods). It’s about how AI thinks, not just what it learns.


Figure 3. AI Methods classification based on Techniques


III.1. Symbolic AI

Also called Good Old-Fashioned AI (GOFAI), it relies on explicit rules and logic: humans encode knowledge and rules by hand. Its primary benefits are interpretability and suitability for structured domains (such as legal or medical regulations). Its main shortcomings are a lack of adaptability and difficulty handling uncertainty and complexity.

Examples include the following:

    • Expert Systems (e.g., MYCIN for medical diagnosis)
    • Knowledge Graphs (like those used in Google Search)
    • Logic Programming (e.g., Prolog)
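A minimal rule-based (forward-chaining) sketch: hypothetical if-then rules are applied to a set of known facts until no new fact can be inferred.

```python
# Minimal symbolic-AI sketch: forward chaining over hand-written if-then rules.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),      # IF fever AND cough THEN flu_suspected
    ({"flu_suspected"}, "recommend_rest"),      # IF flu_suspected THEN recommend_rest
]

changed = True
while changed:                         # keep firing rules until a fixed point is reached
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Inferred facts:", facts)        # includes flu_suspected and recommend_rest
```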

III.2. Machine Learning (ML)

It learns from data without being explicitly programmed and can be further divided into supervised, unsupervised, and reinforcement learning (see Section I).

III.3. Deep Learning

It is a subset of ML that uses Artificial Neural Networks (ANNs) with many layers (deep networks). It learns hierarchical representations from data and is especially effective on unstructured data such as images, audio, and text. Examples include the following:

    • Convolutional Neural Networks (CNNs) for image recognition.
    • Recurrent Neural Networks (RNNs) for time-series or language data.
    • Transformers for Natural Language Processing (NLP) (e.g., BERT, GPT) and text generation.
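A minimal deep-learning sketch, assuming PyTorch is installed: a small CNN sized for 28x28 grayscale images, shown only to illustrate how convolutional layers build hierarchical features.

```python
# Minimal CNN sketch in PyTorch, sized for 28x28 grayscale images (e.g., MNIST digits).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)              # hierarchical image features
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(8, 1, 28, 28)         # a batch of 8 fake images
print(model(dummy).shape)                 # torch.Size([8, 10]): one score per class
```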

III.4. Evolutionary Algorithms

These algorithms are inspired by biological evolution and use concepts such as mutation, crossover, and selection. Examples include the following:

    • Genetic Algorithms
    • Genetic Programming
    • Evolution Strategies
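A minimal genetic-algorithm sketch: a population of candidate numbers evolves toward the maximum of the toy fitness function f(x) = -(x - 3)^2 through selection, crossover, and mutation.

```python
# Minimal genetic algorithm: evolve real numbers toward the peak of a toy fitness function.
import random

def fitness(x):
    return -(x - 3) ** 2                 # maximum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    # selection: keep the fitter half of the population
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # crossover (average of two parents) and mutation (small random perturbation)
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    population = parents + children

print("Best solution found:", round(max(population, key=fitness), 3))  # close to 3.0
```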

III.5. Swarm Intelligence

It is inspired by collective behavior in decentralized systems (such as ant colonies or bird flocks): simple agents interact locally and self-organize to solve complex problems. Examples include the following:

    • Ant Colony Optimization (ACO)
    • Particle Swarm Optimization (PSO)
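A minimal particle swarm optimization sketch in NumPy: a swarm searches for the minimum of the toy function f(x, y) = x^2 + y^2; the inertia and attraction coefficients are common textbook values.

```python
# Minimal PSO sketch: 30 particles cooperatively minimize f(x, y) = x^2 + y^2.
import numpy as np

def f(points):
    return (points ** 2).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (30, 2))          # 30 particles in 2 dimensions
vel = np.zeros_like(pos)
pbest = pos.copy()                          # each particle's best position so far
gbest = pos[f(pos).argmin()].copy()         # best position found by the whole swarm

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, personal and social attraction
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    improved = f(pos) < f(pbest)
    pbest[improved] = pos[improved]
    gbest = pbest[f(pbest).argmin()].copy()

print("Best position found:", np.round(gbest, 3))  # close to (0, 0)
```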

III.6. Probabilistic AI

It is based on probability theory and statistical inference. It works by modelling uncertainty using probability distributions. Examples include the following:

  • Bayesian Networks
  • Hidden Markov Models (HMM)
  • Naive Bayes Classifier
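A minimal probabilistic sketch: a single Bayes'-rule update of the probability that an email is spam after observing a keyword (all probabilities are hypothetical).

```python
# Minimal Bayesian update: P(spam | word) from a prior and two likelihoods.
p_spam = 0.2                         # prior P(spam)
p_word_given_spam = 0.6              # likelihood P("offer" | spam)
p_word_given_ham = 0.05              # likelihood P("offer" | not spam)

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(f"P(spam | 'offer') = {p_spam_given_word:.2f}")   # 0.75
```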

III.7. Neuro-Symbolic AI

It combines symbolic AI (logic and rules) with neural networks (deep learning). It seeks to create AI that can both learn from data and reason using knowledge. Examples include the following:

  • IBM's Neuro-Symbolic Concept Learner
  • Knowledge-infused neural models

Table 3. AI Methods classification based on Techniques

| Technique               | Inspiration                | Strengths                             | Use Cases                         |
|-------------------------|----------------------------|---------------------------------------|-----------------------------------|
| Symbolic AI             | Human reasoning, logic     | Transparent, rule-following           | Legal, medical, expert systems    |
| Machine Learning        | Data/statistics            | Adaptive, general-purpose             | Spam detection, stock prediction  |
| Deep Learning           | Brain-inspired, data-heavy | High performance on unstructured data | Image recognition, NLP            |
| Evolutionary Algorithms | Natural selection          | Optimization in complex spaces        | Engineering design, AI tuning     |
| Swarm Intelligence      | Collective animal behavior | Robust, distributed                   | Routing, scheduling               |
| Probabilistic AI        | Bayesian inference         | Models uncertainty                    | Medical diagnosis, NLP            |
| Neuro-Symbolic AI       | Hybrid reasoning           | Combines learning + logic             | Explainable AI, cognitive systems |

IV. AI methods classification based on the source of knowledge

This kind of classification refers to where the AI system gets the information or experience it needs to function or learn. In other words, it answers the question: "How does the AI system acquire its intelligence or knowledge?" This classification focuses on what feeds the AI’s decision-making process — whether it's human-crafted knowledge, data, interaction, or embedded logic.

Briefly, classifying AI by source of knowledge helps us understand what fuels the AI’s intelligence:

·        Human rules (knowledge-based)

·        Data (data-driven)

·        Experience (interaction-based)

·        A mix of these (hybrid)

This perspective is useful when designing or choosing an AI system, especially when you need to balance interpretability, adaptability, and data availability.


Figure 4. AI methods classification based on the source of knowledge


IV.1. Knowledge-Based AI (Symbolic AI)

Explicitly encoded rules, facts, and logic are provided by humans.

These algorithms do not rely primarily on datasets but instead use explicit rules, logic, or predefined knowledge to function. This approach is often referred to as Symbolic AI or Good Old-Fashioned AI (GOFAI).

Their characteristics are as follows:

  • Knowledge is encoded manually.
  • They use logic and rules rather than statistical learning.
  • They are best suited for structured, well-defined problems.
  • They do not improve automatically over time unless reprogrammed.

Examples of AI approaches include the following:

i. Expert Systems: use if-then rules to simulate the decision-making of a human expert. Example: a medical diagnosis system such as MYCIN, which infers diseases from symptoms.

ii. Rule-Based Systems: operate on logic-based rules. Examples: a chatbot with scripted responses triggered by input keywords, and knowledge graphs.

iii. Logic Programming: algorithms based on formal logic (e.g., Prolog). Example: automated theorem proving.

The limitations of these methods include being hard to scale, requiring domain experts, and handling uncertainty or vague patterns poorly.

IV.2. Data-Driven AI

These algorithms rely heavily on data to learn patterns, make predictions, or take actions. Often, the more data they have, the better they perform.

Their characteristics are as follows:

  • Learn from examples.
  • Require training datasets.
  • Performance improves with data quality and quantity.
  • Adaptable and flexible to complex, real-world tasks.

Examples of AI approaches include the following:

i. Machine Learning (ML): these methods can be classified as follows:

    • Supervised Learning: Needs labelled datasets. Example: Training a spam filter using emails labelled as "spam" or "not spam".
    • Unsupervised Learning: Uses unlabeled data to discover structure. Example: Customer segmentation using purchase behaviour.
    • Reinforcement Learning: Learns from interactions (data from experiences). Example: A game-playing AI that improves by playing many games.

ii. Deep Learning: requires large datasets (e.g., millions of images to train image classifiers). Examples: image recognition, language translation, and speech recognition.

The limitations of such methods include the need for large datasets and the risk of biased or inaccurate results if the data is flawed.

IV.3. Experience-Based AI (Reinforcement Learning)

These algorithms are based on trial-and-error interactions with an environment, guided by rewards or penalties. The agent learns by taking actions, observing results, and adjusting behavior to maximize reward.

Use cases include game-playing AI (like AlphaGo), robotics (e.g., robot learning to walk), and dynamic decision systems (e.g., traffic signal control).

Examples of methods include Q-Learning and Deep Q-Networks (DQN).

Limitations of these methods include requiring many interactions (learning can be slow) and a strong dependence on how the environment and rewards are designed.

IV.4. Hybrid AI (Multiple Knowledge Sources)

These methods combine human knowledge, data, and experience.
They integrate rule-based logic with machine learning or reinforcement learning, aiming to get the best of both worlds: human reasoning and data-driven adaptability.
Use Cases include the following:
·        Healthcare: Data-driven predictions + expert rules
·        Autonomous vehicles: Neural networks + driving rules
·        AI assistants: NLP models + rule-based task handling
Examples of methods include neuro-symbolic systems (e.g., IBM's WatsonPaths) and hybrid recommender systems.
The main limitations of these methods are that they are more complex to build and maintain, and that balancing the different knowledge sources can be tricky.
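To make the hybrid idea concrete, here is a minimal sketch (assuming scikit-learn, with a hypothetical loan-approval scenario): a data-driven classifier proposes a decision, and a hand-written expert rule can override it.

```python
# Minimal hybrid sketch: a learned model plus an expert rule that can override it.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [income in k$, existing_debt in k$]; label: 1 = approve loan (toy data)
X = np.array([[50, 5], [80, 10], [20, 15], [30, 2], [90, 40], [25, 20]])
y = np.array([1, 1, 0, 1, 1, 0])
model = LogisticRegression(max_iter=1000).fit(X, y)

def decide(applicant):
    ml_decision = model.predict([applicant])[0]       # data-driven part
    if applicant[1] > applicant[0]:                   # expert rule: debt exceeds income
        return 0                                      # the rule overrides the model
    return ml_decision

print(decide([60, 8]))    # the model decides
print(decide([40, 45]))   # the expert rule forces rejection
```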

Table 4. AI methods classification based on the source of knowledge

| Type of AI          | Source of Knowledge       | Key Mechanism         | Examples               | Use Cases                  |
|---------------------|---------------------------|-----------------------|------------------------|----------------------------|
| Knowledge-Based AI  | Human-encoded rules       | Logic & reasoning     | Expert systems, Prolog | Diagnosis, legal AI        |
| Data-Driven AI      | Historical/real-time data | Pattern recognition   | ML/DL models           | Forecasting, vision, NLP   |
| Experience-Based AI | Environment interaction   | Rewards & feedback    | Reinforcement learning | Games, robotics            |
| Hybrid AI           | Combined sources          | Integrated techniques | Neuro-symbolic AI      | Complex, multi-modal tasks |
