KR: Knowledge Representation

Knowledge Representation (KR) is a field of artificial intelligence and computer science that focuses on the design of data structures and algorithms to represent, store, and manipulate information efficiently and effectively. The central goal of KR is to enable computer systems to interpret and utilize knowledge in a manner similar to human intelligence. […]
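A classic symbolic approach the paragraph alludes to is storing knowledge as subject–predicate–object triples and running simple inference over them. The sketch below is illustrative (the facts and relation names are invented for the example), not a specific KR system:

```python
# A minimal sketch of symbolic knowledge representation using
# subject-predicate-object triples (facts and names are illustrative).
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
}

def query(kb, subject, predicate):
    """Return all objects related to `subject` via `predicate`."""
    return {o for (s, p, o) in kb if s == subject and p == predicate}

def infer_is_a(kb, subject):
    """Simple transitive inference over the `is_a` relation."""
    found, frontier = set(), {subject}
    while frontier:
        nxt = set()
        for s in frontier:
            nxt |= query(kb, s, "is_a")
        frontier = nxt - found
        found |= nxt
    return found

print(infer_is_a(facts, "Socrates"))  # {'human', 'mortal'}
```

Even this toy version shows the core idea: once knowledge is stored in a uniform structure, generic algorithms (here, transitive closure) can derive facts that were never stated explicitly.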

KG: Knowledge Graphs

A Knowledge Graph (KG) is a data structure that represents information in graph form, where nodes represent entities and edges represent the relationships between these entities. Each node and edge can have descriptive attributes, allowing for a rich and contextualized representation of data. Knowledge graphs […]
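A minimal way to picture this is with plain dictionaries: nodes and edges each carry an attribute dictionary, and traversal follows the labeled edges. The entities below are illustrative examples, not part of any particular knowledge graph:

```python
# Minimal knowledge-graph sketch: nodes and edges carry attribute dicts.
# Entity and relation names are illustrative.
nodes = {
    "Ada_Lovelace": {"type": "Person", "born": 1815},
    "Analytical_Engine": {"type": "Machine"},
}
edges = [
    ("Ada_Lovelace", "wrote_program_for", "Analytical_Engine", {"year": 1843}),
]

def neighbors(node):
    """All (relation, target, attrs) triples leaving `node`."""
    return [(r, t, a) for (s, r, t, a) in edges if s == node]

for rel, target, attrs in neighbors("Ada_Lovelace"):
    print(rel, "->", target, attrs)
```

Real systems add indexes, schemas, and query languages on top, but the underlying shape (attributed nodes linked by attributed, labeled edges) is exactly this.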

HMM: Hidden Markov Models

Hidden Markov Models (HMMs) are a class of statistical models used to represent sequences of observations in which the underlying process that generates those observations is modeled as a Markov chain whose states are hidden. In an HMM, the true state of the system is not directly observable; instead, we observe a […]
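The standard way to compute the probability of an observation sequence under an HMM is the forward algorithm, which sums over all hidden state paths. The toy weather model below (states, transition, and emission probabilities) is illustrative:

```python
# Forward algorithm for a toy HMM; all parameters are illustrative.
states = ["Rainy", "Sunny"]
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(observations):
    """P(observations) summed over every hidden state path."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["walk", "shop", "clean"]))
```

The dynamic-programming trick is that `alpha[s]` summarizes all paths ending in state `s`, so the cost is linear in sequence length rather than exponential.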

CRF: Conditional Random Fields

Conditional Random Fields (CRFs) are a type of statistical model from the family of graphical models, used primarily for sequence labeling tasks such as named entity recognition, information extraction, and part-of-speech tagging. Unlike Hidden Markov Models (HMMs) and Maximum Entropy Markov Models (MEMMs), CRFs do not assume […]
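A linear-chain CRF scores a whole label sequence with emission and transition feature weights, then normalizes globally over all possible label sequences. The tiny sketch below uses invented labels and weights and brute-force normalization (real CRFs use dynamic programming):

```python
# Sketch of linear-chain CRF scoring with toy, illustrative weights.
import itertools
import math

labels = ["O", "NAME"]
emission = {("hello", "O"): 2.0, ("hello", "NAME"): 0.0,
            ("alice", "O"): 0.0, ("alice", "NAME"): 2.0}
transition = {("O", "O"): 0.5, ("O", "NAME"): 1.0,
              ("NAME", "O"): 1.0, ("NAME", "NAME"): 0.2}

def score(words, tags):
    """Unnormalized score: sum of emission and transition weights."""
    s = sum(emission.get((w, t), 0.0) for w, t in zip(words, tags))
    s += sum(transition[(a, b)] for a, b in zip(tags, tags[1:]))
    return s

def prob(words, tags):
    """Globally normalized P(tags | words)."""
    z = sum(math.exp(score(words, t))
            for t in itertools.product(labels, repeat=len(words)))
    return math.exp(score(words, tags)) / z

print(prob(["hello", "alice"], ("O", "NAME")))
```

The global normalizer `z` is the key contrast with HMMs and MEMMs: probability mass is distributed over entire sequences at once, which is what avoids the label bias problem.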

DBN: Deep Belief Network

A Deep Belief Network (DBN) is a type of deep learning model composed of multiple layers that learn representations of the data in a hierarchical manner. These layers are usually formed by units known as Restricted Boltzmann Machines (RBMs), two-layer neural networks with a visible and a hidden layer but no connections between the units […]
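Structurally, a DBN is a stack of layers where each layer's weights come from pretraining an RBM on the activations of the layer below. The sketch below only shows that data flow; the weights are random placeholders rather than pretrained values, and the layer sizes are illustrative:

```python
# Structural sketch of a DBN as a stack of layers. In a real DBN each
# weight matrix is pretrained greedily as an RBM; here they are random
# placeholders just to show how data flows up the hierarchy.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [6, 4, 2]          # visible -> hidden1 -> hidden2
weights = [rng.normal(size=(a, b))
           for a, b in zip(layer_sizes, layer_sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def propagate(v):
    """Map an input up through the stack, layer by layer."""
    h = v
    for w in weights:
        h = sigmoid(h @ w)       # each level re-encodes the one below it
    return h

x = rng.random(6)
print(propagate(x).shape)  # (2,)
```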

RBM: Restricted Boltzmann Machine

The Restricted Boltzmann Machine (RBM) is an unsupervised probabilistic model, a type of neural network, used to learn representations of data in a feature space. The RBM architecture consists of two layers: a visible layer (V) and a hidden layer (H). The visible layer is responsible for representing […]
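RBMs are typically trained with contrastive divergence. The sketch below performs one CD-1 update for a tiny RBM; the layer sizes and learning rate are illustrative, and bias terms are omitted for brevity:

```python
# One step of contrastive divergence (CD-1) for a tiny RBM, as a sketch.
# Sizes and learning rate are illustrative; biases omitted for brevity.
import numpy as np

rng = np.random.default_rng(42)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """Update W from one Gibbs step: positive phase minus negative phase."""
    global W
    ph0 = sigmoid(v0 @ W)                    # P(h=1 | v0)
    h0 = (rng.random(n_hidden) < ph0) * 1.0  # sample hidden units
    v1 = sigmoid(h0 @ W.T)                   # reconstruct the visible layer
    ph1 = sigmoid(v1 @ W)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

v = np.array([1, 0, 1, 1, 0, 0], dtype=float)
cd1_step(v)
print(W.shape)  # (6, 3)
```

The "restricted" part is visible in the math: because there are no within-layer connections, the hidden units are conditionally independent given the visible layer, so `P(h | v)` factorizes into the per-unit sigmoids above.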

SOM: Self-Organizing Maps

The Self-Organizing Map (SOM) is a type of unsupervised artificial neural network introduced by Teuvo Kohonen. SOM is used for dimensionality reduction and visualization of high-dimensional data in two- or three-dimensional space while maintaining the topological relationships of the original data. The process of […]
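The training loop can be sketched in a few lines: for each input, find the best matching unit (BMU) on the grid, then pull it and its grid neighbors toward the input, with the pull decaying by grid distance. Grid size, rates, and data below are illustrative:

```python
# Minimal SOM training sketch: a 2D grid of weight vectors is pulled
# toward input samples, neighbors more weakly. Sizes and rates are
# illustrative; real SOMs also decay lr and radius over time.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3
weights = rng.random((grid_h, grid_w, dim))

def train_step(x, lr=0.5, radius=1.5):
    # Best matching unit: the grid cell whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    for i in range(grid_h):
        for j in range(grid_w):
            # Gaussian neighborhood: update strength decays with grid distance.
            g = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * radius ** 2))
            weights[i, j] += lr * g * (x - weights[i, j])

for _ in range(100):
    train_step(rng.random(dim))
```

The neighborhood update is what preserves topology: grid cells that are neighbors end up representing nearby regions of the input space.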

MLP: Multi-Layer Perceptron

A Multi-Layer Perceptron (MLP) is a type of artificial neural network composed of multiple layers of neurons, where each layer is fully connected to the previous and next layers. The MLP architecture includes an input layer, one or more intermediate layers (also known as hidden layers), and an output layer. Each neuron in […]
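A forward pass through an MLP with one hidden layer looks like this; the layer sizes are illustrative and the weights are random placeholders rather than trained values:

```python
# Forward pass through a tiny MLP (one hidden layer). Sizes are
# illustrative and weights are random placeholders, not trained values.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden -> output

def relu(x):
    return np.maximum(0.0, x)

def forward(x):
    # "Fully connected": every hidden unit receives every input component.
    h = relu(x @ W1 + b1)
    return h @ W2 + b2

x = rng.random(4)
print(forward(x).shape)  # (2,)
```

Full connectivity is what the matrix products express: each `W` has one weight per (input unit, output unit) pair.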

AE: Autoencoder

An autoencoder (AE) is an artificial neural network architecture used primarily to encode data into a compact representation and decode it back. The AE structure consists of two main parts: the encoder and the decoder. The encoder receives the input data and transforms it into a lower-dimensional representation, known as an embedding […]
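The two-part structure can be sketched directly: an encoder that compresses a 6-dimensional input to a 2-dimensional code, and a decoder that maps it back. The weights here are random placeholders; a real AE learns them by minimizing reconstruction error:

```python
# Structural sketch of an autoencoder. Weights are random placeholders;
# training would adjust them to minimize ||x - decode(encode(x))||.
import numpy as np

rng = np.random.default_rng(2)
W_enc = rng.normal(size=(6, 2))   # encoder: input -> low-dimensional code
W_dec = rng.normal(size=(2, 6))   # decoder: code -> reconstruction

def encode(x):
    return np.tanh(x @ W_enc)

def decode(z):
    return z @ W_dec

x = rng.random(6)
z = encode(x)          # the embedding: a compressed view of x
x_hat = decode(z)      # the reconstruction
print(z.shape, x_hat.shape)  # (2,) (6,)
```

The bottleneck (here 6 → 2) is the whole point: the network can only reconstruct well if the code captures the most informative structure of the input.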

VAE: Variational Autoencoder

A Variational Autoencoder (VAE) is a type of generative machine learning model that combines elements of neural networks and Bayesian inference. A VAE consists of two main components: an encoder and a decoder. The encoder is responsible for mapping the input data into a latent space, which is a […]
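The Bayesian element shows up in how the encoder works: instead of a single point, it outputs the mean and log-variance of a Gaussian over the latent space, and a sample is drawn via the reparameterization trick so the sampling step stays differentiable. The stand-in encoder below is illustrative, not a learned network:

```python
# Sketch of the VAE reparameterization trick: z = mu + sigma * eps.
# The encoder here is an illustrative stand-in, not a trained network.
import numpy as np

rng = np.random.default_rng(3)

def encoder(x):
    """Return (mu, log_var) of the latent Gaussian for input x."""
    mu = x[:2] * 0.5          # placeholder mapping to a 2-dim latent space
    log_var = np.zeros(2)     # unit variance, for simplicity
    return mu, log_var

def reparameterize(mu, log_var):
    # Sampling is rewritten as a deterministic function of (mu, log_var)
    # plus external noise, so gradients can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.random(4)
mu, log_var = encoder(x)
z = reparameterize(mu, log_var)
print(z.shape)  # (2,)
```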