OSL: One-Shot Learning

One-Shot Learning (OSL) is a machine learning paradigm that focuses on a model’s ability to learn from a single labeled example. Unlike traditional supervised learning approaches, which require large labeled datasets to train accurate models, OSL aims to develop algorithms that can generalize effectively from just one example per class.
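
A minimal sketch of the metric-learning flavour of OSL, assuming a pretrained encoder has already produced the embeddings (the toy vectors below merely stand in for it): each class is represented by its single labelled example, and a query is assigned to the most similar one.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_shot_predict(query_emb, support):
    """Return the label of the single support example most similar to the query."""
    return max(support, key=lambda label: cosine(query_emb, support[label]))

# Toy embeddings standing in for the output of a pretrained encoder.
support = {                      # exactly one labelled example per class
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.1, 0.9, 0.0]),
}
query = np.array([0.8, 0.2, 0.1])
print(one_shot_predict(query, support))   # -> "cat"
```
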
ZSL: Zero-Shot Learning

Zero-Shot Learning (ZSL) is a machine learning technique that allows a model to make predictions about classes that were not seen during training. Unlike conventional methods that require labeled data for every class, ZSL leverages auxiliary information, such as textual descriptions or semantic embeddings, to generalize to unseen classes.
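
A toy sketch of attribute-based ZSL: the mapping `W` from visual features into the attribute space would normally be learned from seen classes only, and both `W` and the attribute vectors below are hand-set purely for illustration.

```python
import numpy as np

# Semantic descriptions (attribute vectors) for classes, including one
# never seen during training; these act as the auxiliary information.
class_attributes = {
    "horse": np.array([1.0, 0.0, 1.0]),   # seen during training
    "zebra": np.array([1.0, 1.0, 1.0]),   # unseen: roughly "a striped horse"
}

# W projects visual features into the attribute space; in a real system it is
# learned from seen classes only (here it is hand-set for the example).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])

def zero_shot_predict(features):
    projected = W @ features
    scores = {c: projected @ a / (np.linalg.norm(projected) * np.linalg.norm(a))
              for c, a in class_attributes.items()}
    return max(scores, key=scores.get)

print(zero_shot_predict(np.array([0.9, 0.8])))  # striped and horse-like -> "zebra"
```
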
FSL: Few-Shot Learning

Few-Shot Learning (FSL) is an approach in machine learning and artificial intelligence that aims to develop models capable of learning and generalizing from a very limited number of samples. While traditional supervised learning methods require large labeled datasets to train accurate models, FSL seeks satisfactory performance from only a handful of labeled examples per class.
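
A small sketch in the spirit of prototypical networks, one common FSL approach: the few support embeddings of each class are averaged into a prototype, and a query is assigned to the nearest prototype. The vectors are placeholders for the output of a pretrained encoder.

```python
import numpy as np

def prototypes(support):
    """Average the few support embeddings of each class into one prototype."""
    return {label: np.mean(embs, axis=0) for label, embs in support.items()}

def few_shot_predict(query, protos):
    """Assign the query to the class with the nearest prototype (Euclidean)."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

# 3-shot toy support set; vectors stand in for outputs of a pretrained encoder.
support = {
    "spam": [np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.95, 0.05])],
    "ham":  [np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.15, 0.85])],
}
print(few_shot_predict(np.array([0.7, 0.3]), prototypes(support)))  # -> "spam"
```
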
SL: Supervised Learning

Supervised Learning (SL) is a subfield of Artificial Intelligence (AI) that involves training models on a set of input data (features) paired with the desired outputs (labels). During training, the algorithm learns to map inputs to the correct outputs automatically, seeking to minimize a prediction error (loss).
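
A standard supervised workflow with scikit-learn, shown on the Iris dataset as an arbitrary example: a classifier is fit on labelled examples and then evaluated on how well the learned input-to-output mapping generalizes.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled dataset: features X and desired outputs y.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Training fits a mapping from inputs to outputs by minimising a loss.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
```
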
UL: Unsupervised Learning

Unsupervised Learning (UL) is a branch of Artificial Intelligence (AI) and Machine Learning concerned with systems that identify patterns and structure in data without any prior labeling. Unlike Supervised Learning, where models are trained on known input-output pairs, in UL the algorithm must discover structure, such as clusters or latent factors, on its own.
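
A typical unsupervised example with scikit-learn: k-means clustering discovers groups in data for which no labels are provided (the synthetic blobs are only for illustration).

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabelled data: only features, no target values.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# K-means groups the points into clusters purely from their structure.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment discovered for each point
print(kmeans.cluster_centers_)    # the structure found without any labels
```
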
SSL: Semi-Supervised Learning

Semi-Supervised Learning (SSL) is a machine learning approach that combines labeled and unlabeled data to build predictive models. Unlike supervised learning, which requires a large set of labeled data, and unsupervised learning, which uses no labels at all, SSL exploits the rich information contained in abundant unlabeled data alongside a smaller labeled set.
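
A short scikit-learn sketch of self-training, one common SSL strategy (chosen here as an example, not the only one): unlabelled points are marked with -1, and the base classifier iteratively labels the points it is confident about and is refit on the enlarged labelled set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

# Pretend only about 10% of the labels are available; scikit-learn marks
# unlabelled points with -1.
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) > 0.1] = -1

# Self-training: the base classifier pseudo-labels its confident predictions
# on the unlabelled points and is retrained on the enlarged labelled set.
model = SelfTrainingClassifier(SVC(probability=True, random_state=0))
model.fit(X, y_partial)
print("accuracy on all points:", model.score(X, y))
```
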
AL: Active Learning

Active Learning (AL) is a machine learning technique in which the model actively selects the most informative data points to be labeled by an oracle, usually a human. This iterative process lets the model learn more efficiently from a smaller set of labeled data. The idea behind AL is to spend the labeling budget only on the examples expected to improve the model the most.
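
A sketch of pool-based active learning with uncertainty (margin) sampling; the true labels `y` stand in for the oracle, and the query size and number of rounds are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
labeled = list(range(10))                  # small initial labelled set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool points the model is least sure about.
    probs = model.predict_proba(X[pool])
    margins = np.abs(probs[:, 1] - probs[:, 0])
    queries = [pool[i] for i in np.argsort(margins)[:10]]
    labeled += queries                     # the oracle (here: y itself) provides labels
    pool = [i for i in pool if i not in queries]

model.fit(X[labeled], y[labeled])
print("labelled points used:", len(labeled), "accuracy:", model.score(X, y))
```
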
KD: Knowledge Distillation

Knowledge Distillation (KD) is a machine learning technique for transferring knowledge from a large, complex neural network, called the 'teacher' (or 'master'), to a smaller, simpler model, known as the 'student'. The central goal of KD is to capture the essence of the teacher's decisions, allowing the student to approach the teacher's performance at a much lower computational cost.
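
A minimal PyTorch sketch of the classic distillation loss: a temperature-softened KL term pulls the student's output distribution towards the teacher's, blended with the usual hard-label cross-entropy. The temperature `T` and weight `alpha` are illustrative values, and the logits below are toy tensors rather than real model outputs.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend the hard-label loss with a soft-label term that pushes the
    student's softened distribution towards the teacher's."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy batch: logits from a large teacher and a small student, plus true labels.
teacher_logits = torch.tensor([[4.0, 1.0, 0.0], [0.5, 3.5, 0.2]])
student_logits = torch.randn(2, 3, requires_grad=True)
labels = torch.tensor([0, 1])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()   # gradients would update only the student's parameters
print(loss.item())
```
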
TE: Textual Entailment

Textual Entailment (TE) is a task in Natural Language Processing (NLP) that aims to determine the logical relationship between two textual statements: the first called the 'premise' and the second the 'hypothesis'. TE classifies the relationship between these two statements into three main categories: 'entailment' (when the hypothesis logically follows from the premise), 'contradiction' (when the hypothesis contradicts the premise), and 'neutral' (when neither relation holds).
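
A short sketch using the Hugging Face transformers library with the off-the-shelf roberta-large-mnli checkpoint (an assumed choice; any NLI model would do): the premise and hypothesis are encoded as a sentence pair and the model picks one of the three labels.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"          # an off-the-shelf NLI model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Premise and hypothesis are encoded together as a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label_id = int(logits.argmax(dim=-1))
print(model.config.id2label[label_id])     # entailment / neutral / contradiction
```
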
TC: Text Classification

Text Classification (TC) is a machine learning and natural language processing task that involves assigning predefined categories to text documents. These categories can represent specific topics, sentiments, review types, and more. TC uses algorithms that learn from annotated datasets in which each document is paired with its correct category.
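
A compact scikit-learn sketch: TF-IDF features plus a linear classifier trained on a tiny, hand-made annotated dataset (the documents and categories are invented purely for illustration).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny annotated dataset: each document comes with its category.
docs = [
    "The team won the championship after a dramatic final",
    "The new phone ships with a faster processor and better camera",
    "The striker scored twice in the second half",
    "The laptop's battery life improved in the latest model",
]
labels = ["sports", "tech", "sports", "tech"]

# TF-IDF turns each document into a feature vector; the classifier learns
# to map those vectors to the predefined categories.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["The goalkeeper saved a penalty", "A chip with more cores"]))
```
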