Glossary Terms

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

A

Accountability

The principle that individuals or organizations are answerable for the outcomes, decisions, and impacts of AI systems they develop, deploy, or operate.

Activity Bias

Occurs when AI systems are disproportionately trained on data from highly active users, causing the model to underperform for less engaged or underrepresented user populations.

Agentic AI

AI systems designed to autonomously pursue long-term goals, make independent decisions, and execute complex, multi-turn workflows with minimal human intervention.

Click on the Agentic AI vs AI Agent article to learn more. 

Aggregation Bias

Bias that occurs when models are built on aggregated data that obscures important variations or differences within subgroups.

AI Acceptable Use Policy (AUP)

A document outlining the allowed and restricted ways in which artificial intelligence technologies can be used within an organization.

Click on the AI Acceptable Use Policy article to learn more.

AI Agent

An automated software entity that perceives its environment through sensors or data inputs, processes that information, and takes autonomous actions to accomplish specific goals.

Click on the Agentic AI vs AI Agent article to learn more. 

AI Governance

The processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. AI governance frameworks direct AI research, development and application to help ensure safety, fairness and respect for human rights.

AI Governance Charter

A formal, foundational document that defines the mission, scope, authority, and accountability structures for managing artificial intelligence within an organization. It serves as the constitution for the AI governance program.

Click on the What is an AI Governance Charter article to learn more. 

AI Management System (AIMS)

A formal organizational framework that establishes policies, processes, roles, and objectives for systematically governing AI development and deployment across an enterprise.

AI Office

The European Commission body responsible for overseeing the implementation and enforcement of AI regulations, including monitoring AI systems and general-purpose AI models across the EU.

AI System Impact Assessment

A structured evaluation process that identifies, analyzes, and addresses potential harms an AI system may cause to individuals, communities, or society.

Aleatoric Uncertainty

The inherent randomness or variability in data that cannot be reduced even with more information, like the natural unpredictability in weather patterns.

Algorithm

A clear set of step-by-step instructions that a computer follows to complete a task or solve a problem. Algorithms define how data is processed, decisions are made, and results are produced, and they are the foundation of software, automation, and artificial intelligence systems.
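
For illustration, binary search is a classic algorithm: a fixed sequence of steps that locates a value in a sorted list (a toy sketch, not tied to any particular system).

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # step: inspect the middle element
        if sorted_items[mid] == target:
            return mid                    # step: found the target
        elif sorted_items[mid] < target:
            low = mid + 1                 # step: discard the lower half
        else:
            high = mid - 1                # step: discard the upper half
    return -1

position = binary_search([1, 3, 5, 7, 9], 7)  # -> 3
```

Each step is unambiguous and repeatable, which is what distinguishes an algorithm from an informal procedure.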

Algorithmic Bias

Bias introduced or amplified by the design, assumptions, or implementation of a machine learning algorithm, independent of the data.

Artificial General Intelligence (AGI)

An AI system that possesses a wide range of cognitive abilities, much like humans, enabling them to learn, reason, adapt to new situations, and devise creative solutions across various tasks and domains, rather than being limited to specific tasks as narrow AI systems are.

Artificial Intelligence (AI)

The ability of computer systems to perform tasks that normally require human intelligence, such as understanding language, recognizing patterns, learning from data, making decisions, and solving problems.

Asset Registry

Comprehensive inventory of AI system components and artifacts (datasets, models, documentation, logs) used for version control, auditability, and compliance verification.

Attention Visualizers

Tools that visualize attention weights within language models, revealing which input tokens (words, phrases) significantly influence output decisions, widely used in natural language processing models.

Augmented Intelligence (IA)

A collaborative approach where AI enhances human cognitive abilities (learning, decision-making, creativity) rather than replacing them.

Click on the AI vs Augmented AI article to learn more.

Authentication

The security process of verifying that a user, device, or system is genuinely who or what it claims to be before granting access to resources.

Autonomy

An AI system’s ability to independently modify its goals, scope of operation, or behavior without requiring human approval or intervention at each step.

B

Bias

Systemic errors in AI algorithms that result in unfair or discriminatory outcomes for certain groups of people. Bias can be present in training data or introduced during model development.

Click on the AI Bias article to learn more. 

Black-box Model

AI models whose internal workings are opaque or difficult to interpret directly, requiring external explainability methods to understand decision rationale.

C

CE Marking

The official symbol affixed to products indicating that an AI system complies with all relevant EU health, safety, and environmental protection requirements.

Chatbot

A machine-based system or intelligent agent designed to offer conversational user interfaces by emulating human-like interactions through text, voice, or images.

Click on the What is an AI Chatbot article to learn more.

Chief AI Officer (CAIO)

The executive responsible for the strategic implementation and management of AI technologies within an organization.

Classification Model

A machine learning model that categorizes inputs into discrete classes or categories, such as identifying whether an email is spam or legitimate.

Compliance

Adherence to laws, regulations, standards, contractual obligations, and internal policies.

Concept Drift

Occurs when the statistical properties of the data an AI model was trained on change over time, causing the model’s performance to degrade in production environments.

Confabulation (see also Hallucination)

A phenomenon where generative artificial intelligence (GAI) systems produce confidently stated but erroneous or false content.

Click on the What are AI Hallucinations article to learn more.

Confirmation Bias

The tendency to favor information that supports pre-existing beliefs while dismissing contradictory evidence, which can lead to flawed AI model design or evaluation.

Conformity Assessment

The process of demonstrating whether the requirements of a high-risk AI system have been fulfilled. 

Continuous Integration / Continuous Deployment (CI/CD)

Automated software development processes enabling frequent updates, rapid testing, and efficient deployment, often used to automate documentation generation and monitoring in AI systems.

D

Data Annotation

The process of adding labels, tags, or other metadata to raw data such as identifying objects in images or categorizing text to prepare it for supervised learning.

Data Augmentation

Artificially expands training datasets by creating modified versions of existing samples through techniques like rotation, cropping, or synonym replacement to improve model robustness.

Data Mining

Computational process that extracts patterns by analysing quantitative data from different perspectives and dimensions, categorizing them, and summarizing potential relationships and impacts.

Data Poisoning

An attack where malicious actors inject corrupted or misleading data into training datasets to compromise the model’s integrity and future predictions.

Data Sampling

The technique of selecting a representative subset from a larger dataset, allowing efficient analysis and model training while preserving the statistical properties of the original data.
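
A minimal sketch of simple random sampling using Python's standard library (the function name and fixed seed are illustrative assumptions):

```python
import random

def sample_dataset(dataset, fraction, seed=42):
    """Select a simple random subset containing the given fraction of the data."""
    random.seed(seed)                         # fixed seed for reproducibility
    k = max(1, int(len(dataset) * fraction))  # subset size
    return random.sample(dataset, k)          # sample without replacement

population = list(range(1000))
subset = sample_dataset(population, 0.1)      # 100 of 1000 records
```

More sophisticated schemes (stratified or weighted sampling) are used when the subset must preserve subgroup proportions.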

Dataset

A structured collection of data examples organized in a consistent format that serves as the foundation for training, validating, or testing machine learning models.

Data Sheet

Comprehensive documentation describing a dataset’s origin, collection methodology, composition, intended uses, and potential biases to inform appropriate use in AI development.

Decision Tollgates

Specific points in the AI lifecycle where committee review and approval are required before proceeding. 

Deep Learning

A subset of machine learning that uses neural networks with many layers (deep architectures) to automatically learn hierarchical representations of data for complex tasks like image and speech recognition.

Demographic Parity

A fairness metric that aims for equal representation of different demographic groups in the outcomes of a machine learning model.
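
The metric can be checked with a short script; this is an illustrative sketch (the `selection_rates` helper is hypothetical, not a standard API):

```python
def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, decision 1 = positive outcome.
    Returns the positive-decision rate per group."""
    totals, positives = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 1)]
rates = selection_rates(decisions)  # A: 0.75, B: 0.5 -> parity not satisfied
```

Demographic parity holds when the per-group rates are (approximately) equal.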

Deployment Stage

When a validated AI system transitions from development into a live production environment where it begins serving real users and processing actual data.

Design and Development Stage

The phase of the AI lifecycle that encompasses all activities involved in creating the AI system, from architectural planning and model training to system integration and initial testing.

E

Epistemic Uncertainty

Arises from incomplete knowledge or insufficient data, and unlike aleatoric uncertainty, it can potentially be reduced through better information or improved models.

Equalized Odds

A fairness metric that aims for a model to have equal true positive rates and equal false positive rates across different demographic groups.

Equity

Ensuring that machine learning systems provide comparable opportunities or outcomes for individuals or groups, often requiring adjustments to address existing inequalities.

EU AI Act

European Union Artificial Intelligence Act, a regulation establishing a legal framework for AI in the EU; it entered into force in 2024.

EU Declaration of Conformity

A legally required document where a provider formally attests that their high-risk AI system meets all obligations under the EU AI Act.

Evaluation Bias

Bias introduced during the evaluation of machine learning models, such as using metrics that favour certain outcomes or evaluation sets that omit subgroups.

Explainable AI (XAI)

The ability of an AI system to provide understandable reasons for its decisions and predictions, enabling humans to interpret and trust outputs.

Click on the Explainable AI article to learn more.

Explainable Boosting Machines (EBM)

A type of “glass-box” machine learning model that achieves high accuracy with inherent interpretability by using additive contributions of individual features.

Explainability

An AI system’s ability to communicate the reasoning behind its decisions in terms that humans can understand, fostering trust and enabling meaningful oversight.

Explanation-Prediction Fidelity

Degree to which explanations accurately reflect the underlying decision logic and behavior of the AI model.

Explanation Stability

Consistency of AI explanations over time, ensuring repeated runs under similar conditions yield comparable explanatory results.

Exploratory Data Analysis (EDA)

The preliminary investigation of datasets using statistical summaries and visualizations to understand data characteristics, identify anomalies, and inform modeling strategies before building AI systems.
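
A minimal EDA sketch using Python's `statistics` module (the helper name and toy column are illustrative):

```python
import statistics

def summarize(values):
    """Basic exploratory summary of a numeric column."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),   # sample standard deviation
        "min": min(values),
        "max": max(values),
    }

summary = summarize([12, 15, 14, 10, 48])  # the max value hints at a possible outlier
```

Even this simple summary can flag anomalies (here, 48 sits far from the other values) before any modeling begins.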

F

Fairness

The equitable and unbiased treatment of individuals or groups, ensuring outcomes do not systematically disadvantage certain demographics.

Feature Importance Drift

Changes in the ranking or magnitude of features deemed significant by an AI model, tracked to identify shifting decision factors potentially affecting fairness or accuracy.

Few-Shot Learning

A machine learning approach where models can learn concepts from just a few labeled examples, often five or fewer per category.

Foundation Model

A large, general-purpose AI model trained on broad data that can be adapted to a wide range of downstream tasks through fine-tuning or prompting.

G

Generative AI (GenAI)

A subset of Deep Learning that focuses on training Deep Learning models in order to generate high-quality text, images, and other custom content.

Click on the What is Generative AI article to learn more. 

Glass-box Model

AI models designed with transparency and inherent interpretability, allowing humans to clearly understand the internal decision-making process without additional interpretive tools.

Gradient Descent

An optimization algorithm that iteratively adjusts a model’s parameters in the direction that most reduces the error (loss), taking repeated small steps until the model’s predictions are as accurate as possible.
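
A minimal, self-contained sketch of the idea, minimizing the toy loss f(x) = (x − 3)² whose gradient is 2(x − 3):

```python
def gradient_descent(start, learning_rate=0.1, steps=100):
    """Minimize f(x) = (x - 3)^2 by stepping against its gradient."""
    x = start
    for _ in range(steps):
        gradient = 2 * (x - 3)             # slope of the loss at the current point
        x = x - learning_rate * gradient   # step downhill
    return x

minimum = gradient_descent(start=0.0)      # converges toward x = 3
```

Real training applies the same update rule to millions of parameters at once, with the gradient computed by backpropagation.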

Gradient-weighted Class Activation Mapping (Grad-CAM)

An advanced visualization technique showing where deep neural networks focus when making predictions by computing gradients of the target class.

Ground Truth

The verified, correct answer or label for a given data sample, serving as the reference standard against which an AI model’s predictions are measured.

Guardrails (in AI)

Mechanisms and controls implemented to ensure that AI systems operate safely, ethically, and within acceptable boundaries. This includes filtering inputs and outputs.

H

Hallucination (see also Confabulation)

A phenomenon where generative artificial intelligence (GAI) systems produce confidently stated but erroneous or false content.

Click on this What are AI Hallucinations article to learn more. 

Heteronomy

The condition of an AI system that operates under external governance, meaning it requires human oversight, intervention, or approval to function and cannot act entirely on its own.

Hidden Layer

Any layer in a neural network between the input and output layers, where intermediate computations and feature transformations occur during data processing.

Human-in-the-loop (HITL)

A configuration where a human must verify or sign off on an AI-generated decision before it is acted upon.

Human-out-of-the-loop (HOOTL)

A configuration where an AI system operates and takes action without any human intervention.

Hyperparameter

A configuration setting, such as learning rate, batch size, or network depth that is set before training begins and controls how the learning algorithm operates.

I

Inception Stage

The initial phase of the AI lifecycle where the concept is defined, feasibility is assessed, and stakeholders commit to developing a proposed AI system.

Inference

The logical reasoning process by which an AI system draws conclusions, makes predictions, or generates outputs based on learned patterns, known facts, or established rules.

Input Layer

The first layer of a neural network that receives raw data and passes it to subsequent layers for processing.

Imputation

A data preprocessing technique that fills in missing values with estimated substitutes such as averages or predicted values to maintain dataset completeness without discarding records.
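
A minimal sketch of mean imputation (illustrative only; real pipelines use library transformers and may prefer median or model-based estimates):

```python
def mean_impute(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

filled = mean_impute([10, None, 20, None, 30])  # -> [10, 20.0, 20, 20.0, 30]
```

The dataset keeps its original length, so no records are discarded for having a missing field.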

Internet of Things (IoT)

An infrastructure of interconnected entities, people, systems and information resources together with services that process and react to information from the physical world and virtual world.

ISO/IEC 42001

An international standard for AI management systems, providing a framework for establishing, implementing, maintaining, and continually improving an AI management system.

J

Jailbreak

An attack technique that uses specially crafted prompts to bypass the safety guardrails and content restrictions built into AI models, particularly large language models.

K

L

Language Model

A computational construct trained using statistical methods to identify patterns in written or spoken language in order to predict or classify words, text, or speech.

Large Language Model (LLM)

A sophisticated type of generative AI designed to emulate the structure and characteristics of language to generate derived synthetic content.

Learning Rate

Determines the size of the steps taken during model optimization, balancing the trade-off between training speed and the risk of overshooting optimal solutions.
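
The trade-off can be seen on the toy loss f(x) = x², whose gradient is 2x (a sketch; the rates and step counts are arbitrary assumptions):

```python
def descend(learning_rate, start=10.0, steps=50):
    """Run gradient descent on f(x) = x^2 and return the final distance from the minimum."""
    x = start
    for _ in range(steps):
        x -= learning_rate * 2 * x   # gradient of x^2 is 2x
    return abs(x)

small = descend(0.1)   # converges toward the minimum at x = 0
large = descend(1.1)   # each step overshoots further; the run diverges
```

A rate that is too small converges slowly; one that is too large bounces past the minimum and diverges.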

Least Privilege

The principle of restricting access by allowing only the minimum set of privileges necessary for a job. 

Life Cycle

The life cycle of an AI system encompasses all stages from initial concept and design through development, deployment, operation, and eventual decommissioning or retirement.

Local Interpretable Model-Agnostic Explanations (LIME)

A post-hoc explanation method providing interpretability by perturbing individual inputs and analyzing changes in the output to produce a locally faithful, simplified explanation.

M

Machine Learning (ML)

A subset of artificial intelligence that allows computer systems to learn from data and improve their performance over time without being explicitly programmed with fixed rules. Instead of relying on prewritten instructions, machine learning models identify patterns in data to make predictions or decisions.

Click on the What is Machine Learning article to learn more.

Machine Learning Operations (MLOps)

A set of practices that automates and standardizes the entire machine learning lifecycle, from building and training models to deploying, monitoring, and managing them in production for continuous improvement. It combines machine learning development (Dev) with operations (Ops), creating reliable, scalable, and efficient workflows for delivering AI applications, much like DevOps does for software.

McNamara Fallacy

The cognitive error of over-relying on quantitative metrics while ignoring qualitative factors that may be equally or more important for sound decision-making.

Measurement Bias

Bias arising from how data is collected, recorded, or used as a proxy, leading to systematic inaccuracies for certain groups.

Model 

A simplified representation, whether mathematical, statistical, or logical, of a real-world system or phenomenon that allows AI to simulate, predict, or understand complex behaviors.

Model Context Protocol (MCP)

An open standard, like a universal translator, that lets Large Language Models (LLMs) connect and communicate with external data, tools (like search, databases, calculators), and services, moving them beyond their training data to access real-time info and perform actions.

Click on the What is Model Context Protocol (MCP) article to learn more.

Model Card

A transparent document providing information on a model’s purpose, training data, capabilities, and performance metrics.

Click on the What is a Model Card article to learn more. 

Multimodal Model

An AI system that processes and understands multiple types of data (modalities) simultaneously, like text, images, audio, and video, to perform complex tasks, mimicking human perception for deeper context and more holistic understanding.

N

Narrow AI

Systems designed and optimized to excel at specific, well-defined tasks, such as image recognition or language translation, rather than possessing general-purpose intelligence.

Natural Language Processing (NLP)

A field of AI focused on enabling computers to understand, interpret, generate, and respond to human language in meaningful and useful ways.

Neural Network

A computing system inspired by the human brain, consisting of interconnected nodes (neurons) organized in layers that process information and learn patterns from data.

NIST AI RMF

National Institute of Standards and Technology Artificial Intelligence Risk Management Framework, a voluntary framework for managing risks related to AI.

Notified Body

An accredited organization authorized to conduct independent conformity assessments of high-risk AI systems under EU regulations.

Notifying Authority

A national regulatory body responsible for accrediting and monitoring conformity assessment organizations under the EU AI Act framework.

O

Operation and Monitoring Stage

Involves ongoing oversight of a deployed AI system, including performance tracking, incident response, and continuous improvement activities.

Output Layer

The final layer of a neural network that produces the model’s predictions, classifications, or other results based on the processed information.

Overfitting

Occurs when a model learns the training data too precisely, including its noise and anomalies, resulting in excellent training performance but poor generalization to new data.

P

Personally Identifiable Information (PII)

Any data that can be used to identify a specific individual, such as names, addresses, social security numbers, or biometric data, requiring careful protection under privacy regulations.

Predictive Parity

A fairness metric that aims for a model to have similar positive predictive values across different demographic groups.

Progressive Disclosure

Interface design approach revealing AI explanations incrementally, starting from simple explanations for general users and progressively providing detailed information for advanced users upon request.

Prompt Engineering

The practice of designing, refining, and optimizing instructions (prompts) to guide AI models, especially Large Language Models (LLMs), to generate desired, accurate, and useful outputs, acting as a translator between human intent and AI understanding by providing clear context, examples, and structure to steer the model’s behavior.

Prompt Injection

An attack technique where an adversary supplies a text prompt crafted to cause a chatbot or LLM-based application to perform unintended or unauthorized actions.

Protected Characteristics

Attributes of individuals or groups that are legally protected from discrimination, such as race, ethnicity, gender, religion, disability, age, and sexual orientation.

Proxy Variable

A feature in a dataset that is correlated with a protected characteristic and can unintentionally introduce bias if used in a model.

Q

Quality Management System (QMS)

A formalized framework of policies, processes, procedures, and resources that an organization uses to consistently meet customer requirements, regulatory standards, and enhance satisfaction by improving efficiency, reducing waste, and driving continuous improvement.

R

Recourse Mechanisms

Clear, actionable procedures enabling users to challenge, appeal, or inquire about AI system decisions, critical for regulatory compliance (EU AI Act Article 13, GDPR).

Regression Model

A machine learning model that predicts continuous numerical values, such as forecasting sales figures or estimating property prices.
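
The simplest regression model is a straight line fit by ordinary least squares; this closed-form sketch uses toy housing data (the numbers are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data: square footage vs price, generated as price = 100 * sqft + 50000
slope, intercept = fit_line([1000, 1500, 2000], [150000, 200000, 250000])
```

Once fitted, the model predicts a continuous price for any new square footage via `slope * sqft + intercept`.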

Reinforcement Learning

The learning of an optimal sequence of actions to maximize a reward through environment interactions.

Reinforcement Learning from Human Feedback (RLHF)

A training technique that uses human preferences and feedback to fine-tune language models, aligning their outputs with human values and expectations.

Reliability

An AI system’s consistency in producing expected, correct results repeatedly over time under normal operating conditions.

Representation Bias

A type of bias that occurs when certain groups are under-represented or over-represented in the training data, leading to poor performance for the under-represented groups.

Resilience

An AI system’s capacity to quickly recover from failures, attacks, or disruptions and return to normal operational status with minimal impact.

Responsible AI

The approach of creating, implementing, and utilizing AI systems with a focus on positively impacting employees, businesses, customers, and society as a whole, ensuring ethical intentions and fostering trust, which in turn enables companies to confidently scale their AI solutions.

Retirement Stage

The final lifecycle phase focused on safely decommissioning an AI system, preserving necessary records, migrating users, and responsibly disposing of associated resources.

Retrieval Augmented Generation (RAG) 

A technique that retrieves relevant data from sources outside a foundation model and adds it to the prompt as context, grounding the model’s output in up-to-date or domain-specific information.

Risk Card

Systematically documents potential risks associated with an AI model, including failure modes, misuse scenarios, and mitigation strategies.

Risk Management Framework

A structured approach to identifying, assessing, and controlling potential risks within an organization.

Robotics

The interdisciplinary field concerned with the design, construction, programming, and operation of robots for applications ranging from manufacturing to healthcare to exploration.

Robustness

An AI system’s ability to maintain consistent, reliable performance even when faced with unexpected inputs, environmental changes, or adversarial conditions.

S

Saliency Maps

Visual representations highlighting areas in the input data (images, text) most influential in a model’s decision-making, typically used in computer vision and natural language processing.

Scenario Planning

A risk management technique that explores hypothetical situations where an AI system might fail, be misused, or cause unintended harm to prepare appropriate responses.

Semi-Supervised Machine Learning

Machine learning that uses both labelled and unlabelled data during training, typically a small amount of labelled data combined with a much larger amount of unlabelled data.

Sentiment Analysis

An NLP technique that identifies and extracts subjective information from text, determining whether content expresses positive, negative, or neutral opinions.
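
A toy lexicon-based scorer illustrates the idea (real systems use trained models; the word lists here are invented for the sketch):

```python
def sentiment(text):
    """Classify text as positive, negative, or neutral by counting opinion words."""
    positive = {"good", "great", "excellent", "love"}
    negative = {"bad", "terrible", "awful", "hate"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = sentiment("I love this great product")  # -> "positive"
```

Lexicon approaches are fast and transparent but miss negation and sarcasm, which is why modern sentiment analysis uses learned models.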

SHapley Additive exPlanations (SHAP)

A unified measure of feature importance based on cooperative game theory, offering local (individual prediction) and global (model-wide) explanations.

Small Language Models (SLM)

AI models that process and generate human language but are significantly smaller and more efficient than large language models (LLMs), featuring fewer parameters (millions to billions vs trillions).

Stakeholder Engagement

Documented process of involving affected stakeholders (users, regulators, consumers) throughout the AI development lifecycle to ensure explanations meet diverse needs and foster trust.

Statement of Applicability

A compliance document that lists all relevant security and governance controls, explaining which are implemented and justifying any exclusions.

Subsymbolic AI

An approach to AI, including neural networks, that learns patterns implicitly from raw data rather than using explicit rules, making it powerful for complex tasks but often harder to interpret.

Supervised Machine Learning

Machine learning that uses only labelled data during training.

Symbolic AI

An approach to AI that uses explicit rules, logic, and human-readable symbols to represent knowledge and perform reasoning, making its decision-making process more interpretable and traceable.

T

Temperature (AI Temperature)

A hyperparameter in generative AI that controls the randomness, creativity, and predictability of the model’s output. It adjusts the probability distribution of potential next words, making the model more or less likely to choose less common options. 
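
The mechanism is a temperature-scaled softmax over the model’s raw scores (logits); a self-contained sketch with invented logit values:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into a probability distribution, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cool = softmax_with_temperature(logits, 0.5)  # sharper: the top token dominates
warm = softmax_with_temperature(logits, 2.0)  # flatter: sampling is more random
```

Low temperature concentrates probability on the likeliest next token (more predictable output); high temperature flattens the distribution (more varied output).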

Test, Evaluation, Verification, & Validation (TEVV)

Structured approach comprising testing, evaluating performance, verifying system correctness, and validating that systems meet intended purposes and regulatory requirements.

Token

The basic unit of text that language models process, which can represent a word, subword, character, or punctuation mark depending on the tokenization method.

Tokenization

The process of breaking text into smaller units (tokens) that a language model can process, converting human-readable text into numerical representations.
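
For illustration, a toy word-level tokenizer (a sketch; production models use subword schemes such as BPE, and the `-1` unknown id is an assumption of this example):

```python
def build_vocab(corpus):
    """Assign each unique word in the corpus a numeric id."""
    vocab = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    """Map each word to its id; unknown words get -1."""
    return [vocab.get(word, -1) for word in text.lower().split()]

vocab = build_vocab("the cat sat on the mat")
ids = tokenize("the mat sat", vocab)  # -> [0, 4, 2]
```

The resulting id sequence is the numerical form the model actually processes.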

Training Data

The labeled dataset used to teach a machine learning model by exposing it to examples from which it learns patterns and relationships.

Training Model

The process of determining or improving the parameters of a machine learning model, based on a machine learning algorithm, using training data.

Transparency (Organization)

Companies openly communicate their AI-related activities, decisions, and policies to stakeholders in a clear and accessible manner.

Transparency (System)

Relevant technical information, including design choices, capabilities, limitations, and performance metrics is made available to users and stakeholders.

Trustworthiness

The demonstrated ability of an AI system to reliably meet stakeholder expectations through verifiable evidence of safety, fairness, and performance.

U

Underfitting

Occurs when a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and new data.

Unsupervised Machine Learning

Machine learning that uses only unlabelled data during training.

V

W

Watermarking

Embeds hidden, detectable signals into AI-generated content, such as text, images, or audio, enabling identification of synthetic content and supporting authenticity verification.

X

XAI (Explainable AI)

The ability of an AI system to provide understandable reasons for its decisions and predictions, enabling humans to interpret and trust outputs.

Click on the Explainable AI article to learn more.

Y

Z

Zero-Shot Learning

A type of machine learning technique where a model is able to recognize and classify objects or perform tasks it has never seen before, based on the knowledge it has learned from other related tasks or objects.