Alignment

The challenge of ensuring that AI systems behave in ways consistent with human values, intentions, and expectations—particularly in domains where those values are subjective or contested.

Artificial Intelligence (AI)

The development of computer systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language understanding.

Cognitive Science

The interdisciplinary study of the mind and its processes, drawing on psychology, neuroscience, linguistics, philosophy, and computer science.

Explainability

The degree to which the internal workings or outputs of an AI system can be understood by a human. Also called interpretability or transparency.

Human-Robot Interaction (HRI)

The study of how humans and robots communicate, collaborate, and coexist, with attention to social, cognitive, and physical dimensions of their interactions.

Machine Learning

A subset of AI in which systems learn from data and improve their performance on specific tasks without being explicitly programmed for each case.
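The phrase "learn from data without being explicitly programmed" can be made concrete with a minimal sketch (illustrative only, not tied to any particular system): fitting a straight line to example points by ordinary least squares, so the prediction rule is estimated from the data rather than hard-coded. The function and variable names here are hypothetical.

```python
def fit_line(xs, ys):
    """Estimate slope and intercept from (x, y) examples by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The "learning" step: parameters are computed from the data itself.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Given examples where y = 2x, the system recovers that rule from data alone.
slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```

More data would refine the estimated parameters, which is the sense in which such systems "improve their performance" with experience.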

Multimodal Interaction

Communication between humans and AI systems that involves multiple channels or modes, such as speech, text, gesture, vision, and touch.

Natural Language Processing (NLP)
(NLP)

A field of AI focused on enabling computers to understand, interpret, and generate human language in useful ways.

Participatory Design

A design approach that actively involves the people who will be affected by a system in the process of designing it, ensuring their needs, values, and perspectives shape the outcome.

Trust Calibration

The process by which a person adjusts their level of trust in an AI system based on its performance, transparency, and reliability over time.
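One simple way to model this adjustment-over-time process (a hypothetical update rule for illustration, not a method from the glossary) is an exponential moving average: each observed success or failure nudges the trust estimate toward the system's demonstrated reliability.

```python
def update_trust(trust, outcome, rate=0.2):
    """Nudge a trust estimate (0.0-1.0) toward an observed outcome.

    outcome: 1.0 if the AI system performed correctly, 0.0 if it failed.
    rate: how strongly a single observation shifts the estimate.
    """
    return trust + rate * (outcome - trust)

# Starting from neutral trust, repeated successes raise the estimate,
# while a failure pulls it back down.
trust = 0.5
trust = update_trust(trust, 1.0)  # success
trust = update_trust(trust, 1.0)  # success
trust = update_trust(trust, 0.0)  # failure
```

Well-calibrated trust, in this toy model, is trust that converges toward the system's actual success rate rather than remaining too high or too low.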