The ultimate AI glossary to help you navigate our changing world

Ever get bogged down by confusing AI terms?

In the past year, countless AI-infused products and services have become available, offering a dizzying variety of features frequently wrapped in hard-to-discern jargon.

With this handy glossary, you’ll know the difference between AI and AGI, what really happens when ChatGPT “hallucinates,” and what it means when you hear GPT-4 described as an LLM with a transformer architecture built on deep neural networks. Let’s dive in.

Agent

An agent, in the context of AI, is a model or software program that can autonomously perform some kind of task. Examples of agents range from smart home devices that control temperature and lighting, to sensors in robot vacuums and driverless cars, to chatbots like ChatGPT that learn and respond to user prompts. Autonomous agents that carry out complex tasks are often cited as examples of what the next leap forward in AI might look like.

AGI (Artificial general intelligence) 

AGI is a type of program or model that has the full intellectual capabilities of a human, i.e. general intelligence. AGI has abilities like reasoning, common sense, abstract knowledge, and creativity. Essentially, it can perform tasks autonomously, without human instruction. True AGI doesn’t exist yet, but some experts believe it could be achieved in the near future (although opinions vary widely on exactly when). Companies like OpenAI, DeepMind, and Anthropic are committed to trying to create AGI.
See also: Strong AI

Algorithm 

An algorithm is a set of rules or instructions for a computer program to follow. In the context of AI, algorithms are the basis for building its intelligence. Think of an action within the human brain being broken down into a series of steps. Algorithms mimic that process by building a series of if-then statements.
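A series of if-then statements can be sketched in a few lines of code. Here’s a toy example (the “smart thermostat” and its temperature cutoffs are made up for illustration):

```python
# A toy "smart thermostat" algorithm: a fixed series of if-then rules
# that turns an input (temperature) into a decision.
def thermostat(temperature_f):
    if temperature_f < 65:
        return "heat on"
    elif temperature_f > 75:
        return "cooling on"
    else:
        return "idle"

print(thermostat(60))  # heat on
```

Every possible input follows the same fixed steps, which is what makes an algorithm predictable and repeatable.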

Alignment

Alignment refers to how well an AI achieves goals that aren’t explicitly stated in a prompt or request, such as accuracy, security, and harm prevention. If an AI is misaligned, it veers away from its intended uses and applications, i.e. it gives wrong or inappropriate responses. Alignment is a big part of the ethics conversation, because a poorly aligned model has the potential to spread misinformation, create cybersecurity threats, and share dangerous or harmful information.

AI (Artificial intelligence)

AI is the blanket term for technology that can automate or execute certain tasks designed by humans. Lately, when people discuss AI (“AI is going to destroy humanity” or “AI is going to replace our jobs”) they’re talking about AGI and generative AI. But AI is a huge concept that encompasses many technologies we’ve been using for years, such as algorithms that recommend content or products, self-driving cars, or voice assistants. 

Black box

Certain AI models are sometimes referred to as black boxes, which means users are unable to see or understand the inner workings of the technology. The black box problem has become especially relevant in the generative AI conversation, since businesses like OpenAI and Google are notoriously secretive about how their models work. But also, because generative AI is somewhat autonomous, even the developers don’t fully understand how a model generates its outputs. As ethicists and policymakers call for more accountability and transparency from AI businesses, cracking open these black boxes has become increasingly important.

Chatbot

A chatbot is a type of program or model that can have a conversation with a human, like ChatGPT, but the term can also refer to customer service chatbots that provide alternatives to talking to a customer service rep on the phone or over text. Chatbots like ChatGPT, Bard, Bing, and Character.AI have all received attention for their ability to hold sophisticated human-like conversations with users, but chatbots have been around for a while. ELIZA is considered the first chatbot, developed in 1966 by MIT scientist Joseph Weizenbaum.

Deep learning

Deep learning is a subset of machine learning that mimics the way humans learn. Powered by neural networks, deep learning uses multiple layers of algorithms to understand complex and abstract concepts, like conversations and images. Applications of deep learning include facial recognition technology, chatbots like ChatGPT, and driverless cars.

Diffusion model

A diffusion model is a machine learning model that generates new data resembling the data it was trained on. In technical terms, it is a Markov chain trained using variational inference; those are mathematical methods for predicting sequences and approximating patterns within large amounts of data. But what you need to understand is that diffusion models are what make AI image generation possible: the model learns to gradually remove noise from an image until a coherent picture emerges. Stable Diffusion, OpenAI’s DALL-E, and Midjourney are all examples of products that use diffusion models.

Generative AI

Thanks to OpenAI’s launch of ChatGPT, generative AI has entered the mainstream. Generative AI is a type of AI that can create text, images, video, audio, and code based on prompts from a user. Generative AI is what powers AI chatbots like ChatGPT.

It works by initially learning patterns from data (see Training below), and continues to learn as it builds on new data from prompts. Generative AI typically exists in the form of a chat interface, like ChatGPT, Bing, and Bard, so it can have a back-and-forth conversation with the user. The launch of ChatGPT created such a frenzy because it was an easy and accessible way for people to understand and harness the capabilities of generative AI. For all of its benefits, generative AI’s widespread use has dangers since it tends to hallucinate, or confidently make things up. 

Unlike simpler forms of AI, generative AI is able to create entirely new content from training data, as opposed to a sensor with finite objectives or a voice assistant that parrots pre-existing information.

Ethicists and policymakers have called for the regulation of generative AI because of its potential to spread misinformation, entrench biases, or enable cybercrime. Generative AI models also use datasets scraped from the web, which has raised concerns about privacy and copyright violations. Its ability to quickly generate content and automate tasks has also raised concerns about replacing jobs, particularly in the media and entertainment industries.

GPU (Graphics processing unit)

A GPU is a powerful chip or graphics card able to process many complicated computations in parallel. GPUs were initially developed to process images and graphics, as the name suggests, but they’ve been adapted for AI because they can handle the massive amount of computing power required by machine learning. By one estimate, ChatGPT runs on some 20,000 GPUs and will eventually need 30,000 graphics cards to power its model.

Hallucination

Generative AI, particularly text-based chatbots, has a tendency to make things up. It’s described as “hallucination,” because generative AI can sometimes go off on a total tangent, confidently talking about something that isn’t true.

For example, a generative AI chatbot might hallucinate by saying Steve Jobs was a close-up magician, popular in Las Vegas during the Rat Pack era. But more commonly (and worryingly), generative AI chatbots hallucinate subtly, by mixing in fact with fiction. For example, it might say Steve Jobs was the founder of Apple (true), who oversaw the launch of the iPhone (true), for which he was declared Time‘s Person of the Year (not true). 

This happens because generative AI models work by predicting the next word based on its probabilistic relationship to the words that came before. They aren’t capable of understanding what they’re generating. Let that be a reminder that ChatGPT might act sentient, but it’s not.
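That word-by-word prediction can be pictured with a toy sketch like the one below. The vocabulary and probabilities here are invented for illustration; a real model learns billions of such relationships, but the key point is the same: it only knows which words tend to follow others, not which statements are true.

```python
import random

# Toy next-word model: probabilities "learned" from imaginary training text.
# Note there is nothing here about truth -- only about what words tend to
# follow other words.
next_word_probs = {
    "Steve Jobs was": {"the": 0.6, "a": 0.4},
    "the": {"founder": 0.7, "CEO": 0.3},
}

def pick_next(context):
    # Sample the next word according to its learned probability.
    words, weights = zip(*next_word_probs[context].items())
    return random.choices(words, weights=weights)[0]

print(pick_next("the"))  # "founder" or "CEO", chosen by probability
```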

Jailbreaking

Jailbreaking a chatbot is getting it to do something outside of its intended uses. Using a certain kind of prompt, a jailbreak enables the user to bypass rules or guardrails, essentially tricking it into doing something it’s not supposed to do according to its alignment. Jailbreaking can range from making a chatbot say offensive or inappropriate stuff just for fun, to making it share dangerous and actionable information, like how to make napalm. 

Large language model (LLM)

A large language model is an AI software program that is trained on vast amounts of data to understand and generate text. LLMs string together sentences by predicting the next word based on probability. Since LLMs have been trained on so much data (huge swaths of the internet), they are very successful at generating human-like text using this method. OpenAI’s GPT models, Google’s PaLM models, and Meta’s Llama models are all examples of LLMs. GPT-3.5 and GPT-4 power ChatGPT, while PaLM 2 powers Bard.

Licensed data

Licensed data is information from the web that is purchased or accessed by a business or organization for the purpose of training AI. You might hear instances of businesses saying they trained their models using licensed data. That’s to say the data was legally obtained.

The issue of licensed data has come up a lot recently due to the mass amounts of data needed to train AI models like ChatGPT. It gets murky in legal terms, because of the argument about what constitutes public domain, the original creator’s intent, and how businesses should be allowed to use that data.

Machine learning

Machine learning is a method within artificial intelligence where a model is trained on data to learn and improve over time. Machine learning models use data to recognize patterns, classify information, and make predictions. Examples include filtering out spam emails (classification learning), using housing data to predict the price of a house (regression learning), or identifying images of dogs (deep learning).

AI and machine learning are terms often used interchangeably, but machine learning is a subset of AI that’s defined by being trained on data to build its intelligence.

Model

You’ve likely heard this term thrown around a lot in relation to AI. A model is a program or algorithm trained to perform a specific task. “AI model” is a general term for a program designed to replicate and/or automate certain tasks, usually by learning patterns from data.

Natural language processing (NLP)

The reason why ChatGPT’s responses read as so eerily human is because of natural language processing. The term refers to the discipline of training a model on text and speech so that it understands and expresses itself like a human might. Natural language processing also involves linguistics research so that models can understand the complexities and nuances of language. 

Neural network

Inspired by the way the human brain works, neural networks are algorithms composed of artificial “neurons,” or nodes, that communicate with each other. Each connection between two neurons carries a value called a “weight,” which determines how strongly one neuron’s signal influences another. A neuron “fires” when its weighted inputs cross a certain threshold, passing information along to other neurons in the network. Neural networks power deep learning.
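A single artificial neuron is simple enough to sketch in a few lines. The inputs, weights, and threshold below are arbitrary numbers chosen for illustration; in a real network there are millions of neurons, and the weights are learned during training rather than set by hand:

```python
# A toy artificial neuron: weighted sum of inputs, then a threshold check.
def neuron(inputs, weights, threshold):
    # Each input is multiplied by its connection's weight and summed.
    total = sum(x * w for x, w in zip(inputs, weights))
    # The neuron "fires" (outputs 1) only if the sum clears the threshold.
    return 1 if total >= threshold else 0

print(neuron([1.0, 0.5], [0.8, 0.4], threshold=0.9))  # 1 (fires)
```

Stack thousands of these in layers, with the output of one layer feeding the next, and you have a neural network.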

Open-source

Open-source means the source code of a software program is open and free to the public (whereas a black box is closed). That means developers can use, modify, and build their own products with it. Open-source AI models are seen as a way of democratizing the development of AI, which is often shrouded in secrecy. Unlike Google’s and OpenAI’s models, which are closed-source, Meta recently released an open-source LLM, Llama 2. Other open-source models include Falcon, MPT, and RedPajama.

Parameter

A parameter is a variable in an LLM that can be weighted or adjusted during training to determine a specific outcome. You may have heard about parameters in relation to how powerful an LLM is; GPT-4, for example, reportedly has about 1.7 trillion parameters, though OpenAI hasn’t confirmed a figure. The more parameters an LLM has, the more complex it is and the more capacity it has to learn.

Think of parameters as settings on a high-quality camera. On the camera you can adjust light, film speed, zoom, aperture, change out the lenses, etc. and every configuration produces a slightly different result. Now multiply that by billions or trillions, and that’s what parameters do.

Prompt

A prompt is a request or question that a user sends to a chatbot. There’s a whole subculture devoted to coaxing the best responses out of large language models. Whether it’s for code generation, jailbreaking, or just getting the answer you’re looking for, good prompts depend on clarity, conciseness, context, and intent.
See also: Prompt engineer.

Prompt engineer

With the rise of generative AI chatbots, there’s suddenly demand for expertise in crafting the right prompts. That’s where prompt engineering comes in. A prompt engineer is someone with deep knowledge of LLMs who can craft effective prompts for different purposes. That could mean ensuring the chatbot correctly understands a request, or probing the model for threats and vulnerabilities.

Prompt injection attack

The rise of LLMs has led to a new kind of cyberattack called prompt injection attacks. Prompt injection, similar to jailbreaking, is the act of using a carefully crafted prompt to manipulate models like ChatGPT for nefarious purposes. Through prompt injection, hackers exploit a vulnerability to make a chatbot reveal confidential information or bypass the model’s guardrails. Attackers can do this directly, by interacting with the chatbot, or indirectly, by hiding a prompt within a plugin or webpage, to secretly access personal or payment information.

Recommendation algorithm/system

Before the rise of ChatGPT, AI was already a big part of our lives. One of the more ubiquitous examples of this is the recommendation algorithm (or system). It’s a term for a machine learning algorithm that makes recommendations based on user data and behavior. Your recommended shows on Netflix, products on Amazon, videos on YouTube and TikTok, and posts on Instagram are some examples of recommendation algorithms at work. 

Strong AI

Strong AI is another term for AGI or artificial general intelligence. It is a theoretical (for now) form of artificial intelligence that can autonomously “think” and act like a human.

Token

A token is a unit of information within a large language model. It can be a word, part of a word, punctuation mark, or a segment of code — essentially the most fundamental form of something with meaning. When you hear that an LLM is trained on a certain number of tokens or a pricing model costs a certain number of cents per 1,000 tokens, this is what it’s referring to. 
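To make this concrete, here’s a deliberately naive tokenizer that splits text into word and punctuation pieces. Real LLM tokenizers (byte-pair encoding and the like) also break words into subword chunks, so actual token counts differ, but the idea of chopping text into small meaningful units is the same:

```python
import re

# Naive illustration of tokenization: split text into word and
# punctuation "tokens". Real LLM tokenizers use learned subword
# vocabularies, so their token boundaries and counts differ.
def toy_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("AI isn't magic."))
# ['AI', 'isn', "'", 't', 'magic', '.']
```

Note how even this crude splitter turns one short sentence into six tokens, which is why token counts grow faster than word counts.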

Training

Training is the process of feeding data to a machine learning model. There are two types of training: supervised learning and unsupervised learning. Supervised learning is training a model with data that has already been labeled or classified in some way, whereas unsupervised learning uses unlabeled data which forces it to learn patterns and associations on its own. Each type of training has its own strengths and weaknesses. LLMs like GPT-4 use a combination of both unsupervised and supervised learning. 
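Supervised learning can be boiled down to a toy sketch like this one. The “spam” data and the threshold-finding rule are invented for illustration; real training adjusts millions of parameters, but the shape is the same: labeled examples in, a decision rule out.

```python
# Supervised learning in miniature: each example is (suspicious_word_count,
# label), where label 1 = spam and 0 = not spam. The labels are given,
# which is what makes this "supervised".
labeled = [(2, 0), (3, 0), (9, 1), (11, 1)]

def train_threshold(data):
    # "Training" here just finds the midpoint between the highest
    # non-spam count and the lowest spam count.
    spam = [x for x, y in data if y == 1]
    ham = [x for x, y in data if y == 0]
    return (max(ham) + min(spam)) / 2

threshold = train_threshold(labeled)
print(threshold)  # 6.0

def predict(word_count):
    return 1 if word_count > threshold else 0
```

In unsupervised learning, by contrast, the labels (the 0s and 1s above) would be missing, and the model would have to discover the two groups on its own.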

Training data

Training data is the data used to train a machine learning model. Training data for LLMs consists of huge collections of data called datasets, largely scraped from the internet.

There are public datasets like Common Crawl and the Wikipedia database, and private or proprietary datasets gathered by businesses like Google that have the resources to do so. One such dataset, MassiveWeb, was created by Google-owned DeepMind and includes social media and blogging sites like Reddit, Facebook, YouTube, and Medium. So if you’ve ever posted on one of those sites, your data may well have been used to train Bard.

Transformer 

Did you know that GPT (as in ChatGPT) stands for generative pre-trained transformer? It’s not just a tongue-tying acronym. A transformer is a type of neural network architecture that underpins the deep learning models used for generative AI. It works by embedding words (tokens) with context, using a “self-attention” mechanism that weighs how relevant every other word in the input is, which enables the model to predict what the next word might be. Without it, the model would just perceive words as isolated bits of data with no association to each other.

The transformer architecture, which kicked off the development of products like ChatGPT, originates from “Attention Is All You Need,” a 2017 paper by Google and University of Toronto researchers.
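The self-attention mechanism at the heart of the transformer can be sketched in miniature. This is a bare-bones version with made-up two-dimensional “word vectors”; real transformers add learned projections, multiple attention heads, and many stacked layers on top of this core idea:

```python
import math

def softmax(xs):
    # Turn raw scores into weights that are positive and sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Scaled dot-product self-attention, stripped to its core: every word
# vector "attends" to every other, and each output is a weighted blend
# of all the inputs, i.e. a context-aware version of each word.
def self_attention(vectors):
    d = len(vectors[0])
    out = []
    for q in vectors:
        # Similarity of this word to every word (including itself).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        # Blend all word vectors according to those attention weights.
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(d)])
    return out
```

Each output row leans most heavily on the input words it scored as most similar, which is how context gets baked into every token’s representation.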
