
An application programming interface (API) is a way to programmatically access (usually external) models, data sets, or other pieces of software.
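For illustration, a typical API call sends a structured request over the network and reads back a structured response; the endpoint, key, and field names in this sketch are hypothetical, not a specific service.

```python
# Minimal sketch of calling an external model through an API.
# The URL, credential, and JSON fields below are hypothetical placeholders.
import requests

API_URL = "https://api.example.com/v1/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize this meeting transcript.", "max_tokens": 100},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the service's structured reply, typically including generated text
```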

Artificial intelligence (AI) is the ability of software to perform tasks that traditionally require human intelligence.

Deep learning is a subset of machine learning that uses deep neural networks, which are layers of connected “neurons” whose connections have parameters or weights that can be trained. It is especially effective at learning from unstructured data such as images, text, and audio.
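As a rough sketch (assuming PyTorch is available), a deep neural network is simply a stack of such layers of connected neurons, each layer holding trainable weights:

```python
# A small "deep" network: stacked layers whose connection weights are the
# trainable parameters the definition above refers to.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # layer 1: 16 inputs feed 32 hidden neurons
    nn.ReLU(),
    nn.Linear(32, 32),  # layer 2
    nn.ReLU(),
    nn.Linear(32, 2),   # output layer, e.g. two classes
)

x = torch.randn(4, 16)  # a batch of 4 example inputs
print(model(x).shape)   # torch.Size([4, 2])
print(sum(p.numel() for p in model.parameters()), "trainable weights")
```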

Fine-tuning is the process of adapting a pretrained foundation model to perform better at a specific task. This entails a relatively short period of training on a labeled data set that is much smaller than the data set the model was initially trained on. This additional training allows the model to learn and adapt to the nuances, terminology, and specific patterns found in the smaller data set.
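A minimal sketch of that adaptation step, assuming PyTorch; `pretrained_model` and the small labeled data set below are placeholders standing in for a real foundation model and a task-specific data set:

```python
# Fine-tuning sketch: continue training a (placeholder) pretrained model briefly
# on a small labeled data set so it adapts to the task's patterns.
import torch
import torch.nn as nn

pretrained_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(pretrained_model.parameters(), lr=1e-4)  # small learning rate
loss_fn = nn.CrossEntropyLoss()

# A labeled data set far smaller than the original pretraining data.
features = torch.randn(64, 16)
labels = torch.randint(0, 2, (64,))

for epoch in range(3):  # a relatively short period of training
    optimizer.zero_grad()
    loss = loss_fn(pretrained_model(features), labels)
    loss.backward()
    optimizer.step()
```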

Foundation models (FMs) are deep learning models trained on vast quantities of unstructured, unlabeled data that can be used for a wide range of tasks out of the box or adapted to specific tasks through fine-tuning. Examples include GPT-4, PaLM, DALL·E 2, and Stable Diffusion.

Generative AI is AI that is typically built using foundation models and has capabilities that earlier AI did not have, such as the ability to generate content. Foundation models can also be used for non-generative purposes (for example, classifying user sentiment as negative or positive based on call transcripts) while offering significant improvement over earlier models. For simplicity, when we refer to generative AI in this article, we include all foundation model use cases.

Graphics processing units (GPUs) are computer chips that were originally developed for producing computer graphics (such as for video games) and are also useful for deep learning applications. In contrast, traditional machine learning and other analyses usually run on central processing units (CPUs), normally referred to as a computer’s “processor.”

Large language models (LLMs) make up a class of foundation models that can process massive amounts of unstructured text and learn the relationships between words or portions of words, known as tokens. This enables LLMs to generate natural language text, performing tasks such as summarization or knowledge extraction. GPT-4 (which underlies ChatGPT) and LaMDA (the model behind Bard) are examples of LLMs.
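A toy sketch of tokenization (the subword split and vocabulary here are invented for illustration; real LLM tokenizers, such as byte-pair encoding, are more sophisticated):

```python
# Text becomes a sequence of tokens, which are mapped to integer IDs;
# these ID sequences are what a language model actually processes.
text = "Summarization is useful"
tokens = ["Summar", "ization", " is", " useful"]  # illustrative subword split

vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
token_ids = [vocab[tok] for tok in tokens]

print(tokens)     # ['Summar', 'ization', ' is', ' useful']
print(token_ids)  # [2, 3, 0, 1]
```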

Machine learning (ML) is a subset of AI in which a model gains capabilities after it is trained on, or shown, many example data points. Machine learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt and can become more effective in response to new data and experiences.
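A minimal sketch of that idea, assuming scikit-learn; the data points and task are invented for illustration:

```python
# The model receives only example data points and labels, never explicit rules,
# and learns a pattern it can apply to new data.
from sklearn.tree import DecisionTreeClassifier

# Example data points: [hours_studied, hours_slept] and whether the exam was passed.
X = [[8, 7], [1, 4], [6, 8], [2, 5], [7, 6], [0, 6]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[5, 7]]))  # prediction for a new, unseen data point
```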

MLOps refers to the engineering patterns and practices to scale and sustain AI and ML. It encompasses a set of practices that span the full ML life cycle (data management, development, deployment, and live operations). Many of these practices are now enabled or optimized by supporting software (tools that help to standardize, streamline, or automate tasks).

Prompt engineering refers to the process of designing, refining, and optimizing input prompts to guide a generative AI model toward producing desired (that is, accurate) outputs.
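For illustration, refining a prompt often means adding role, context, format, and constraints; the `generate` call below is a hypothetical stand-in for any generative model API, and the figures are invented:

```python
# Two versions of the same request: a vague prompt and a refined one that
# specifies role, input data, output format, and success criteria.
vague_prompt = "Tell me about our sales."

refined_prompt = (
    "You are a financial analyst. Using the quarterly figures below, "
    "summarize the revenue trend in exactly three bullet points and flag any "
    "quarter-over-quarter decline greater than 5%.\n\n"
    "Q1: 1.2M  Q2: 1.4M  Q3: 1.1M  Q4: 1.5M"  # illustrative figures
)

# response = generate(refined_prompt)  # hypothetical model call; the refined
#                                      # prompt constrains the output far more tightly
```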

Structured data are tabular data (for example, organized in tables, databases, or spreadsheets) that can be used to train some machine learning models effectively.
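A small sketch of what “tabular” means in practice, assuming pandas; the columns and values are invented:

```python
# Structured data: named columns with consistent types, ready for classical
# machine learning models (here, "churned" is a label a model could predict).
import pandas as pd

table = pd.DataFrame(
    {
        "customer_id": [101, 102, 103],
        "age": [34, 29, 51],
        "monthly_spend": [120.50, 80.00, 210.25],
        "churned": [0, 1, 0],
    }
)
print(table)
```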

Transformers are key components of foundation models. They are artificial neural networks that use special mechanisms called “attention heads” to understand context in sequential data, such as how a word is used in a sentence.
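A compact sketch of the scaled dot-product attention that such attention heads compute, in plain NumPy; the shapes and values are illustrative:

```python
# Each position in a sequence attends to every other position: queries are
# compared with keys, the scores are normalized, and the result weights the values.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                                       # context-aware mix of values

seq_len, d_model = 4, 8  # e.g. 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```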

Unstructured data lack a consistent format or structure (for example, text, images, and audio files) and typically require more advanced techniques to extract insights.
