
The Primer: A.I. Terms You’ve Been Pretending to Know (But Now You Do)

Your home has electric lights in it. To use them, you don’t need to know how light bulbs, electricity, or power plants work — you just need to know how to flick a switch, replace a light bulb, and maybe flip a circuit breaker every once in a while.

Similarly, you don’t need a PhD in computer science to start putting A.I. to work.

By now, you’ve encountered terms like Artificial Intelligence (A.I.), Machine Learning, Deep Learning, and Natural Language Processing (NLP). If you’re reading this, you’re hungry to disambiguate these concepts and cut to the chase.

And OpenAI’s recently viral ChatGPT demo has catapulted curiosity around Generative AI applications — creating both certainty that A.I. is ready to use right now, and confusion about what exactly A.I. is.

So this article is meant to provide a primer on need-to-know A.I. concepts and terminology — explaining Pienso’s niche along the way. (Don’t worry, we’ll keep it brief.)

A diagram of overlapping disciplines

Data Science and Text Analytics

If your work crosses paths with “business intelligence,” there’s a good chance you already know these terms:

  • Data science – a discipline that combines subject matter expertise with math, statistics, and computer science to search for insight hiding in large amounts of information
  • Text analytics – a subfield of data science focused on finding insights in text data (Also called: text analysis, text mining)

Text data is anything made of words: news articles, scholarly journals, social media posts, call transcripts, emails, books… you get the idea.

And in recent years, data scientists undertaking text analytics have begun to use increasingly sophisticated statistical techniques that fall under the umbrella of “A.I.”

A.I., Machine Learning, and Deep Learning

Before we talk about how A.I. works, let’s define some category labels:

Artificial Intelligence (A.I.) – Despite (or because of) its wide use, this term has no universally agreed definition! As a result, many believe “A.I.” is more marketing-speak than a meaningful technical classification. But this definition from Elements of AI can be helpful: “a sub-field of computer science devoted to creating systems that simulate human mental capabilities of autonomous decision-making and adaptivity.”

The category “A.I.” encompasses both the possible and impossible:

  • Artificial General Intelligence – This is the “A.I.” we know from science fiction: Machines that think and act just like people, capable of “generalizing” across diverse domains of knowledge. While AGI makes for compelling fiction, that’s all it is: light-years from reality.
  • Artificial Narrow Intelligence – This is the A.I. in use today: systems that replicate one specific task of human intelligence (e.g. the ability to answer, “does this picture have a cat in it?”). Besides computer vision, it includes fields like natural language processing, robotics, Machine Learning, and more.

When you see the term “A.I.” used in a business or real-world context, know that it’s being used as shorthand for “narrow A.I.” — sometimes also called “weak A.I.” 

Machine Learning – a subfield of (narrow) A.I., Machine Learning involves building systems that ingest information (data) to improve performance on a given task (rather than being programmed explicitly).

Deep Learning – The cutting edge of Machine Learning, Deep Learning leverages recent advances in computing power and data accessibility to create ML systems that are exceptionally complex, powerful, and accurate.

☝ to summarize: Artificial Intelligence (A.I.) is any attempt to use computer science to replicate human mental capabilities; Machine Learning describes narrow, achievable A.I. that improves the more data it receives; and Deep Learning is Machine Learning’s cutting edge.

Machine Learning: Key Concepts

  • training data – Information that is fed into a Machine Learning system as input. Training data can either be “labeled” or “unlabeled.”
  • labels – metadata that accompanies each item in a data set, which can be used to guide the Machine Learning process. (Whether or not your training data comes with labels will inform what Machine Learning approach you take.)
An example of a data set — with a labels metadata column included.
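
To make that concrete, here's a minimal sketch in Python of what a tiny labeled text data set could look like. (The field names are illustrative, not a required schema.)

```python
# A tiny labeled text data set: each item pairs raw text with a label.
training_data = [
    {"text": "My order arrived two weeks late.",      "label": "shipping"},
    {"text": "I was charged twice for one purchase.", "label": "billing"},
    {"text": "The checkout page keeps crashing.",     "label": "technical"},
]

# Strip out the "label" metadata and you have *unlabeled* training data.
unlabeled_data = [item["text"] for item in training_data]
```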

  • algorithm – a finite sequence of instructions, typically used to solve a specific class of problems or to perform a computation.

  • model – When an algorithm has been “taught” via training data, the output is a model. Trained models can then be “deployed” to classify new data (provided it is reasonably similar to the training data).

☝ in short: training data is fed into a base algorithm, which is trained until a satisfactory model is produced.
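
Here's a rough sketch of that pipeline using scikit-learn, a popular open-source ML library. (This is a generic illustration, not a description of Pienso's internals.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Training data: example texts paired with labels.
texts  = ["My order arrived late", "I was charged twice", "The app keeps crashing"]
labels = ["shipping", "billing", "technical"]

# 2. Feed the training data into a base algorithm
#    (here, logistic regression over TF-IDF word features)...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# 3. ...and the output is a trained model that can classify new, similar data.
print(model.predict(["Why was I billed twice this month?"]))  # likely ['billing']
```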

How Training Works

There are different techniques for training ML models, but most fit into these categories:

supervised learning – Each piece of training data is explicitly labeled, so the algorithm knows what labels it should apply to future data.

  • Ex: If I were training a machine vision model to tell me whether a picture contains a cat or not, I would give it a pile of thousands of images: some labeled CAT = TRUE, the rest labeled CAT = FALSE. With enough accurately labeled data, I should be able to train a competent cat-or-not classifier model.
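
Here is that cat-or-not idea as a toy scikit-learn sketch, with made-up numeric features standing in for real image pixels (purely illustrative):

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up stand-ins for image features: [has_whiskers, has_pointy_ears, leg_count]
X = [[1, 1, 4], [1, 1, 4], [0, 0, 2], [0, 1, 4], [0, 0, 2]]
y = [True, True, False, False, False]   # the CAT = TRUE / CAT = FALSE labels

# With enough accurately labeled examples, this becomes a cat-or-not classifier.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 1, 4]]))  # [ True]
```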

unsupervised learning – Training data is not labeled, so the algorithm detects clusters of similar data without any human guidance on what it’s looking for.

  • Ex: If I fed a bunch of different animal pictures to an unsupervised machine vision algorithm, it might still end up grouping all the cat pictures together… or not! It might classify all my pictures by color, or whether the animals have fur, or some other commonalities beyond human interpretation. But these unguided, machine-detected categories might still be interesting…
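
In code, unsupervised clustering can be as simple as this scikit-learn sketch; note that we hand the algorithm data but never any labels (the features here are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: invented [fur_length, body_size] measurements for animal photos.
X = np.array([[0.9, 4.0], [1.1, 4.5], [0.1, 0.3], [0.2, 0.4], [1.0, 5.0]])

# Ask for 2 clusters; the algorithm decides for itself what "similar" means.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1 0] -- machine-chosen cluster IDs, not labels
```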

semi-supervised learning – A combination of the two approaches above, where a human supervisor applies labels to a subset of the training data at some point in the training process.

  • Ex: After feeding my animal pictures into my unsupervised algorithm, I look through the clusters it has found: I’ll label this first one “mammals,” the second “birds,” a third “insects,” throw the fourth in the trash, and so on. Then I’ll tell the algorithm to keep these labels in mind and try again.
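
One off-the-shelf way to express this is scikit-learn's LabelSpreading, where -1 marks "no label yet"; here's a minimal sketch with invented features:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Invented features; most points start unlabeled (-1 = "no label yet").
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.1, 0.2],
              [0.0, 0.1], [1.1, 1.1], [0.2, 0.0]])
y = np.array([0, -1, 1, -1, -1, -1])  # only two human-provided labels

# The algorithm propagates those two labels to nearby unlabeled points.
model = LabelSpreading().fit(X, y)
print(model.transduction_)  # e.g. [0 0 1 1 0 1]
```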

👉 Pienso takes a semi-supervised approach to Machine Learning, where you can start with unlabeled training data, then respond to machine-detected clusters in that data, and eventually produce a powerful Deep Learning model capable of accurately classifying new text data.

Wrap-Up

In this article, we’ve discussed: 

  • Artificial Intelligence (A.I.) is any attempt to use computer science to replicate human mental capabilities. This term encompasses both Artificial General Intelligence (machines that can think just like people; still purely science fiction) and Artificial Narrow Intelligence (purpose-specific systems; AI in the real world; a.k.a. “weak A.I.”).
  • Machine Learning is a subfield of A.I. that involves building systems that leverage information (data) to inform and improve performance on a given task — rather than being programmed explicitly.
  • Machine Learning is conducted by feeding training data into an algorithm, which is trained until a satisfactory model is produced.
  • Training data can either be labeled or unlabeled, which informs what training approaches are available.
  • Deep Learning is Machine Learning’s cutting edge, driven by advances in computing power and data accessibility. (Pienso uses Deep Learning.)
  • Pienso uses a Machine Learning approach called semi-supervised learning, where you can start with unlabeled training data, then (use your human expertise to) respond to machine-detected clusters in that data, and eventually produce a powerful Deep Learning model capable of accurately classifying new text data.

In the next installment of The Primer, we’ll cover: Large Language Models, Foundation Models, fine-tuning, and Generative AI vs. Analytical AI applications.
