Part 1 of 5

What is Artificial Intelligence?

⏱ 35-45 min read ☆ Foundational

Introduction

Artificial Intelligence (AI) has moved from science fiction to boardroom priority in just a few years. As a professional working with or around AI systems, understanding what AI actually is - and isn't - forms the foundation for making informed decisions about its adoption, governance, and risk management.

This part will equip you with the conceptual vocabulary needed to engage meaningfully with technical teams, evaluate AI proposals, and understand the capabilities and limitations of AI systems.

A Brief History of AI

The quest to create intelligent machines stretches back centuries, but the formal field of AI began in the mid-20th century. Understanding this history helps explain why AI has followed a pattern of "boom and bust" cycles, and why the current wave may be different.

1950s - The Birth of AI
Alan Turing proposes the "Turing Test" for machine intelligence. The term "Artificial Intelligence" is coined at the Dartmouth Conference in 1956. Early optimism predicts human-level AI within 20 years.
1960s-1970s - First AI Winter
Initial enthusiasm meets reality. Limitations in computing power and algorithms lead to reduced funding and the first "AI Winter" - a period of diminished interest and investment.
1980s - Expert Systems Boom
Rule-based "expert systems" achieve commercial success. Companies invest heavily in AI, but the systems prove brittle and expensive to maintain, leading to a second AI winter in the late 1980s.
1990s-2000s - Quiet Progress
Machine learning research continues with less hype. Statistical methods gain prominence. IBM's Deep Blue defeats world chess champion Garry Kasparov in 1997.
2010s-Present - The Deep Learning Revolution
Breakthroughs in deep learning, combined with massive data availability and powerful hardware, enable unprecedented AI capabilities. Image recognition surpasses human performance on benchmark tasks. Large language models emerge.

Governance Insight

Understanding AI's cyclical history helps organizations avoid both excessive hype and unwarranted skepticism. Past AI winters were triggered in large part by overpromising and underdelivering - a lesson that remains relevant when evaluating vendor claims today.

Narrow AI vs. General AI

One of the most important distinctions in AI is between systems that exist today and theoretical future systems. This distinction is crucial for setting realistic expectations and appropriate governance frameworks.

Narrow AI (ANI)

Also called "Weak AI" or "Applied AI"

  • Designed for specific tasks
  • Excels within defined boundaries
  • Cannot generalize to new domains
  • All current AI systems
  • Examples: ChatGPT, image recognition, recommendation engines

General AI (AGI)

Also called "Strong AI" or "Human-Level AI"

  • Hypothetical systems with human-like reasoning
  • Could transfer knowledge across domains
  • Would possess common sense
  • Does not yet exist
  • Timeline estimates vary widely

Key Point

Every AI system you will encounter in your professional work is narrow AI. Even the most impressive systems - including large language models that seem to "understand" language - are sophisticated narrow AI systems optimized for specific tasks. They lack true understanding, common sense, or the ability to genuinely reason outside their training.

Symbolic AI vs. Connectionist AI

Throughout AI's history, two major approaches have competed for dominance. Understanding these paradigms helps explain both the capabilities and limitations of modern AI systems.

Symbolic AI (Classical AI)

The symbolic approach attempts to encode human knowledge explicitly using rules, logic, and symbols. Think of it as programming a computer with "if-then" rules created by human experts.

Example: A symbolic medical diagnosis system might have a rule: "IF patient has fever AND cough AND fatigue THEN consider influenza." These rules are created by interviewing medical experts and encoding their knowledge.
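The rule-based pattern above can be sketched in a few lines of Python. This is an illustrative toy, not a real diagnostic system: the symptom names, rules, and the `diagnose` function are all invented for this example.

```python
# Minimal sketch of a symbolic (rule-based) system.
# Rules and symptom names are illustrative only.

RULES = [
    # (required symptoms, suggested conclusion)
    ({"fever", "cough", "fatigue"}, "consider influenza"),
    ({"sneezing", "runny nose"}, "consider common cold"),
]

def diagnose(symptoms):
    """Return the conclusion of every rule whose conditions all hold.

    Because each firing rule can be traced back to its conditions,
    the reasoning is fully explainable and auditable.
    """
    symptoms = set(symptoms)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]  # subset test: all conditions present

print(diagnose(["fever", "cough", "fatigue"]))  # ['consider influenza']
print(diagnose(["fever", "cough"]))  # [] - brittle: no rule fires on a near-miss
```

Note how the second call returns nothing: a patient with fever and cough but no reported fatigue matches no rule, illustrating the brittleness on edge cases described below.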

Strengths

  • Explainable - you can trace reasoning
  • Predictable behavior
  • Works well with limited data
  • Easy to audit and verify

Limitations

  • Expensive to create and maintain
  • Brittle - fails on edge cases
  • Cannot learn from experience
  • Struggles with ambiguity

Connectionist AI (Neural Networks)

The connectionist approach is inspired by the brain's neural structure. Instead of explicit rules, these systems learn patterns from large amounts of data. This includes all modern "deep learning" systems.

Example: A neural network for medical diagnosis would be trained on millions of patient records, learning patterns that correlate symptoms with diagnoses without being explicitly programmed with rules.
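The contrast with the rule-based approach can be seen in a toy version of the same task: a single artificial neuron that learns a symptom-to-diagnosis pattern from labelled examples, with no rules written by hand. The data below is synthetic and purely illustrative, and a real system would use millions of records and far larger networks.

```python
import math
import random

# Toy connectionist sketch: one neuron learns from labelled examples
# instead of explicit rules. Data is synthetic and illustrative only.
# Each example: ([fever, cough, fatigue], has_flu)
data = [
    ([1, 1, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0),
    ([0, 0, 1], 0), ([1, 0, 0], 0), ([0, 0, 0], 0),
]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
lr = 0.5  # learning rate

def predict(x):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

# Train with simple gradient descent: nudge weights toward the labels.
for _ in range(2000):
    for x, target in data:
        p = predict(x)
        grad = (target - p) * p * (1 - p)  # gradient of squared error
        for i in range(3):
            weights[i] += lr * grad * x[i]
        bias += lr * grad

print(predict([1, 1, 1]))  # high probability: pattern matches the positives
print(predict([0, 0, 1]))  # low probability
```

After training, the learned weights encode roughly "fever and cough together suggest flu" - but that knowledge lives in opaque numbers rather than readable rules, which is the "black box" limitation noted below.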

Strengths

  • Learns patterns from data
  • Handles ambiguity well
  • Can discover unexpected patterns
  • Improves with more data

Limitations

  • Requires massive data
  • Often a "black box"
  • Can learn biases in data
  • Unpredictable failures

Modern Reality

Today's most capable AI systems are primarily connectionist (neural networks), but hybrid approaches combining both paradigms are gaining interest. For governance purposes, the "black box" nature of neural networks creates unique challenges for explainability, auditability, and accountability.

Defining AI: A Practical Framework

Rather than debating philosophical definitions, professionals benefit from a practical understanding. AI refers to computer systems that perform tasks typically requiring human intelligence, including:

  • Perception: Interpreting images, speech, text, and other sensory inputs
  • Reasoning: Drawing conclusions from available information
  • Learning: Improving performance through experience
  • Planning: Creating sequences of actions to achieve goals
  • Natural Language: Understanding and generating human language

The AI Effect

There's a phenomenon called the "AI Effect" where once a task is mastered by machines, it stops being considered "real" AI. Spell checkers, search engines, and chess programs were once cutting-edge AI. This moving goalpost helps explain why AI sometimes seems both overhyped and underappreciated simultaneously.

Key Takeaways

  • AI has experienced multiple cycles of hype and disappointment - maintain realistic expectations
  • All current AI is "narrow AI" - powerful within specific domains but lacking general intelligence
  • Symbolic AI is explainable but brittle; connectionist AI is powerful but opaque
  • Modern AI primarily uses neural networks (connectionist approach), creating governance challenges around explainability
  • The definition of AI keeps expanding as capabilities that once seemed intelligent become routine
  • Understanding these fundamentals enables more productive conversations with technical teams