Artificial Intelligence -- what is this thing about which we are hearing so much?
Artificial intelligence, per Merriam-Webster, is "a branch of computer science dealing with the simulation of intelligent behavior in computers." (see footnote 1, below)
IBM defines it as "a field [of study] which combines computer science and robust datasets, to enable problem-solving." (see footnote 2, below)
Stanford University's Human-Centered Artificial Intelligence Center states that AI is "the science and engineering of making intelligent machines ... machines that can learn, at least somewhat like human beings do." (see footnote 3, below)
"Artificial Intelligence" image created by Mike Mac Marketing, found on Wikimedia Commons with a CC-BY 2.0 License
Other definitions you should know, too:
Machine Learning is artificial intelligence carried out by "computer programs that learn from examples and from experience." (see footnote 4, below)
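To make "learning from examples" concrete, here is a tiny, illustrative Python sketch; the fruit measurements and the simple averaging approach are invented for illustration only and are not how any particular AI product works:

    # A minimal sketch of "learning from examples": a toy classifier that
    # labels a fruit as "apple" or "lemon" from its weight (grams) and a
    # color score (0 = yellow, 1 = red). All numbers are made up.

    training_examples = [
        ((170, 0.9), "apple"),   # (weight, color) -> known label
        ((150, 0.8), "apple"),
        ((60,  0.1), "lemon"),
        ((70,  0.2), "lemon"),
    ]

    def learn_centroids(examples):
        """'Learning' here just means averaging the examples for each label."""
        sums, counts = {}, {}
        for (weight, color), label in examples:
            w_sum, c_sum = sums.get(label, (0.0, 0.0))
            sums[label] = (w_sum + weight, c_sum + color)
            counts[label] = counts.get(label, 0) + 1
        return {label: (w / counts[label], c / counts[label])
                for label, (w, c) in sums.items()}

    def predict(centroids, weight, color):
        """Pick the label whose average example is closest to the new item."""
        return min(centroids,
                   key=lambda label: (centroids[label][0] - weight) ** 2
                                   + (centroids[label][1] - color) ** 2)

    centroids = learn_centroids(training_examples)
    print(predict(centroids, 160, 0.85))  # -> "apple"
    print(predict(centroids, 65, 0.15))   # -> "lemon"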
Narrow AI is different from General AI because it "is created to solve one given problem." (see footnote 6, below) This kind of AI is sometimes labeled "Weak AI."
General AI, or Artificial General Intelligence (AGI), is "a more advanced form of artificial intelligence that can learn and adapt to its environment." (see footnote 7, below) Many science-fiction AIs are considered AGI, such as the character Sonny in I, Robot; the character J.A.R.V.I.S. in Iron Man; or the character Cyberdyne Systems Model 101 in The Terminator. (see footnote 8, below)
Natural Language Processing, or NLP, is a branch of AI allowing computers to understand text and speech in the same way humans do. Baumann & Schuler (2023) specify that "natural language processing tasks range from text searches (such as web searches) to interaction with spoken language (such as with Siri, Alexa, or similar voice-controlled agents)." (see footnote 9, below)
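As a small, illustrative sketch of one NLP step, the Python below breaks a sentence into word tokens and matches it against a couple of made-up "intents," loosely in the spirit of a voice-controlled agent; real systems such as Siri, Alexa, or web search are far more sophisticated:

    # A minimal sketch of tokenizing a sentence and guessing its "intent."
    # The intent keywords are invented examples for illustration only.
    import re

    INTENTS = {
        "weather": {"weather", "rain", "forecast", "temperature"},
        "music":   {"play", "song", "music"},
    }

    def tokenize(text):
        """Lowercase the text and split it into word tokens."""
        return re.findall(r"[a-z']+", text.lower())

    def guess_intent(text):
        """Return the intent whose keyword set overlaps the tokens most."""
        tokens = set(tokenize(text))
        best = max(INTENTS, key=lambda name: len(INTENTS[name] & tokens))
        return best if INTENTS[best] & tokens else "unknown"

    print(tokenize("What's the forecast for tomorrow?"))
    print(guess_intent("What's the forecast for tomorrow?"))  # -> "weather"
    print(guess_intent("Play my favorite song"))              # -> "music"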
Large Language Models (LLMs) are the technology underlying generative AI. Tech writer Bob Sharp's article describes what these models are, how they are created, and how they work, and it lists some of their limitations, too. An article by Wired's David Nield also discusses large language models and how they work.
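The core idea, predicting the next word from probabilities learned from training text, can be sketched in a few lines of Python; the toy "bigram" model below is only an illustration, since real LLMs use neural networks trained on vastly more data, and the tiny corpus here is invented:

    # A toy sketch of "predict the next word from what was seen in training."
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug ."
    words = corpus.split()

    # Count which word follows which (a "bigram" model).
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the most likely next word seen after `word` in training."""
        if word not in following:
            return None
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # e.g. "cat" (first of the equally frequent followers)
    print(predict_next("sat"))  # -> "on"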
Reinforcement Learning from Human Feedback (RLHF) is the process in which human beings, in the roles of supervisors and end users, comment on AI-generated text and images to help the model's output become more accurate.
Neural Networks mimic the work of the human brain's network of neurons. Neural networks in AI "learn to perform better by consuming input, passing it up through the ranks of neurons, and then comparing its final output against known results, which are then fed backwards through the system to alter how the nodes perform their computations." (see footnote 10, below)
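Here is a small, illustrative Python sketch of that cycle, using a single artificial neuron rather than a full network (real neural networks stack many such neurons in layers); the task, learning the logical OR function, and all of the numbers are invented for illustration:

    # Consume input, produce an output, compare it with the known answer,
    # and feed the error back to adjust the weights.
    import math

    # Training examples with known answers: OR of the two inputs.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    w1, w2, bias = 0.0, 0.0, 0.0     # the neuron's adjustable weights
    learning_rate = 0.5

    def neuron(x1, x2):
        """Forward pass: weighted sum squashed to a value between 0 and 1."""
        return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

    for _ in range(2000):
        for (x1, x2), target in examples:
            output = neuron(x1, x2)
            error = output - target           # compare against the known result
            # Feed the error backward: nudge each weight to reduce the error.
            w1 -= learning_rate * error * x1
            w2 -= learning_rate * error * x2
            bias -= learning_rate * error

    for (x1, x2), target in examples:
        # After training, predictions should be close to the expected answers.
        print((x1, x2), round(neuron(x1, x2), 2), "expected", target)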
Deep Learning is another sub-field of machine learning that uses neural networks to complete its tasks by "processing multiple layers of programmed algorithms ... and then learning complicated concepts ... through experience." (see footnote 5, below) Google's search-engine algorithm uses neural-network processing.
A Data Corpus is the collection "of data on which it (generative AI) is trained, then [it] respond[s] to prompts with something that falls within the realm of probability as determined by that corpus." (see footnote 11, below) This corpus is usually large and contains many kinds of files; for a corpus to be the best possible, it should be high-quality, vast, clean, and free of biases. (see footnote 12, below)
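As a tiny, invented illustration of why "clean" matters, the Python sketch below assembles a miniature corpus and drops empty and duplicate entries; a real training corpus would be vastly larger and its quality checks far more involved:

    # A minimal sketch of lightly "cleaning" a tiny text corpus.
    # The documents are invented; the typos are deliberate examples of
    # problems a more thorough cleaning pass would catch.
    import re

    raw_documents = [
        "The library opens at 9 AM.   ",
        "THE LIBRARY OPENS AT 9 AM.",        # duplicate (different case/spacing)
        "Vist the refrence desk for help.",  # typos left in on purpose
        "",                                  # empty entry to drop
    ]

    def clean(text):
        """Normalize whitespace and case so duplicates can be spotted."""
        return re.sub(r"\s+", " ", text).strip().lower()

    corpus = []
    seen = set()
    for doc in raw_documents:
        doc = clean(doc)
        if doc and doc not in seen:          # drop empty and duplicate entries
            seen.add(doc)
            corpus.append(doc)

    print(len(corpus), "documents kept:", corpus)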
A Data Lake is a storage repository that holds large amounts of raw data in its native formats until it is needed for analytical applications. In this kind of repository, the raw data sits in a flat architecture (in files or object storage), which allows more flexibility in how the data is managed, used, and stored.
A Data Warehouse is also a storage repository, but unlike a data lake it holds data that has already been cleaned and structured for analysis: the information is organized into hierarchical dimensions and tables rather than being left in its raw, native formats.
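The contrast can be sketched in a few lines of Python; the file names, records, and table schema below are invented for illustration, with raw JSON files standing in for a data lake and a small SQLite table standing in for a data warehouse:

    # A "data lake" keeps raw records in their native format in flat storage;
    # a "data warehouse" loads cleaned, selected fields into structured tables.
    import json
    import sqlite3
    from pathlib import Path

    raw_event = {"user": "pat", "action": "search", "query": "neural networks"}

    # Data-lake style: dump the raw record, as-is, into a flat folder of files.
    lake = Path("data_lake")
    lake.mkdir(exist_ok=True)
    (lake / "event_0001.json").write_text(json.dumps(raw_event))

    # Data-warehouse style: load selected, structured fields into a table.
    conn = sqlite3.connect("warehouse.db")
    conn.execute("CREATE TABLE IF NOT EXISTS searches (user TEXT, query TEXT)")
    conn.execute("INSERT INTO searches VALUES (?, ?)",
                 (raw_event["user"], raw_event["query"]))
    conn.commit()

    print(sorted(p.name for p in lake.iterdir()))                        # flat, raw files
    print(conn.execute("SELECT user, query FROM searches").fetchall())   # structured rows
    conn.close()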
The Black Box Problem is the situation that arises when an AI has done so much work on its own that its developers no longer understand how it is making its decisions and predictions. Heller (2021) says, "For any AI decision that has an impact -- not only a life-and-death impact, but also a financial impact or a regulatory impact -- it is important to be able to clarify what factors went into the model's decision." (see footnote 13, below)
AI Hallucinations occur when an AI model "generates output that deviates from what would be considered normal or expected based on the training data it has seen." (see footnote 14, below)
What are some examples of Artificial Intelligence?
Maps and navigation aids
Facial detection and recognition programs
Text / Autocorrect editors
Search-and-Recommend algorithms
Chatbots
Social media
Graphic chatbot image created by Mohamed Hassan, on Pixabay, with CC0 Public Domain
Footnotes:
1. Merriam-Webster online dictionary, accessed 03/09/2023
2. IBM.com, accessed 03/09/2023
3. Manning, 2020
4. World Almanac Education Group, Inc., 2020, para. 12-13
5. World Almanac Education Group, Inc., 2020, para. 13
6. Athena Information Solutions, 2023, para. 2
7. Athena Information Solutions, 2023, para. 3
8. Crane, 2023, p. 2
9. Baumann & Schuler, 2023, para. 1
10. Tyson, 2023, para. 2-4
11. Fruhlinger, 2023, para. 5
12. Crane, 2023, p. 3
13. Heller, 2021, para. 3
14. Petkauskas, 2023, para. 11