
Artificial Intelligence in the Classroom

This Research Guide discusses artificial intelligence, and specifically chatbots: what they are, how they are viewed in the higher-education classroom, and the academic-integrity issues arising from student use of AI tools.

What is ChatGPT?

ChatGPT is a specific chatbot built as an online assistant that can talk back and forth with a user. Its creator, OpenAI, described ChatGPT in 2022 by saying, "We have trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." (see footnote 1, below)

One important thing to note when trying to understand ChatGPT is that it does not understand what it is composing. Instead, the chatbot predicts the most likely next word given the words that came before, then the most likely word after that, and so on.

Warner (2022) describes the inner workings of ChatGPT in layman's terms, saying:

"It’s important to understand what ChatGPT is, as well as what it can do. ChatGPT is a Large Language Model (LLM) that is trained on a set of data to respond to questions in natural language. The algorithm does not “know” anything. All it can do is assemble patterns according to other patterns it has seen when prompted by a request. It is not programmed with the rules of grammar. It does not sort, or evaluate the content. It does not “read”; it does not write ... You give it a prompt and it responds with a bunch of words that may or may not be responsive and accurate to the prompt, but which will be written in fluent English syntax." (see footnote 2, below)

 

What are the limitations of ChatGPT?

OpenAI has been upfront about the limitations of the initial release of ChatGPT:

"The model can create sentences that are plausible, but don't make sense.  

The tool can answer one way based on the construction of the query, and then when that query is tweaked, it may give a different answer. 

The model tends to be verbose.

The model does not ask clarifying questions but instead makes a guess at what the questioner is asking for.

The developers have also included an important intentional limitation: 'While we have made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using our Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now'." (see footnote 3, below)
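
For readers who want to see the query-sensitivity point for themselves, the sketch below sends the same question to the model worded two different ways and prints both replies, which will often differ in detail or emphasis. It is a rough illustration only: it assumes the openai Python package (version 1 or later), an API key available in the OPENAI_API_KEY environment variable, and placeholder choices of model name and prompts.

```python
# Illustrative sketch: ask the same question two ways and compare the replies.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.

from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

prompts = [
    "Summarize the causes of the French Revolution in one sentence.",
    "In a single sentence, why did the French Revolution happen?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name for this example
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", response.choices[0].message.content, "\n")
```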

 

Footnotes:

1. OpenAI, 2022, subtitle
2. Warner, 2022, para. 10
3. OpenAI, 2022, paras. 14-18