Using AI as a Learning Tool
Expectations, boundaries, and examples
Purpose of this policy
Tools that use artificial intelligence are now widely available, and many of you already use them when learning technical material. This course allows the use of AI tools as a learning aid, similar to a tutor or reference source. At the same time, this course requires that submitted work reflect your own understanding and decision-making.
This page explains how AI tools may be used productively in this course, and where their use becomes inappropriate.
AI as a learning tool
Using AI as a learning tool means using it to clarify ideas, terminology, or examples that you do not yet understand.
- Ideas: An idea is a concept or principle discussed in the course materials, such as how a process works, why a particular approach is used, or what a tool is designed to do. Clarifying an idea means using AI to help you understand the purpose or reasoning behind something you have already encountered in class, rather than asking for a final answer.
- Terminology: Terminology refers to the specific words and phrases used in technical subjects. These terms often have precise meanings that differ from everyday usage. Using AI appropriately includes asking for explanations of unfamiliar terms so you can understand how they are used in context, especially when reading textbooks, documentation, or assignment instructions.
- Examples: Examples are demonstrations that show how an idea or concept is applied in practice. Using AI to clarify examples means asking for help understanding why an example works, what each part is doing, or how it relates to the underlying concept. It does not mean copying an example and submitting it as your own work.
In all cases, using AI as a learning tool should help you build understanding, not replace the effort of reading, experimenting, or reasoning through a problem yourself.
AI as a crutch
AI tools can support learning, but they can also be used in ways that reduce learning. This policy uses the term “crutch” to describe situations where the tool is doing the thinking and decision-making that the student is expected to practice.
Using AI as a crutch means relying on it to produce work or reasoning that you do not understand well enough to explain and defend. The issue is not that the tool was used; the issue is that the tool replaced the learning process.
Inappropriate uses include:
- Asking AI to complete an assignment or lab task for you. Example: "Write the solution for this assignment."
- Submitting AI-generated code, answers, or explanations that you cannot explain in your own words. Example: Copying an explanation of why a solution works without being able to describe the steps yourself.
- Treating AI output as authoritative without verification. Example: Accepting an answer that conflicts with course materials, documentation, or results you can reproduce.
- Using AI to bypass the learning objective. Example: Asking for the final answer when the point of the activity is to practice reasoning, debugging, designing, or interpreting results.
- Using AI to write or script reflections that are meant to describe your own understanding. Example: Asking AI to generate a script or talking points for a reflection video instead of explaining the ideas in your own words.
In these cases, AI is being used to avoid the intended learning work rather than to support it.
Why this matters
AI can produce output that appears correct even when it contains errors or is based on incorrect assumptions. Because of that, it is important to treat AI output as a starting point for learning, not as an authority.
This course emphasizes:
- verifying results
- understanding workflows
- explaining decisions clearly
If you rely on AI answers without testing them or understanding them, you are likely to miss errors and build misunderstandings. Those gaps tend to surface later in the course when tasks require independent problem solving, accurate interpretation, and clear explanations in graded work.
Expectations for submitted work
All submitted work must reflect your own understanding.
This means:
- You should be able to explain any code you submit.
- You should be able to describe why you made particular choices (model, metric, preprocessing).
- In reflections, your explanations should sound like you, not like a generic answer generator.
You are not required to disclose routine AI use for clarification or studying. However, if your work suggests that you do not understand what you submitted, that will be treated the same way as any other lack of understanding, regardless of the source.
If you use an AI tool, ask yourself:
- Could I explain this answer to another student without looking it up?
- Did I test or verify what the tool suggested?
- Did the tool help me understand, or did it just give me something to submit?
If the tool helped you understand, you are using it appropriately. If it replaced your thinking, you are not.