Guidance for the Responsible Use of Artificial Intelligence (AI) in Adams 12 Five Star Schools
Adams 12 Five Star Schools recognizes that Artificial Intelligence (AI) can support student achievement and learning when used responsibly.
This guidance outlines our expectations for student and staff use of AI, including what work can and cannot be aided by this technology and what constitutes respectful behavior. As we learn more about AI, and as the technology evolves, we will make necessary adjustments to this guidance.
The following information guides our students, staff, and school communities on the appropriate, responsible, and safe use of AI, particularly generative AI tools, in classroom instruction, school management, and systemwide operations. Generative AI has potential benefits for education and risks that must be thoughtfully managed.
- Terminology
- Scope
- Guiding Principles for AI Use
- Responsible Use of AI Tools
- Prohibited Use of AI Tools
- Special Consideration: Advancing Academic Integrity
- Special Consideration: Security, Privacy, and Safety
- Review
- Version History
Terminology
- Artificial Intelligence (AI) is computer programming that learns and adapts, with systems taught to mimic intelligent human behaviors.
- Machine learning (ML) is a subset of AI. It is the technique that allows machines to learn autonomously from data. Think of it as teaching a computer to recognize patterns and make predictions based on examples it has seen.
- Large language models (LLMs) are machine learning models that can understand, predict, and generate human language.
- Generative AI (gen AI) refers to the use of AI to create new content, such as text, images, music, audio, code, and video. This includes tools such as Gemini, Copilot, ChatGPT, Midjourney, and DALL-E.
Scope
This guidance applies to all students, teachers, staff, administrators, and third parties who develop, implement, or interact with AI technologies used in our education system. It covers all AI systems used for education, administration, and operations, including, but not limited to, generative AI models, intelligent tutoring systems, conversational agents, automation software, and analytics tools. This guidance complements existing District Policies and practices on technology use, data protection, academic integrity, and student support, including but not limited to:
- District Policy 5030: Student Use of Cell Phones and Other Personal Electronic Devices
- District Policy 5035: Student Use of Technology and the Internet
- District Policy 5000: Student Code of Conduct
- District Policy 4185: Staff Use of Technology
Guiding Principles for AI Use
The following principles guide the appropriate, responsible, and safe use of AI and address current and future educational goals, teacher and student agency, academic integrity, and security. We commit to adopting internal procedures to operationalize each principle.
- Vision for Teaching & Learning: We use AI to help all of our students achieve their educational goals and to help us reach our community’s goals, including improving student learning, teacher effectiveness, and school operations. We aim to make AI resources universally accessible, focusing especially on bridging the digital divide among students and staff. We are committed to evaluating AI tools for biases and ethical concerns, ensuring they effectively serve our diverse educational community.
- Digital Safety & Responsible Use: We reaffirm adherence to existing policies, practices, regulations, and expectations. AI is one of many technologies used in our schools, and its use will align with existing regulations to protect student privacy and guard against harmful content. We will not share personally identifiable information with consumer-based AI systems. Honesty, trust, fairness, respect, and responsibility continue to be expectations for both students and teachers. Students should be truthful in giving credit to sources and tools and honest in presenting work that is genuinely their own for evaluation and feedback.
- AI Literacy for Staff & Students: We educate our staff and students about AI. Promoting AI literacy among students and staff is central to addressing the risks of AI use and teaches critical skills for students’ futures. Students and staff will be given support to develop their AI literacy: how to use AI, when to use it, and how it works, including foundational concepts from computer science and other disciplines. We will support teachers in adapting instruction in a context where some or all students have access to generative AI tools.
- Educational Equity & Opportunity: We explore the opportunities of AI and address the risks. In continuing to guide our community, we will work to realize the benefits of AI in education, address the risks associated with its use, and evaluate if and when to use AI tools. We will pay special attention to misinformation and bias and ensure accessibility for all students, so that every student receives what they need to develop their full academic and social potential.
- Accountability of Application: We maintain student and teacher agency when using AI tools. AI tools can provide recommendations or enhance decision-making, but staff and students will serve as “critical consumers” of AI and lead any organizational and academic decisions and changes. People remain responsible and accountable for any pedagogical or operational process that AI systems may inform.
- Continuous Improvement: We commit to auditing, monitoring, and evaluating our use of AI. Understanding that AI and related technologies are evolving rapidly, we commit to frequent and regular reviews and updates of our policies, procedures, and practices. We will thoroughly evaluate existing and future technologies and address any gaps in compliance that might arise.
Responsible Use of AI Tools
Adams 12 Five Star Schools recognizes that responsible uses of AI will vary depending on the context, such as a classroom activity or assignment. It’s important for teachers to be clear on if, when, and how AI tools will be used. Tools selected must be approved for use to ensure compliance with applicable laws and regulations regarding data security and privacy. Appropriate AI use should be guided by the specific parameters and objectives defined for an activity.
With this in mind, we encourage a “first draft thinking” approach that fosters a safe environment for students and educators to explore and experiment with AI tools, while emphasizing the protection of personally identifiable information and guarding against bias. This approach provides opportunities for innovative learning experiences that align with educational goals and adhere to legal and ethical standards. Below are some examples of responsible uses that serve educational goals.
Always review and critically assess outputs from AI tools before submission or dissemination. Staff and students should never rely solely on AI-generated content without review.
Student Learning
- Aiding Creativity: Students can harness generative AI to spark creativity across diverse subjects, including writing, visual arts, and music composition.
- Collaboration: Generative AI tools can partner with students in group projects by contributing concepts, supplying research support, and identifying relationships between varied information.
- Communication: AI can offer students real-time translation, personalized language exercises, and interactive dialogue simulations.
- Content Creation and Enhancement: AI can help generate personalized study materials, summaries, quizzes, and visual aids, help students organize thoughts and content, and help review content.
- Tutoring: AI technologies have the potential to democratize one-to-one tutoring and support, making personalized learning more accessible to a broader range of students. AI-powered virtual teaching assistants may provide non-stop support, answer questions, help with homework, and supplement classroom instruction.
Teacher Support
- Standards Analysis and Clarity of Grade-Level Instruction: AI can support educators by unpacking standards and providing examples of learning trajectories and learning intentions. These materials can be used as a third point of reference for collaborative planning and data-informed instructional decision making. AI can also generate lesson plan shells that teachers can use as the starting point for rich lessons aligned to grade-level learning goals.
- Assessment Design and Analysis: In addition to enhancing assessment design by creating questions and providing standardized feedback on common mistakes, AI can create diagnostic assessments to identify gaps in knowledge or skills and enable rich performance assessments in curricular areas that lack these supports. AI-created assessments can assist teachers with data-informed decision making and instruction with analysis of data and trends. Teachers are ultimately responsible for evaluation, feedback, and grading, and therefore must critically evaluate all AI-generated feedback and grades for both formative and summative assessments to ensure accuracy, fairness, and alignment with learning objectives.
- Content Development and Enhancement for Differentiation: AI can assist educators by differentiating curricula, suggesting lesson plans, generating diagrams and charts, and customizing independent practice based on student needs and proficiency levels. Teachers should critically evaluate the materials produced by AI to ensure equitable access to grade-level standards for every student.
- Continuous Professional Development: AI can guide educators by recommending teaching and learning strategies based on student needs, personalizing professional development to teachers’ needs and interests, suggesting collaborative projects between subjects or teachers, and offering simulation-based training scenarios.
- Research and Resource Compilation: AI can help educators by recommending books or articles relevant to a lesson and updating teachers on teaching techniques, research, and methods. Teachers are responsible for ensuring resources suggested by AI are high-quality instructional materials, aligned to grade-level standards.
School Management and Operations
- Communications: AI tools can help draft and refine communications within the school community, deploy chatbots for routine inquiries, and provide instant language translation.
- Operational Efficiency: Staff can use AI tools to support school operations and streamline administrative processes, including scheduling courses, automating inventory management, increasing energy savings, and generating performance reports.
Prohibited Use of AI Tools
As we work to realize the benefits of AI in education, we also recognize that risks must be addressed. Below are the prohibited uses of AI tools and the measures we will take to mitigate the associated risks.
Student Learning
- Bullying/harassment: Using AI tools to manipulate media to impersonate others for bullying, harassment, or any form of intimidation is strictly prohibited. All users are expected to employ these tools solely for educational purposes, upholding values of respect, inclusivity, and academic integrity at all times.
- Overreliance: Dependence on AI tools can decrease human discretion and oversight, allowing important nuances and context to be overlooked or uncritically accepted. Teachers will clarify if, when, and how AI tools should be used in their classrooms, and teachers and students are expected to review outputs generated by AI before use.
- Plagiarism and cheating: Students should not copy from any source, including generative AI, without prior teacher approval and adequate documentation, and should not submit AI-generated work as their own original work. Students will be taught how to properly cite or acknowledge the use of AI where applicable. Teachers will be clear about when and how AI tools may be used to complete assignments and will restructure assignments to reduce opportunities for plagiarism by requiring personal context, original arguments, or original data collection. Existing procedures related to potential violations of our Academic Integrity Policy will continue to be applied.
- Unequal access: If an assignment permits the use of AI tools, those tools will be made available to all students, recognizing that some may already have access to such resources outside of school.
Teacher Support
- Discrimination and Societal Bias: AI systems will not be used in any way that discriminates against students based on race, ethnicity, gender, religion, disability, or any other protected characteristic. AI tools trained on human data will inherently reflect societal biases in that data. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely flagging the work of multilingual learners as plagiarism. It’s important for staff and students to understand the origin and implications of societal bias in AI. Educators must be accountable and responsible for the thoughtful application of AI technology, and humans will review all AI-generated outputs before use, including but not limited to grades, disciplinary actions, and IEPs.
- Diminishing student and teacher agency and accountability: While generative AI offers useful assistance to amplify teachers' capabilities and reduce teacher workload, these technologies will not be used to supplant the role of human educators in instructing and nurturing students. We want to avoid scenarios wherein “teachers use AI to design assignments, students prompt AI to complete the assignment, and teachers prompt AI to grade it… This could be the worst case scenario for AI use in schools.” The core practices of teaching, mentoring, assessing, and inspiring learners will remain the teacher's responsibility in the classroom. AI is a tool to augment human judgment, not replace it. Teachers and staff must review and critically reflect on all AI-generated content before use, thereby keeping “humans in the loop.”
- Privacy concerns: AI tools will not be used to monitor classrooms for accountability purposes, such as analyzing teacher-student interactions or tracking teacher movements, which can infringe on students’ and teachers' privacy rights and create a surveillance culture.
- Copyright infringement: Use caution when uploading published work (which can include students’ work) into AI tools, as this can be considered a form of copyright infringement.
School Management and Operations
- Compromising Privacy: The education system will not use AI in ways that compromise teacher or student privacy or lead to unauthorized data collection, as this violates privacy laws and our system’s ethical principles. See the Security, Privacy, and Safety section below for more information.
- Noncompliance with Existing Policies: We will evaluate AI tools for compliance with all relevant policies and regulations, such as privacy laws and ethical principles. AI tools will be required to detail if and how personal information is used, to ensure that personal data remains confidential and is not misused.
Special Consideration: Advancing Academic Integrity
While it is necessary to address plagiarism and other risks to academic integrity, we will use AI to advance the fundamental values of academic integrity: honesty, trust, fairness, respect, and responsibility.
- Staff and students can use AI tools to quickly cross-reference information and claims for authenticity and veracity, though they must still be critical of the output.
- Advanced AI tools can increase fairness by identifying and minimizing teacher biases in grading and assessments.
- AI can adapt materials for students with different learning needs, supporting educators with responding to individual differences.
An AI Acceptable Use Scale is an important part of an adoption plan: it helps build common understanding, clear expectations, and shared language around student use of AI, and supports fair and equitable handling of suspected plagiarism or cheating with AI in the K-12 setting. Below is an example of an AI acceptable use scale (original source).
Please note, as of 7.25.24, MagicSchool is the only stand-alone LLM approved for student use. However, numerous approved tools have built-in AI capabilities. Check the Approved Technology List for updates.
Additional Recommendations for Advancing Academic Integrity
- Teachers might allow the limited use of generative AI on specific assignments or parts of assignments and articulate why they do not allow its use in other assignments.
- Teachers should explore AI platforms to understand what these platforms can and can't do. For example, teachers should test planned questions/assignments in an AI platform and evaluate the results.
- Teachers should create milestones and checkpoints at key points in major assignments, such as an annotated bibliography, thesis statement, problem statement, outline, project plan, research notes, or multiple drafts with teacher-provided feedback. Scaffolding assignments in this way helps learners plan and chunk their work while making the use of AI enhancements unnecessary.
- Encourage accountability through peer editing, group assignments, and multiple drafts.
- Incorporate formative assessments with actionable feedback at multiple points. Set expectations that peer-, teacher-, and self-feedback is incorporated into future drafts and final products. Evaluate student work in relation to how well students have incorporated the feedback provided.
- Augment essays, reports, and project assignments with presentations or performance elements. This will require learners to explain their work and answer questions in real time.
- Teachers will not use technologies that purport to detect cheating and plagiarism by identifying the use of generative AI, as their accuracy is questionable. AI detection tools have a high false-positive rate, flagging work as AI-written that is not; for example, they are more likely to incorrectly flag writing by multilingual users as AI-generated.
- If a teacher or student uses an AI system, its use must be disclosed and explained. As part of the disclosure, students may choose to cite their use of an AI system using one of the following resources:
- Consider alternative assessments, including AI-resistant prompts. Suggestions include case study analysis, creation of infographics, discussing an essay in a conference format, in-class written assessments, podcasts, game creation, and more.
- Become familiar with your course and/or school’s academic honesty policy and consider how it may apply to how students use AI to support their learning in your class.
- As an educator, model best practices for using AI, including citing when AI has been used.
For more resources on adjusting teaching and learning to uphold academic integrity:
- Combating Academic Dishonesty from the University of Chicago
- Promoting Academic Integrity in your Course from Cornell University
- Strategies for Teaching Well When Students Have Access to Artificial Intelligence (AI) Generation Tools from George Mason University
Special Consideration: Security, Privacy, and Safety
We will implement reasonable security measures to secure AI technologies against unauthorized access and misuse. All AI systems deployed within the District will be evaluated for compliance with relevant laws and regulations, including those related to data protection, privacy, and students’ online safety.
Staff and students are prohibited from entering confidential or personally identifiable information (PII) into non-district-approved AI tools, such as those without approved Data Privacy Agreements (DPAs). Sharing confidential or personal data with an AI system could violate privacy protections if that sharing is not properly disclosed and consented to.
Examples of PII can include:
- Name
- Birthdate
- Grade
- Email Address
- Health Information
- Address(es)
Educators should avoid including PII about students when interacting with AI. For example, if using AI to generate feedback on a writing assignment, do not include the student's name, school, or grade in the AI prompt.
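To make the example above concrete, the minimal sketch below shows one way a prompt could be scrubbed of obvious PII before it is sent to an approved AI tool. The helper function, student name, and school name are hypothetical and purely illustrative; this is not a district-provided tool and does not replace the requirement to use only approved AI tools with Data Privacy Agreements.

```python
# Illustrative sketch only: replace known PII (a hypothetical student name and
# school) with neutral placeholders before building an AI prompt.
# All names and text below are invented for illustration.

def redact_pii(text: str, pii_terms: list[str]) -> str:
    """Replace each known PII term in the text with a generic placeholder."""
    redacted = text
    for term in pii_terms:
        redacted = redacted.replace(term, "[REDACTED]")
    return redacted

# Hypothetical example: preparing a feedback prompt for a writing assignment.
essay_excerpt = "Jordan P. wrote this persuasive essay at Star Vista Elementary."
prompt = (
    "Give constructive feedback on the following student writing sample:\n"
    + redact_pii(essay_excerpt, ["Jordan P.", "Star Vista Elementary"])
)

print(prompt)  # The prompt no longer contains the student's name or school.
```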
Review
This guidance will be reviewed annually, or sooner, to ensure it continues to meet the District’s needs and complies with changes in laws, regulations, and technology. We welcome feedback on this guidance and its effectiveness as AI usage evolves.
Version History
Because AI is an emerging technology that is changing rapidly, as are the laws and rules governing its use, this is a living document and will be updated as needed to reflect changes in this very fluid environment. There are many exciting use cases on the horizon, and as new use cases are evaluated and found safe and effective for use in education, we will update the website.