Introduction
Artificial intelligence (AI) is a rapidly advancing technology, actively changing how we teach, learn, work, and live. This Policy Statement sets forth principles regarding the use of AI in education and specifies the Association’s role in supporting and advocating for students and educators in this domain.
Definitions
For purposes of this Policy Statement, the following definitions apply:
- Algorithmic bias: “Systematic, unwanted unfairness in how a computer detects patterns or automates decisions” (U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations (Washington, DC, 2023), https://www2.ed.gov/documents/ai-report/ai-report.pdf), often based on characteristics and identities such as age, class, culture, disability, ethnicity, gender, location, nationality, political affiliation, race, religious background and practices, and/or sexuality.
- Artificial intelligence (AI): Machine-based systems designed around human-defined objectives to perform tasks that would otherwise require human or animal intelligence.
- AI literacy: Understanding what it means to learn with and about AI while gaining specific knowledge about how artificial intelligence works, the skills necessary to master AI tools, and how to critically navigate the benefits and risks of this technology.
- Data governance: A set of practices that ensures that data assets are formally managed throughout a system/enterprise and that define the roles, responsibilities, and processes for ensuring accountability for and ownership of data assets.
- Educators: People employed by an institution dedicated to pre-K–12 or higher education.
- Generative AI: Artificial intelligence tools that generate text, images, videos, or other content based on existing data patterns and structures.
- Transparency: Open disclosure of how AI systems work, including how they reach decisions and the data used to do so.
Principles
PRINCIPLE 1
Students and educators must remain at the center of education
Learning happens and knowledge is constructed through social engagement and collaboration, making interpersonal interaction between students and educators irreplaceable (Chan and Tsi, "The AI Revolution in Education: Will AI Replace or Assist Teachers in Higher Education?"; McKay and Macomber, "The Importance of Relationships in Education: Reflections of Current Educators"; National Academies of Sciences, How People Learn II: Learners, Contexts, and Cultures). The use of AI should not displace or impair the connection between students and educators, a connection that is essential to fostering academic success, critical thinking, interpersonal and social skills, emotional well-being, creativity, and the ability to fully participate in society. AI-enhanced tools that undermine any of these critical aspects of teaching and learning should not be employed.
We envision AI-enhanced technology as an aid to public educators and education, not as a replacement for meaningful and necessary human connection. To move AI forward as an additive resource and tool, professionally and socially diverse educators (across race/ethnicity, gender, disability status, positions, and institutional levels) must be included in decision-making – inclusive of AI vetting, adoption, deployment, and ongoing use – to guarantee that these tools are used to improve job quality and enhance performance.
AI technology tends to reflect the perspectives—and biases—of the people who develop it. Furthermore, developers may not notice when their tools are biased against or do not adequately reflect the needs of people who differ from them demographically or in other ways. Notably, technology developers are overwhelmingly younger, White, cisgender, heterosexual, male, and people without disabilities (Stack Overflow, 2022 Developer Survey (2022), https://survey.stackoverflow.co/2022/). Actively involving a diverse and intersectional array of educators, including those with disabilities, in the development, design, and evaluation of AI systems ensures technology that is not only compliant with accessibility standards but also genuinely user-centric. Including the diverse and intersectional perspectives and experiences of people who are Native, Asian, Black, Latin(o/a/x), Middle Eastern and North African, Multiracial, and Pacific Islander, LGBTQ+, and from all economic backgrounds and abilities is essential if this technology is to be effective in its educational purpose.
Artificial intelligence should not be used to undercut educators by exposing them to unnecessary surveillance, undermining their rights, or taking over core job functions that are best done by humans. These tenets should be reflected in and protected through collective bargaining, labor-management collaboration, and state laws.
AI-informed analyses and data alone should never be used for high-stakes or determinative decisions. While such data might be included among several other factors, the degree of its importance, weight, and reliability must be carefully considered in matters concerning items such as, but not limited to: employee evaluations; student assessment, placement, graduation, and matriculation; disciplinary matters; diagnoses of any kind; and matters of safety and surveillance. These decisions must rely primarily on the professional expertise and judgment of humans, who must consider equity, diversity, access, human rights, and other appropriate contextual considerations. (See also NEA’s Policy Statement on Teacher Evaluation and Accountability.)
PRINCIPLE 2
Evidence-based AI technology must enhance the educational experience
Artificial intelligence should only be adopted once there is data supporting a tool’s appropriateness and efficacy with potential users and, for instruction-focused AI, its alignment with high-quality teaching and learning standards and practices. This evidence should come either from research conducted and reviewed by independent researchers or from industry-sponsored research that adheres to the same standards of methodology and peer review as independent research. If such research is unavailable, AI may be adopted on a pilot or trial basis if the evidence is being collected and analyzed in a timely manner, with an agreement in place to cease the use of the technology if the results of the research do not show the intended benefits or do not serve educational goals.
Close attention must be paid to the needs of our most vulnerable learners, including students with disabilities, early learners, and emergent multilingual learners. AI technology must not conform to a purely ableist and privileged standard that neither serves nor adapts to the educational needs of students with disabilities. Use cases that aid in the development of effective AI tools in education must be based on a range of disabilities (e.g., learning disabilities, hearing impairments, visual impairments). While some AI technology may improve accessibility and enhance these students’ educational experiences, these students are susceptible to harm if AI is used inappropriately. There must be dedicated research and the establishment of clear guidance to help our schools ensure that AI-enabled technology is effective and appropriate for these students.
It is critical that systems, processes, and structures are created to ensure intentional and ongoing attention is paid to the extent to which biases built into AI technology and uses of AI-generated data further perpetuate racial injustice and social inequities in education. AI tools need to be carefully evaluated by educators, Native communities and communities of color, and rural communities to ensure these tools reflect the diversity of students’ backgrounds and experiences and proactively avoid inequitable access to high-quality technology and internet access. We must also ensure these tools do not subject students who are Native, Asian, Black, Latin(o/a/x), Middle Eastern and North African, Multiracial, or Pacific Islander to higher surveillance than their White peers, perpetuate school-to-prison and school-to-deportation pipelines, or create an over-reliance on content and assessment delivered by AI-enhanced technology rather than that of qualified educators.
Assessment of AI efficacy must not end after a tool is adopted. Innovations in technology, pedagogy, and content are ongoing, and AI tools must be reassessed regularly by educators to ensure they continue to provide the intended benefits and have not created unanticipated problems. Educators must be involved in both the initial and ongoing assessment of AI tools so that AI is used only if it will enhance, rather than detract from, students’ educational experiences and their well-being. Educator involvement is critical to ensure that AI is implemented in ways that are effective, accurate, and appropriate for learners at all levels.
PRINCIPLE 3
Ethical development and use of AI technology and strong data protection practices
Artificial intelligence is far from flawless and requires human oversight, checks, and balances. Primary areas of concern include algorithmic bias, inaccurate or nonsensical outputs, violations of student and educator data privacy, and the considerable environmental impact of AI energy use. AI tools must be carefully vetted prior to deployment and monitored after implementation to mitigate these hazards, guarantee ongoing transparency, and confirm that tools comply with current local, state, and federal laws. States, local districts, and higher education institutions should evaluate (and strengthen where necessary) their existing data governance plans prior to adopting AI tools. Particular attention must be paid to AI tools that aim to play any role in assessing/evaluating students or educators or would have monitoring or surveillance functions. AI tools proposed for any of these purposes should be approached with caution; evaluated, understood, and agreed to by appropriate interest holders (including students, educators, and families); and used with the understanding that AI data models and programming are biased, incomplete, quickly become outdated, and can result in unreliable and harmful results, particularly for Native students, students of color, and students with disabilities.
Educators, parents, and students must be made aware of what and how AI tools are used in schools and on campuses. Educators must receive ongoing learning opportunities that enable them to identify ethical hazards and how to handle them effectively if they arise. Institutional structures, such as review boards or scheduled audits, should also be put in place to enforce high-quality standards for the use of AI. Data collected through AI should be subject to protocols providing transparency about the types of data being collected and how the data is stored, utilized, and protected. These protocols must also clearly articulate whether and to what degree AI is used for any form of monitoring or surveillance in educational settings and how this data will be governed. Additionally, these protocols must ensure the proprietary rights of students and educators in their original work.
Although these technologies operate in virtual spaces, AI and the cloud will consume increasing amounts of energy and require larger quantities of natural resources, which has the potential to increase greenhouse gas emissions. At present, generating a single image using a powerful AI model can consume as much energy as fully charging a smartphone. While it is nearly impossible for researchers to evaluate the full extent of the negative environmental impacts of AI technologies, decision-makers in school settings should be aware of the connection between AI and the environment and be mindful of environmental impacts throughout the planning and implementation phases.
PRINCIPLE 4
Equitable access to and use of AI tools is ensured
Gaps in educational opportunities, resources, and funding negatively affect student outcomes and are exacerbated for students living in rural areas, those who are Native, Asian, Black, Latin(o/a/x), Middle Eastern and North African, Multiracial, or Pacific Islander, and those who are LGBTQ+. This has become clear regarding educational technology, an area where students and educators in under-resourced schools and institutions have struggled to achieve equity. Deploying AI tools will further widen this digital divide if measures are not taken to guarantee access to all students and educators, from early childhood to higher education, regardless of ZIP code. Education systems must not only provide AI tools but also guarantee the technical support, devices, and internet infrastructure necessary to reliably access and use AI in the classroom and at home.
Artificial intelligence must also be used in equitable ways in schools and on campuses. To ensure all students – regardless of race/ethnicity, disability status, emergent multilingual learner status, or location – have access to learning opportunities that use AI to promote active learning, critical thinking, and creative engagement, we must be intentional and proactive in preventing our biases from shaping how students experience AI technology. Educators must be cognizant of the potential for some students, particularly high-need learners, including students with disabilities and emergent multilingual learners, to be relegated to using AI only for rote memorization, standardized assessment, or finding answers to factual questions. Policies and procedures must be in place to guarantee that all students—not only the most advantaged or most advanced—are able to take full advantage of AI technology.
PRINCIPLE 5
Ongoing education with and about AI: AI literacy and agency
Effective, safe, and equitable use of AI technology in education requires that students and educators become fully AI literate and develop a greater sense of agency with this technology. The use of artificial intelligence extends into countless aspects of our personal and professional lives, and AI literacy must be part of every student’s basic education and every educator’s professional preparation and development.
Artificial intelligence is a vital component of the computer sciences but extends far beyond the computer science curriculum. Curricular changes should be made to incorporate AI literacy across all subject areas and educational levels so that all students understand the benefits, risks, and effective uses of these tools. These student learning experiences should be developmentally appropriate, experiential (allowing students to engage with various forms of AI-enhanced technology), and help students think critically about using AI-enhanced technology.
Educators must be afforded high-quality, multifaceted, ongoing professional learning opportunities that help increase their AI literacy and understand what, how, and why specific AI is being used in their educational settings. Learning opportunities must be provided to educators in all positions and at all career stages. Educators must know how to use AI in ways that are pedagogically appropriate within their content areas and for all learners, including early learners, students with disabilities, and emergent multilingual learners. These learning opportunities must also help educators research and assess available evidence about effective AI uses in education; understand AI bias and know strategies for reporting and mitigating the harmful impacts of AI bias; and understand the ethical and data privacy hazards associated with AI-enabled technology and appropriate policies and standards in use by their educational institutions. Educators should be positioned to lead professional learning about the use of AI tools in educational settings.
Association Advocacy and Action
NEA believes that artificial intelligence has the potential to transform the educational experience for our students and the professional experience of educators. Therefore, it is imperative that NEA plays a leading role in ensuring that the transformation is a positive one.
The expansive role that artificial intelligence plays in our education systems continues to grow, and it will impact us all in ways that we have yet to fully understand. NEA and its state and local affiliates should call for and actively engage in coalitions, research, commissions, and committees studying and making recommendations about AI adoption, effectiveness, and safety in education. Artificial intelligence technology offers intelligence without consciousness, and NEA must ensure that the interpersonal human connection between students and educators is of primary importance, along with well-being, safety, equity, and access.
Racial and social justice are deeply held core values of the Association, and we cannot tolerate a wider spread of discrimination, inequity, and injustice in our education systems for any reason, including for reasons related to biases in artificial intelligence algorithms. Students and educators with disabilities, Native people, people of color, and those representing marginalized groups and identities are more likely to be negatively impacted by biased and incomplete AI data and tools and by the decisions that can result from them.
Understanding the technology is critical, but it is absolutely essential that all educators and administrators have ongoing opportunities for the types of professional development described in the NEA Policy Statement on Safe, Just, and Equitable Schools (2022). That is, educators and administrators must have high-quality professional learning opportunities that allow them to develop “cultural competence and responsiveness, including awareness of one’s own implicit biases and trauma, understanding culturally competent pedagogy, and becoming culturally responsive in one’s approach to education and discipline/behavior.”
This skill and knowledge will position educators and administrators to be able to select inclusive AI tools while also applying their pedagogical expertise to ensure the tools are effective and meet the needs of their diverse learners. Further, this knowledge can help educators see and understand biases that may result from AI tools and develop appropriate remedies or approaches to help students succeed.
The NEA will advocate at the federal, state, and local levels to prevent the design, adoption, and use of AI tools and data that are unsafe or harmful, and the Association will be vigilant in applying its core beliefs to its advocacy.
NEA will advocate at the federal, state, and local levels for the environmental impacts of AI to be considered in decision-making processes around the development and application of AI tools. Further, NEA will ensure any of its own materials, tools, or professional learning opportunities related to AI consider and cover its environmental impact.
NEA will advocate at the federal, state, and local levels for the ethical, safe, and appropriate use of effective AI tools and related data and for equitable access to this technology. Further, NEA will develop guidance to help affiliates and members advocate in bargaining and non-bargaining contexts. A critical component of the Association’s advocacy must be to ensure that the voices of students and educators with disabilities, Native People, People of Color, and those representing marginalized groups and identities are meaningfully engaged in policy development, rulemaking, and implementation efforts. Working in partnership with allies, particularly students and parents, will further strengthen the Association’s ability to influence positive policy and practice.
NEA, in partnership with allied organizations, should also develop high-quality learning opportunities for its members on AI literacy, how to use AI in instructional contexts, and issues of AI ethics and equity. These opportunities should be multifaceted in terms of their format to have the greatest reach.