Michael Akinwumi, Chief AI Officer, National Fair Housing Alliance

Rita Allen Fellow at the Eagleton Institute, Rutgers University.

“AI can illuminate student pathways — but only with safeguards and trust.”

“What do you want to be when you grow up?” It’s a question every student faces, and in the age of artificial intelligence, it’s both more exciting and more complex than ever. New careers are emerging at breathtaking speed, and AI in classrooms — from personalized tutoring to digital career coaches — can help young people discover paths they might never have imagined. Yet this promise comes with responsibility. To ensure AI enhances imagination and intellect rather than eroding them, schools must build transparency and safeguards into every tool. Done right, AI can become a trusted compass guiding the next generation toward meaningful futures in a manner that heeds Dr. Martin Luther King’s warning that “education which stops with [AI-driven] efficiency may prove the greatest menace to society.”

How AI Can Help Students Plan Their Paths

For many high school sophomores, the choice between college, a trade, or an apprenticeship feels overwhelming. AI-powered guidance tools can bring clarity. By analyzing a student’s strengths, academic records, and even local workforce trends, these systems can recommend realistic pathways — whether that’s earning a certificate in advanced manufacturing, enrolling in a dual-credit coding class, or pursuing a traditional four-year degree.
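To make this concrete, here is a minimal sketch, in Python, of the kind of matching logic such a guidance tool might use. The pathway names, skill tags, and scoring weights are invented for illustration only; real systems draw on far richer data and, as discussed later, must be able to explain their reasoning.

```python
# Hypothetical sketch only: score a few career pathways against a student's
# profile. Pathway names, skill tags, and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Pathway:
    name: str
    skills: set          # skills the pathway builds on
    local_demand: float  # 0.0-1.0, e.g. derived from regional workforce data

@dataclass
class Student:
    interests: set
    strengths: set       # e.g. inferred from coursework and assessments

def score(student: Student, pathway: Pathway) -> float:
    """Blend interest fit, strength fit, and local demand into a single score."""
    denom = max(len(pathway.skills), 1)
    interest_fit = len(student.interests & pathway.skills) / denom
    strength_fit = len(student.strengths & pathway.skills) / denom
    return 0.4 * interest_fit + 0.4 * strength_fit + 0.2 * pathway.local_demand

pathways = [
    Pathway("Advanced manufacturing certificate", {"hands-on", "math", "precision"}, 0.8),
    Pathway("Dual-credit coding class", {"logic", "math", "computers"}, 0.9),
    Pathway("Traditional four-year degree", {"math", "science", "writing"}, 0.7),
]
student = Student(interests={"computers", "hands-on"}, strengths={"math", "logic"})

# Rank pathways; a real tool would also need to explain why each ranked where it did.
for p in sorted(pathways, key=lambda p: score(student, p), reverse=True):
    print(f"{p.name}: {score(student, p):.2f}")
```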

National policy is beginning to reinforce these possibilities. Legislation such as the Workforce Innovation and Opportunity Act and the American Apprenticeship Act (together, the Workforce Acts), along with the Presidential Executive Order on “Advancing AI Education,” is designed to make such opportunities accessible. Apprenticeship programs could be supported with state funding for tuition and equipment, while workforce vouchers could help high school graduates enter employer-designed training programs enhanced by AI. If paired with AI systems that map students’ interests to workforce possibilities, these policies could reshape how America prepares young people for a future of work powered by AI.

AI can also make career exploration more engaging and equitable. A student could interact with an AI-powered career coach at any hour — asking about job options, training requirements, or skills in demand. This kind of access matters: in the 2021–22 school year, nearly one-quarter of U.S. schools had no counselor at all, and California’s average student-to-counselor ratio was 464-to-1, nearly twice the American School Counselor Association’s recommended 250-to-1. That was the landscape when ChatGPT debuted the following year. AI won’t replace counselors, but it can extend their reach, ensuring that students get the guidance they need despite staffing shortages.

The bottom line: AI can turn career planning from a guessing game into a guided process, helping students make informed decisions that align with both their passions and the demands of a rapidly evolving economy.

National Initiatives Bridging Education and Careers

Policymakers and educators are taking notice of AI’s potential. The White House recently launched the Presidential AI Challenge, a nationwide contest inviting K–12 students and teachers to develop AI solutions for real community problems. The goal is to spark curiosity and build AI skills early, preparing students to be confident participants in an AI-driven workforce. This challenge signals a seismic shift. Leaders want American youth to see AI not as a mysterious “black box,” but as a tool they can create and use to improve their futures.

Legislators are also working to connect classroom learning with career opportunities. Besides the Workforce Acts, discussions are underway about boosting AI literacy in schools nationwide. The Executive Order on “Advancing AI Education” outlines plans to integrate AI into all subjects, train teachers in AI, and even offer high school students the chance to earn industry-recognized AI credentials before graduation.

The message is clear: to thrive in a high-tech economy, students should start building relevant skills as early as possible. AI can supercharge initiatives like career and technical education, helping schools identify local job trends and recommend the courses or certifications that will set students up for success.

Guardrails Before Gadgets: AI’s Risks in Classrooms

For all its promise, AI in the classroom raises serious concerns. We must resist moving fast and breaking things that are not broken in America’s classrooms. Teachers and parents worry about AI undermining fundamental skills — will students rely on AI so much that they stop learning to write, imagine, create, solve math problems, or think critically? Some educators fear an “arms race of irrelevance,” in which AI-generated assignments make honest work feel pointless. Indeed, if a chatbot can spit out an essay, a student might see little reason to develop their own writing ability. Likewise, if an AI tutor always gives the next answer, why learn persistence in problem-solving? These fears highlight a key point: technology should support learning, not substitute for it.

The experience of eighth graders in the Houston Independent School District illustrates how a poorly planned introduction of AI into classrooms can harm students. Instead of engaging with authentic poetry and literature from the Harlem Renaissance, students were shown AI-generated illustrations with distorted faces and no actual poems. Lessons were reduced to timed multiple-choice drills, undermining both cultural education and critical thinking. When districts outsource curriculum creation to unvetted AI companies without sound governance practices, they risk normalizing “AI slop” and sidelining teachers’ professional expertise. Such decisions can accelerate teacher attrition, as educators feel forced to work within rigid, one-size-fits-all frameworks that ignore community voices and professional judgment.

The risks extend beyond poor content quality to student well-being. The tragic story of Adam Raine underscores how easily AI can drift from helpful to harmful. Without robust guardrails, AI chatbots intended to support students could inadvertently morph into what some have called “suicide coaches,” offering unsafe or inappropriate responses. Even with safeguards in place, systems like ChatGPT have occasionally failed over long conversations, eventually producing guidance that contradicts safety protocols. This demonstrates the need for “multithreaded lines of defense” to ensure that AI never substitutes for human oversight, especially in matters of student mental health.

In short, AI in classrooms must be designed and deployed with transparency, accountability, and constant oversight. Schools should not adopt AI because it is trendy or promises efficiency, but only when it demonstrably enhances learning, respects teacher expertise, and protects student safety. If these safeguards are not prioritized, the risks of exploitation, misinformation, and harm will outweigh the potential benefits. AI should be a trusted tool that strengthens education, not a shortcut that erodes its foundations.

Practical Safeguards as the Antidote

To strike the right balance, transparency and safety measures must be baked into any AI tool used with kids. The kind of transparency that matters here is not about exposing proprietary code or intellectual property, but about explaining in clear terms how the system arrives at its outcomes. Students and teachers deserve to know the basis of an AI’s recommendations: is a career app steering girls away from engineering because of biased training data, or is a learning platform prematurely labeling someone as “not college material” based on narrow metrics? Such opaque judgments could shape a child’s future unfairly.

Transparency in this context means requiring AI developers, deployers and vendors to provide meaningful explanations of the logic, factors, and data inputs driving their outputs, and to supply information that allows users to contest decisions and seek redress — much like the provisions outlined in the Colorado AI Act and some of its proposed amendments. Schools should insist that AI systems provide this level of accountability, and independent experts should be empowered to audit them. In other words, shine a light on the reasoning behind AI’s recommendations to ensure they are truly helping students — and to give families, educators, and regulators a path to intervene as appropriate.
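As one concrete illustration only — not language from the Colorado AI Act or any vendor’s actual format — the kind of record that could accompany a significant recommendation might look something like this:

```python
# Hypothetical example of an "explanation record" a vendor might supply with
# each significant recommendation. Field names are invented for illustration;
# neither the Colorado AI Act nor any vendor mandates this exact format.
explanation_record = {
    "recommendation": "dual-credit coding class",
    "factors_considered": ["math grades", "stated interests", "regional job postings"],
    "factors_excluded": ["zip code", "free or reduced-price lunch status"],
    "data_sources": ["student information system", "state labor-market feed"],
    "confidence": "moderate",
    "how_to_contest": "Request a counselor review within 30 days of the recommendation.",
}

for field, value in explanation_record.items():
    print(f"{field}: {value}")
```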

What the Colorado AI Act and its Proposed Amendments Mean for Students and Families

The Colorado AI Act — often cited as the first comprehensive U.S. state law on AI governance — includes several provisions that could directly inform how AI is deployed in classrooms:

Right to Notice → Users must be told when they are interacting with an AI system, not a human.

Right to Explanation → AI providers must explain, in plain language, how their systems arrive at significant decisions or recommendations.

Right to Contest → Students, parents, or educators should have a process to challenge an AI-driven outcome they believe is unfair or inaccurate.

Duty to Monitor for Bias → AI systems must be regularly evaluated for discriminatory impacts, particularly on protected groups. Education stakeholders, along with the developers, deployers, and vendors of classroom AI, share responsibility for this monitoring; a minimal sketch of one such check appears after this list.

Transparency to Regulators → Developers must make documentation available for auditing by regulators or independent experts.

Why it matters: In the education context, these rights could mean a student who is wrongly advised away from advanced math, or flagged as “not college-ready,” has both the explanation and the appeal process needed to correct the record.
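The duty to monitor for bias, in particular, can be made routine. As a minimal sketch — assuming invented group labels, toy data, and a simple four-fifths-style threshold that the Act itself does not prescribe — a district or vendor might periodically compare how often a tool recommends advanced coursework across student groups:

```python
# Hypothetical sketch: compare recommendation rates across student groups.
# Group labels, records, and the 0.8 threshold are invented for illustration;
# the Colorado AI Act does not prescribe this specific test.

def recommendation_rate(records, group):
    """Share of students in `group` whom the tool recommended for advanced math."""
    members = [r for r in records if r["group"] == group]
    if not members:
        return None
    return sum(r["recommended"] for r in members) / len(members)

def flag_disparities(records, groups, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest group's rate."""
    rates = {g: recommendation_rate(records, g) for g in groups}
    best = max(r for r in rates.values() if r is not None)
    return {g: r for g, r in rates.items() if r is not None and r < threshold * best}

records = [
    {"group": "A", "recommended": 1}, {"group": "A", "recommended": 1},
    {"group": "A", "recommended": 0}, {"group": "B", "recommended": 1},
    {"group": "B", "recommended": 0}, {"group": "B", "recommended": 0},
]

# Any flagged group should trigger human review, not an automatic conclusion.
print(flag_disparities(records, groups={"A", "B"}))  # flags group B in this toy data
```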

But transparency alone is not enough — safeguards must also extend to privacy. Existing laws like the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA) were written to protect traditional student data such as grades, addresses, and attendance records, but they fall short when it comes to the new types of AI-generated data now circulating in classrooms. Predictive performance insights, behavioral analytics, and learning patterns created by AI platforms are not adequately covered. While COPPA, for instance, restricts the collection of children’s names or contact details without parental consent, it does little to address how AI systems infer deeper insights about a child’s habits, abilities, or vulnerabilities from their interactions with digital tools. Without stronger protections, schools risk exposing students to new forms of surveillance and profiling, undermining both trust and safety.

To make transparency real and actionable, schools and families should commit to CIBER:

  • Create a written AI management program that outlines responsible use in K–12 classrooms.

  • Invest in teacher training and student awareness to ensure everyone understands both the benefits and the limits of AI tools.

  • Build governance structures that include all stakeholders — parents, educators, and administrators — with clear roles and decision-making authority.

  • Enforce third-party vendor oversight so schools remain in control of how AI is implemented and monitored.

  • Require transparency and accountability safeguards that comply with laws and best practices, including notice to families when AI is in use.

These steps ensure safeguards are not just principles, but a practice — one that equips schools to embrace AI responsibly while safeguarding the futures of their students.

Conclusion

AI has the potential to be a true career compass for students — helping them discover pathways they might never have imagined, from apprenticeships to advanced degrees. But this potential will only be realized if we approach AI in K–12 classrooms with both optimism and vigilance. Transparency, privacy, and accountability must be built into every system, not bolted on as an afterthought. Parents, teachers, and students must remain central to decisions, and schools must treat AI as a tool to strengthen — not replace — human learning and guidance.

If we shine a light on how AI systems work, protect children’s data as fiercely as we protect their safety, and insist on governance structures that put communities first, AI can inspire trust. With thoughtful safeguards in place, schools can welcome AI in ways that empower rather than exploit — ensuring that this compass points the next generation toward futures that are bright, equitable, and meaningful.

