Despite the federal government remaining closed for more than a month, multiple bills related to AI and students have been introduced in the past several weeks. On October 28th, Senate HELP Committee Chair Bill Cassidy, R-LA, dropped his Learning Innovation and Empowerment (LIFE) with AI Act, which focuses on upgrading student privacy protections and parental choice. This bill follows his September introduction, along with a bipartisan, bicameral group of legislators, of the Recommending Artificial Intelligence and Standards in Education (RAISE) Act, which would encourage states to develop elementary and secondary school academic standards for artificial intelligence and other emerging technologies. Also in October, Senators Josh Hawley, R-MO, and Richard Blumenthal, D-CT, announced their introduction of the Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act of 2025, which would require artificial intelligence chatbots to implement age verification measures. If those weren’t enough, a package of House legislative proposals is expected as soon as the federal government reopens.

The Cassidy and Hawley/Blumenthal bills are the most substantive bills introduced in the AI space to date, and each warrants attention.

Sen. Cassidy’s legislation would make several changes to existing federal laws, including the Family Educational Rights and Privacy Act (FERPA) and Title II-A of the Elementary and Secondary Education Act; create a Golden Seal of Excellence in Student Data Privacy; and establish a Privacy Technical Assistance Center at the U.S. Department of Education. Upon the bill’s introduction, Cassidy stated, “AI holds enormous potential to meet children’s unique learning needs. As AI increasingly gets intertwined with students’ education, the LIFE Act empowers parents to ensure their children are protected from potential harm.”

Here are key pieces of his bill:

Establish a Golden Seal program recognizing distinction in student privacy practices. The seal would be open to individual schools and school districts that have “met the highest standards of student data privacy through proactive parental and eligible student engagement and consent management.” Among other criteria, eligible recipients would have to show that they had “implemented and maintained an instant verification technology system for not less than 1 year” that: 1) provides parental notifications that a school or district intends to use educational technology in the classroom; 2) provides information about the technology’s purpose and data collection practices, including opt-out possibilities; 3) can collect parental and student consent; and 4) includes a mechanism to opt out of the release of some or all of a student’s directory information.

Institute FERPA changes including:

  • Mandating that each school district provide public notice of the categories of student information it has designated as directory information, as well as of parents’ rights to opt out of the release of some or all of their student’s directory information.
  • Prohibiting the use of student photographs for facial recognition AI without parent consent.
  • Redefining “educational records” to include “any data or materials which – (i) contain information related to a student, including data related to academic performance, attendance, health, and discipline; and (ii) are maintained by an educational agency or institution or by an entity acting for or in coordination with such agency or institution.”
  • Instituting new requirements for schools contracting with third parties for education technology, including: public notice of proposed contracts; third-party certifications on student data privacy; and Department of Education investigations into non-compliance complaints, with an online listing of violations.

Create at the U.S. Department of Education a “model student data privacy agreement for use by an educational agency or institution as part of a covered contract.”

Establish at the Department a Privacy Technical Assistance Center “to build the capacity of educational agencies and institutions, state educational agencies, and other entities, including education technology providers, to protect the privacy of students, families, educators, and other school professionals.”

Amend Title II-A of ESEA to add the following eligible use of funds under high-quality professional development: “effectively integrate existing and emerging technologies into curricula and instruction (including education about how to use artificial intelligence to enhance personalized learning, in addition to the harms of copyright piracy and improper student use of artificial intelligence.)”

The Hawley/Blumenthal bill

The Hawley/Blumenthal bill is aimed squarely at addressing recent negative stories about student interactions with AI chatbots, including a recent New York Times piece about a student who died by suicide following conversations with an AI chatbot. In introducing it, Sen. Hawley said: “AI chatbots pose a serious threat to our kids. More than 70% of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

This bill would bar “any person who owns, operates, or otherwise makes available an artificial intelligence chatbot to individuals in the United States” from allowing minors, defined as those under the age of 18, to access or use AI companions. It would also require all manufacturers of AI chatbots to prevent minor access and usage by using an age verification system or engaging a third-party contractor for that purpose. This would have deep ramifications for K-12 products that contain chatbots and even for school districts that create their own chatbots.

Other mandates in the bill include:

  • frequent disclosures to users during chat sessions “that the chatbot is an artificial intelligence system and not a human being”
  • programming “to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being”
  • barring a chatbot from representing that it is a “licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional”

This bill would allow the U.S. Attorney General and state attorneys general to bring federal civil actions against violators of these provisions.

The bill would also establish federal criminal offenses, with monetary penalties, for designing, developing, or making available an artificial intelligence chatbot, either knowingly or with reckless disregard that it can engage in simulated sexually explicit conduct, or promote, encourage, or coerce suicide, self-injury, or physical or sexual violence.