Welcome back to Think Forward. I hope the dog days of summer are treating you well. In today's post, we tackle the question many skeptics are asking: is AI destined to fail? Regular readers may know that we at CRPE think this is the wrong question.
AI skeptics might find satisfaction in the provocative headline of a recent interview with Ben Riley in The 74: “AI is Another Ed Tech Promise Destined to Fail.” Riley is education's Gary Marcus (the well-known AI skeptic), questioning the efficacy of new technologies in the classroom.
While Riley's caution is understandable, dismissing the potential of AI in education overlooks its current benefits. AI is already helping educators work more efficiently and potentially more effectively. A new paper from my ASU colleagues highlights how educators integrate Gen AI into their classrooms. The study finds that teachers participating in Facebook forums say that AI saves them hours on planning, lesson development, and assessments. Teachers also say AI aids in differentiating lessons for students at varying levels and empowers students to seek information and feedback independently.
New platforms that seek to provide real-time coaching feedback for teachers show promising early results, and some new research suggests that AI-powered tools can save teachers time. New tools are coming fast and furious, promising to make learning more engaging for students.
AI is still in its early stages, but these are significant advancements, and we dismiss them at our peril. Moreover, AI is here to stay, soon to be “deeply embedded in the fabric of our society.” Arguably, it is already transforming other sectors, such as health care, with a robust research community and growing evidence base. Instead of dismissing AI as just another fad, we must focus on optimizing its use in our classrooms. We need to understand better what AI can and can’t do. We need a clear educational vision and must engage with the ed tech community to see if AI can help achieve it.
However, Riley's point about aligning technology with how children actually learn is crucial. The history of ed tech is filled with overhyped tools that failed to deliver. Riley rightly emphasizes the need for rigorous research to assess the efficacy of AI learning tools.
Traditional research methods may lag behind the rapid development of AI, but that doesn’t mean we should abandon promising technologies. This isn't a debate about whether or not to adopt AI tools—they're already in classrooms, embraced by teachers who see their value. Our task is to learn how to test and scale these technologies effectively while keeping “DO NO HARM” as a guiding principle.
How can we achieve this? I decided to embrace an AI tool myself and ask ChatGPT what education can learn from other sectors about assessing rapidly evolving Gen AI applications. The response was straightforward but useful:
Conduct pilot studies before widespread release.
Encourage peer-reviewed research.
Establish ethics review boards to protect students.
Develop standards and guidelines for AI use and evaluation in education.
Set clear criteria for assessing the efficacy and safety of AI tools before wide implementation.
Continuously monitor the impact of AI tools on student learning and well-being.
Set up reporting systems for educators, students, and parents to provide feedback and report issues.
These steps are practical and achievable, though they will require changes in how we conduct education research. We could establish test sites in low-stakes settings like after-school and summer programs to increase participant recruitment. A coordinated research agenda and a special-purpose peer-reviewed journal could help organize and disseminate knowledge about what works. Interim outcome measures could inform practice when timely assessment of test score improvements isn’t feasible.
I believe in the transformative potential of Gen AI in education, but realizing it is not guaranteed. It will take coordination to set new goalposts and measure effectiveness. Researchers, funders, and ed tech leaders must work together to ensure AI tools are beneficial and safe for students. This must be our first step toward doing no harm while embracing the reality and potential of what lies ahead.
New Models
The first AI-powered school models are emerging. A new company, Eureka Labs, is starting a “new kind of school that is AI native.” I can’t tell from the announcement on X if this is a postsecondary or K-12 school, but their first product will be “the world's obviously best AI course, LLM101n. This is an undergraduate-level class that guides the student through training their own AI, very similar to a smaller version of the AI Teaching Assistant itself.”
Note that Eureka Labs is a for-profit company, which reminds me of the ill-fated AltSchool. Nonetheless, the concept is interesting and worthy of following and (ahem) research. Here’s a description from the founder:
How can we approach an ideal experience for learning something new? For example, in the case of physics one could imagine working through very high quality course materials together with Feynman, who is there to guide you every step of the way. Unfortunately, subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world's languages are also very scarce and cannot personally tutor all 8 billion of us on demand.
However, with recent progress in generative AI, this learning experience feels tractable. The teacher still designs the course materials, but they are supported, leveraged and scaled with an AI Teaching Assistant who is optimized to help guide the students through them. This Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. If we are successful, it will be easy for anyone to learn anything, expanding education in both reach (a large number of people learning something) and extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).
New Policy
Several states have released AI guidance documents this summer: New Jersey and North Dakota in June and Wyoming and Minnesota this month. This brings the total number of states with official guidance released to 18 plus D.C. (19, if you include Arizona-specific guidance released by Northern Arizona University)—so, still less than half of all states. CRPE’s previous reporting found that states have been slow to put out advice and that guidance language can differ from state to state. The range of topics in this latest crop shows how state teams continue to diverge in what they prioritize—and what they are proposing schools do.
North Dakota’s guidance includes implementation roadmaps and checklists by role.
New Jersey offers a technical assistance webinar on foundational concepts and recommends links to free applications and online learning resources.
Wyoming focuses on policy development and building out a “guidance team.”
Minnesota’s guidance is light on content but links to six other states’ guidance documents.
The feds also have a new guidance document out. While a lot is happening in the administration right now (understatement of the year), K-12 education needs a steady stream of national guidance and policy around AI, lest we leave all this to districts and states to figure out on their own.
Final Words
I’m reading Sal Khan’s new book Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing). Here’s a quote from the book (I’ll have more to say about it once I finish it!):
"AI’s potential to make education more personalized and accessible is the greatest opportunity for improving learning in our lifetime. The real question is not if AI will transform education, but how we will shape that transformation." - Sal Kahan
Ben Riley condemns AI chatbots because a school-based product called Khanmigo cannot solve this problem correctly:
"If a pen and crayon together cost $2.50, and the crayon costs $2 less than the pen, how much do the pen and crayon each cost?"
Apparently, it stumbled over adding 2 and 2.5.
I am puzzled why Ben didn't just ask OpenAI's GPT-3.5, which was freely available in April, when he tested Khanmigo. I provided the same prompt, verbatim, to ChatGPT and got this—
Let's denote the cost of the pen as p and the cost of the crayon as c.
According to the problem, we have two pieces of information:
1. The total cost of the pen and crayon is $2.50.
2. The crayon costs $2 less than the pen.
We can set up the following equations based on this information:
p + c = 2.50
c = p - 2
We can substitute the second equation into the first equation to solve for p:
p + (p - 2) = 2.50
2p - 2 = 2.50
2p = 4.50
p = 2.25
Now that we have the cost of the pen, we can find the cost of the crayon:
c = p - 2
c = 2.25 - 2
c = 0.25
So, the pen costs $2.25 and the crayon costs $0.25.
ChatGPT has no problem adding 2.25 and 0.25, nor any problem explaining it to a student. Feel free to ask ChatGPT "What is the solution for x in 4x^2 - 5x - 12 = 0?"
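For anyone who wants to double-check the algebra outside a chatbot entirely, here is a minimal Python sketch using sympy (my own choice of tool for illustration, not anything Khanmigo or ChatGPT runs under the hood) that verifies the pen-and-crayon answer and solves that quadratic:

```python
# Minimal sanity check of both problems above, using sympy.
from sympy import symbols, Eq, solve, Rational

p, c, x = symbols("p c x")

# Pen-and-crayon system: p + c = 2.50 and c = p - 2
print(solve([Eq(p + c, Rational(5, 2)), Eq(c, p - 2)], [p, c]))
# pen p = 9/4 ($2.25), crayon c = 1/4 ($0.25)

# The quadratic from the prompt: 4x^2 - 5x - 12 = 0
print(solve(Eq(4*x**2 - 5*x - 12, 0), x))
# roots: (5 - sqrt(217))/8 and (5 + sqrt(217))/8
```

Note that the quadratic's roots are irrational, (5 ± √217)/8, roughly -1.22 and 2.47, so the student has to interpret a messy answer rather than guess a clean one.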
I might be missing the point here, but it seems Ben himself illustrates the problem squarely: he's confining himself to the School World and missing out on what the Real World already offers.
I find it disturbing that in both articles, this one and The 74's, short shrift is given to students actually learning how to use AI chatbots. That's actually an overstatement: there really is no focus on expanding the knowledge and skills of the student.
Even the articles listed at the bottom of this page (Welcome to Think Forward: Learning with AI, Think Forward: AI and educational assessments, and Can we educate every child "to the max"?) assume that children should be taught essentially the same knowledge and skills they acquire in today's classrooms. It's legitimate for an organization or writer to focus on a specific segment of the business, teacher tech in this case, so criticizing the lack of discussion of young humans learning to use this new technology in higher ed or at work might be unfair. (I'll click through and read the other CRPE articles to more fully understand the mission of this substack.)
Admittedly, the article's title, "Is AI 'Destined to Fail'?", prompted me to think it was about teaching and education fads, and my observations from 20 years of teaching of teacher resistance to change in the classroom kicked in. It's hard not to notice that the classrooms of the 2020s look little different from Miss Landers's 1950s classroom. The social and technological achievements and tools that transformed retail stores, airplanes, communications, mass media, the family, and more have missed schools, leaving them essentially unchanged. Unfortunately, this means students today are living in pretty much the same educational structure their great-grandparents "enjoyed".
There's a reason the word "fad" is used in this article.
Employers are eager to hire talent with ChatGPT experience. Gartner estimates that 70% of white-collar workers use ChatGPT daily in their work tasks. Yet whole school divisions are banning access to AI chatbots for both students and staff. Color me more than a little puzzled.