How close are we to Artificial General Intelligence?
Now? Soon? Never? Plus more on new tools, research, and commentaries.
We’ve seen a lot of upheaval in the education space over the past few weeks. At CRPE, we are trying to stay focused on the future and are working on developing bold, evidence-based ideas that people can agree on. Our newly launched Phoenix Rising series is a forum for collecting and discussing such ideas—watch for much more soon. In the meantime, Gen AI is taking us full throttle into the future, like it or not. Here are some of the most intriguing things I’ve come across lately.
Ezra Klein’s Chilling Prediction on AGI
The prognostications about if and when we might see Artificial General Intelligence (AGI) are all over the place. Some experts say "never," others say "maybe 20 years from now," and still others say "soon!" It’s hard not to roll your eyes and dismiss the whole question as abstract and unknowable. But it caught my attention that Ezra Klein made this statement to open his latest interview with Ben Buchanan, former special adviser for artificial intelligence in the Biden White House:
For the last couple of months, I have had this strange experience: Person after person — from artificial intelligence labs, from government — has been coming to me saying: It’s really about to happen. We’re about to get to artificial general intelligence.
What they mean is that they have believed, for a long time, that we are on a path to creating transformational artificial intelligence capable of doing basically anything a human being could do behind a computer — but better. They thought it would take somewhere from five to 15 years to develop. But now they believe it’s coming in two to three years, during Donald Trump’s second term.
They believe it because of the products they’re releasing right now and what they’re seeing inside the places they work. And I think they’re right.
If you’ve been telling yourself this isn’t coming, I really think you need to question that. It’s not web3. It’s not vaporware. A lot of what we’re talking about is already here, right now.
I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.
And while there is so much else going on in the world to cover, I do think there’s a good chance that, when we look back on this era in human history, AI will have been the thing that matters.
The whole conversation is fascinating and worth watching in its entirety. There is an interesting discussion about AI being the first transformational technology that the government did not fund and another about the lack of good thinking around the likely impact of AGI on jobs and job displacement. I loved the mention of this quote from President Kennedy:
For space science, like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man. And only if the United States occupies a position of pre-eminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war.
- John F. Kennedy
On this, the fifth anniversary of the start of the Covid-19 pandemic, I have been looking back on my tweets from March 2020. In one, I called for people to prepare for the possibility that schools would have to close, warning that failing to prepare would mean months of lost learning time. I feel the same sense of urgency around AI. We do not know how Gen AI will develop, or whether and when AGI will arrive. But shame on us if we wait until it’s upon us to make policy, design for the outcomes we want, and anticipate what the changes will mean for the future of work, education, and society.
New Research
On the theme of whether AGI is around the corner: a new research paper shows that the length of tasks AI can complete has been doubling roughly every seven months since 2019. Our friend John Bailey wrote a great summary.
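To make that doubling rate concrete, here is a quick back-of-envelope sketch. The seven-month doubling period comes from the paper; the five-minute starting task length is a hypothetical placeholder I chose for illustration.

```python
# Back-of-envelope: if the length of tasks AI can complete doubles
# every 7 months, capability grows by a factor of 2^(months / 7).
DOUBLING_PERIOD_MONTHS = 7

def task_horizon(start_minutes: float, months: float) -> float:
    """Task length (in minutes) after `months` of steady doubling."""
    return start_minutes * 2 ** (months / DOUBLING_PERIOD_MONTHS)

# A hypothetical 5-minute task horizon after 1, 2, and 3 years:
for years in (1, 2, 3):
    print(f"{years} year(s): ~{task_horizon(5, years * 12):.0f} minutes")
```

At that pace, the horizon multiplies by more than 3x per year, which is what makes the finding so striking.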
Another study reports interesting, albeit dated, survey results from 2023 on teacher attitudes toward Gen AI and their professional development preferences. Key findings:
Teachers’ resistance to AI chatbot use increased with the grade level taught, suggesting more attention should be paid to grade-specific pedagogical concerns, particularly in the upper grades.
More experienced humanities teachers tended to be more conservative about allowing students to use AI chatbots for tasks associated with writing.
Teachers’ top concern, for both themselves and their students, was overreliance on AI, though it was mentioned more often in relation to students. In addition, 13.58% of teachers identified misinformation as a concern for themselves, while only 3.17% did so for students. And while 13.62% of teachers believed chatbots could increase their own work efficiency, only 4.19% thought so for students.
Though teachers are eager to learn about AI, their professional development needs vary across grade levels, suggesting PD should be targeted.
High school teachers are particularly focused on acquiring knowledge they can use to develop classroom rules that guide students’ responsible AI use while ensuring the technology enhances learning outcomes.
Future research should prioritize developing empirically validated frameworks that address teachers’ pedagogical concerns while systematically leveraging AI’s potential to enhance student learning outcomes.
New Policy
The UK government released safety guidelines for Gen AI products designed for use in schools. They offer a nice set of expectations for schools and school systems to consider when adopting new AI tools.
And this story from EdWeek offers a helpful reminder about how important it is to engage parents in discussions about AI policy.
Too Good To Be True?
Alpha School, a private school in Texas, is, according to Fox News, “seeing student test scores soar to new heights following the implementation of an artificial intelligence (AI) tutor.” The school, which charges $40,000 a year in tuition and has selective admissions, asserts that students spend only two hours a day on the core curriculum. I’m eager to know more, but AI tutoring has not shown such results elsewhere, so I’d like to see an independent, rigorous evaluation before the school expands further.
New Tools
A custom chatbot to give evidence-based educational advice
I recently came across an interesting custom chatbot from the Learning Agency that draws on evidence from the Doing What Works Library (and the federally funded What Works Clearinghouse) to create a search engine meant to inform classroom practice.
I asked the bot:
What is the best strategy to reduce chronic absenteeism?
It answered:
Implementing a rewards system that provides incentives for attendance, participation, and completion of tasks, along with offering support services for students with attendance issues, promoting parental involvement, and partnering with community organizations to provide additional incentives and resources for students.
Not a bad answer, given what we currently know about effective strategies for reducing absenteeism. By the Learning Agency’s own admission, the tool is far from perfect: it hallucinates, and the database that feeds it applies a very strict standard of evidence and is therefore limited in scope. Not to mention that federal funding for the database, and for the research that feeds it, is in serious doubt right now!
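For readers curious about the general pattern behind tools like this, here is a minimal sketch of a retrieval-grounded chatbot. It is my own illustration under stated assumptions, not the Learning Agency’s actual implementation: the tiny evidence “library,” the keyword retrieval, and the model choice are all hypothetical stand-ins.

```python
# A minimal retrieval-grounded chatbot: look up relevant entries in an
# evidence library, then ask a model to answer using only those entries.
# The "library" below is a hypothetical stand-in, not the Doing What
# Works Library itself.
from openai import OpenAI

EVIDENCE_LIBRARY = [
    {"topic": "absenteeism",
     "finding": "Attendance incentives plus family outreach reduced chronic absenteeism."},
    {"topic": "writing",
     "finding": "Structured peer feedback improved the quality of student essay revisions."},
]

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval; a real tool would use embeddings or a search index."""
    q = question.lower()
    matches = [e["finding"] for e in EVIDENCE_LIBRARY if e["topic"] in q]
    return matches or [e["finding"] for e in EVIDENCE_LIBRARY]

def ask(question: str) -> str:
    evidence = "\n".join(retrieve(question))
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the evidence provided. If the evidence "
                        "does not cover the question, say so.\n\nEvidence:\n" + evidence},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is the best strategy to reduce chronic absenteeism?"))
```

Grounding the model in a curated evidence base constrains what it can claim, though, as the Learning Agency notes, it does not eliminate hallucination.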
Even so, I am fascinated by how AI could leverage existing and future evidence about what works in education. We have seen many such tools go unused in an education system where there is little consequence for failing to use evidence-based practices and little infrastructure to support access to, and implementation of, research-based strategies. But teachers do want to use effective approaches, and I’d like to see much more attention to how AI could bridge the gap between research and practice.
Something to Ponder
This fascinating podcast, How AI Could Change the Future of Music, is worth a listen. It says a lot about how AI may impact creative fields in both cool, Ethan-Mollick-co-intelligence ways and problematic, job-and-intellectual-property-replacement ways. I really appreciated the hosts’ delving into how AI intersects with the creative process:
A lot of people that consider themselves… AI skeptics tend to be very critical of the outputs… but where I see AI being sneakily effective and ultimately potentially transformative is as an input. Maybe we’ll see AI become a universal input to the creative economy across white collar work and writing and music and all sorts of idea generation.
Final Words
Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.
Kevin Roose (host of the podcast Hard Fork), writing in The New York Times: “Powerful A.I. Is Coming. We’re Not Ready.”