The magic of empowering great educators with AI
Updated: May 3
Whether it was the PC revolution, the internet revolution, or the mobile revolution, I have never seen so much change happen so quickly as in the four months since the release of ChatGPT. The advances in generative AI are simultaneously amazing and worrisome, and it’s up to us as a society to make sure we are guiding them in a positive direction.
As someone who is focusing on AI in education, I’m particularly worried that there’s a missing voice in this revolution: the educator.
When you walk into the classroom of a talented teacher, you immediately sense the artistry at play. They are not just giving students the right information at the right time; they bring enthusiasm, an ability to manage a classroom, an ability to connect with their students where they are, and an ability to inspire. Much like a conductor, they orchestrate the right experience to help their students flourish. AI can be one of the instruments at their disposal, but it’s only effective when used in the context of a student’s broader educational experience.
When I moved from Google into education 18 months ago, I didn’t realize how little I knew about what a teacher actually does. Like many technologists, I assumed that since I was successful in school, I’d have the knowledge to build something that would help other students be successful. Realizing the fault in that assumption and continuously learning from educators has made all the difference. I’m now much more attuned to what does and doesn’t work to help a student.
With the pace at which AI is evolving, we’re skipping a critical step of involving educators and learning from them. Instead, technology is being created and educators are being forced to adapt to it. Our ultimate goal is to help students thrive. For us to have that impact, we need to bring educators into the process of creating this technology.
So, how do we do this?
First, we need to focus on building AI technologies that empower and amplify the educator. In my role at Google, I found that the AI products with the most impact were ones where the AI informed or helped an expert rather than worked directly for an end user: for example, alerting a doctor to something unusual in a radiology image, or parsing a mortgage document for a lender. AI is based on probabilities, so, by design, it will never be 100% accurate. Keeping an expert in the decision loop makes the process more informed and lets the expert, not the AI, judge what’s appropriate.
This principle is even more important in education. Because we are serving students, the damage of giving them wrong information can be catastrophic: we may lead them to learn the wrong skills or hurt their confidence in their abilities. We need to exercise caution. If, on the other hand, we use AI to help teachers create lessons, videos, and content, teachers can apply their own expert judgment to orchestrate what goes in front of a student.
Second, we need to deeply involve teachers in the product development process. In tech, a product manager is the voice of the user in the development process. I took this to heart at my new company, Kyron Learning. Over half of our team has experience teaching in a classroom, and that has made all the difference. I’ve hired former teachers not only to build our educational content but also as product managers, engineers, and customer success specialists. Mixing AI experts with great teachers has made our product better, and we learn from each other every day. From our customers’ reactions, I know that the product now is much better than the original vision.
Finally, we need a set of principles to guide how we build AI for good in the classroom. When I started working on AI products six years ago, I quickly learned that AI has more potential for profound good and profound damage than any other technology I’ve ever been involved with. The difficulty is that only 10% of the use cases are clearly good, 10% are clearly bad, and 80% are open to interpretation and dependent on how you build the AI. Consider AI technology that determines whether to lend money to a customer: used in the right way, it can open up opportunities to deserving people who wouldn’t otherwise get a loan; used in the wrong way, it can encode decades of bias and systematically deny loans based on those biases. It all depends on how it’s built and used.
At Kyron Learning, we’ve started by drafting a set of teaching principles, and these are what we use to guide our AI technology. We will keep posting about our learnings to help move this conversation forward.
But no one company can solve this alone, and we need to bring others together to create a clear set of education industry principles for evaluating any use case. Digital Promise and the Engage AI Institute are getting the conversations rolling. We need more people, more entrepreneurs and more educators, rolling up their sleeves and immersing themselves in these conversations. As entrepreneurs, we pride ourselves on staying focused, but we can’t use “focus” as an excuse to close our eyes to the long-term implications of AI in learning.
AI can provide meaningful support to learners worldwide, and I believe we can deliver that support in ethically responsible ways. But we won’t find those paths forward unless we make room at the table for the people who know our students best: our educators. Our role starts with proactively seeking out educators and inviting them into dialogue about how to build this next generation of learning tools.
Do you agree? Want to tackle this challenge together?