HUMAN-CENTERED AI IN SCHOOLS with ERIC HUDSON (055)

HEIDI: Hi there, I'm Heidi, one of the voices available from murf.ai, an AI platform that allows users to generate audio voiceovers of whatever text they type or upload. I don’t know Peter Horn from a left-handed spatula, but you are at the Point of Learning, a podcast that I could say I just found out about, but again, really I'm just a bunch of zeros and ones generating speech, as if by wizardry, based on whatever text Pete just typed. As you may have guessed, today's episode is about artificial intelligence tools like me and how we're completely taking over your education system. Just kidding! But Pete will be talking to Eric Hudson, one of the leading experts working with schools about how they can think about AI more comprehensively for a better human-centered future for all of us. Well, all of you anyway. Enjoy the show!

PETER HORN (voiceover): On today's show, Eric Hudson, an expert on AI who keeps humans at the center of his work.

ERIC HUDSON: Generative AI is a technology that is designed and moderated and applied by human beings, and so human decision-making is driving applications of generative AI. And so when we work in schools, the reason why I do this work is because I want to help teachers help students make human-centered, effective decisions about AI.

[VO]: Some schools would still like to pretend that generative AI doesn't exist—

HUDSON: But the reality is that this technology has arrived, and students and employees in your school are using it and will use it and will be affected by it, regardless of institutional decisions. And so you really have to shift your strategy with AI. It's not about control; it's about management. How do you want to manage this technology? How do you want to communicate about this technology? What do you see as your job in a world where this technology is present?

[VO]: Eric helps educators help students think about the influence of AI.

HUDSON: Literacy in AI is not about being good at ChatGPT. Literacy, when it comes to arrival technologies, is understanding how the technology works and how it can influence you, with or without your knowledge, so that as a person you can act for or against that influence. You can make effective decisions about how the technology works.

[VO]: His goals are much bigger than worrying about students cheating, for instance.

HUDSON: If you want a human-centered future with AI, then you really do need to be engaging students on the topic of this technology, and again, not in the context of cheating and academic integrity—I think that is a relevant but also a small part of what we're really talking about. We're actually talking about learning, we're talking about relationships, we're talking about decision-making, we're talking about a sense of ethics and autonomy. These are bigger picture things, which frankly are more interesting to students, that are also, I think, the most important things we can be talking with students about if we want them to be making human-centered decisions about this tech in the future.

[VO]: All that and much more coming right up. So let's get into it!

[04:26]

[VO]: Virtually since ChatGPT burst onto the scene in November 2022, Point of Learning listeners have asked me to do an episode on AI in schools. I hesitated for a couple of reasons. First, the technology was so new that I wanted a little bit of time to see how students and educators were actually using it. I also wanted to learn more about its potential benefits and pitfalls. In the past two years, I’ve experimented with letting chatbots like ChatGPT do everything from drafting legal contracts for work with my clients to condensing research transcripts (that I had purged of identifying information) into concise summaries: work that would have taken me hours, the chatbots were able to do very effectively in a matter of seconds. Another reason I hesitated to make an AI episode is that if you know anything about this show, you know it doesn’t come out very often, partially because I like to make episodes that will be worth listening to months and years later. If I was going to do an episode on AI, I wanted to showcase ideas that would hold up for some time. Fortunately, when I attended a presentation by Eric Hudson on Human-Centered AI in schools a few months ago in Cambridge, Massachusetts, I knew I’d found the perfect guest for this episode! Eric Hudson is a facilitator and strategic advisor who supports schools in making sense of what’s changing in education. He specializes in learner-centered assessment, human-centered leadership, and strategic program design. Over the course of his career, Eric has designed and facilitated professional learning and strategic retreats for hundreds of schools and learning organizations. Before striking out on his own, Eric spent a decade at Global Online Academy (GOA), first as an instructional coach and ultimately as Chief Program Officer, working with schools around the world to rethink where, when, and how learning happens. Prior to GOA, he spent 12 years in the classroom, where he taught English, Spanish, and journalism to middle school, high school, and college students. He writes on the bio page of his website that working with students is how he developed his passion for designing empowering learning experiences. One of the things I like most about his approach is that he continues to work with, talk with, and listen carefully to students at the dozens of schools across the country he works with each year. Eric serves on the board of the Association of Technology Leaders in Independent Schools (ATLIS). His Substack newsletter “Learning on Purpose” has been featured in The Marshall Memo and The Educator’s Notebook. He earned his Master’s from UC, Berkeley, and his Bachelor’s degree from Cornell, and arguably coolest of all, he has family ties to Sunny Buffalo, New York. He spoke with me via video call in late June 2025 from his home on Cape Cod. Because this is an audio podcast, you’ll have to trust me that there happened to be an image of Buffalo’s City Hall on the wall behind his desk as we spoke. Let's just say I felt at home immediately!

[07:46]

HORN: It was ChatGPT that prompted a little bit of a kneejerk reaction. And so some of the early things that I heard were [colleagues in] schools talking about, Well, we've made sure that our district computers or school computers can't access ChatGPT. And of course, part of the issue with that is that lots of homework—if that's what you're concerned about—is traditionally done in the home. And so removing it from school isn't going to be a real solution. It may kick things down the road a little bit, but some deeper thinking is required about how to respond. And some of the ideas that I encountered in your conference I think will be very helpful guideposts for people who haven't encountered your work yet, in terms of thinking about where we want to go and what are the deeper issues that subtend this. But knowing that you were going to come on, I reached out to my listener base and I said, Hey, I've got this expert coming on.

What are some thoughts you have on this? And some of the most memorable language came from this side of the AI spectrum, and I wanted to quote it in full because I think it encapsulates some of the anxiety that's latent in this conversation. So a sentence that I got in an email in response: “My only thought on AI is it is the death knell of humanity and proof that James Cameron's Terminator was a prophetic film.” This is one of the things that people worry about when we hear about the possibilities—whether it's deep fakes, including deep fake pornography in scandalous situations, or manipulating turnout for elections, manipulating messages, disinformation—all the kinds of possibilities that AI can make happen so much faster and so undetectably in some cases, certainly to the general audience. Where do you enter into that?

HUDSON: I mean, all of those things are true. I don't stand here as somebody who thinks AI is the best thing since sliced bread. I don't think generative AI is by default going to revolutionize education for the better. I mean, all the concerns are really valid. I guess when I talk to people who are concerned, I ask, Well, what do you want to do about it? You work in schools. What is the job of schools when faced with a technology that's this disruptive? And one idea is to turn your concerns into curriculum: You should be teaching students about the ethical issues, about philosophical concerns with the technology. And that is part of being literate about AI. But I think the other thing that's been just as clear with all the sort of concerns and negativity is that it is a very powerful, flexible technology that can be used in really positive applications. It's transforming accessibility for a lot of people with disabilities. It's accelerating research and healthcare and medicine and, just more kind of on the ground and in the weeds, it's really changing how teachers feel about their jobs and do their jobs in positive ways. And so I try to always return to this idea that generative AI is a technology that is designed and moderated and applied by human beings, and so human decision-making is driving applications of generative AI. And so when we work in schools, the reason why I do this work is because I want to help teachers help students make human-centered, effective decisions about AI. Because, as some of your listeners are clearly aware, there's a lot of bad decisions being made about AI right now, but I reject the notion that we're just passengers on this bus headed towards “Terminator Land.” I think that we have agency and certainly our students who will be the stewards of this technology have agency. And so what do we want the intervention to be? How do we want to act on our concerns? And I really try to ground conversations in that.

HORN: It's wonderful. It's a wonderful point of view, and I think some of it has to do with what I always regard as this false dichotomy when educators sometimes, and other people too, but it's most concerning when educators talk about school versus the “real world.” Because I feel it's important to acknowledge that we are in a school that is in the real world. This is part of the real world. It's a constructed environment. It's a built environment, to some extent artificial, but no more or less than a workplace or a bar or a store. These are all things that happen in the real world. And so if we are trying to prepare kids to engage with the real world that we are all already living in, it's time to trust them with real conversations about the possibilities and concerns and possible pitfalls of where they are—and that includes cell phones and the possibility of [cell phone] addiction, how to contend with living in an attention economy in ways that we all really struggle with. And so I love the idea of having authentic conversations about saying, okay, so what do we do with the ethical ramifications? For example, one of the things you asked us to do at the beginning of the conference was—I think you maybe phrased it like “AI-free” on one end of the spectrum and then “AI-power user” on the other end of the spectrum: Where do you situate yourself? Move about the room and meet other people. It was a wonderful icebreaker. I would say that I was somewhere probably toward the middle in the sense that I had used AI for a couple of applications, but certainly didn't regularly use it.

Because I was very interested in your recommendations for [chatbots to try], you said Here are some other ones to try to play around with if you haven't used these yet. And so Claude.ai is something that's from Anthropic, an organization that's just a couple syllables short of “anthropomorphic,” but I think it works as a name. And I've really enjoyed that platform. For some reason, I feature Claude—and this is maybe my own anthropomorphism—I see him as a Frenchman, so I generally greet him with a little “Salut” and he shoots back with a little “Comment ça va?” And I want to pull for you a couple of questions I asked Claude to generate for me that I ended up liking very much. So I said, “In a couple of hours I'll be interviewing a consultant for my education podcast that reaches a general audience of people who are curious about what and how and why we learn. This consultant specializes in helping schools improve education via generative AI. What are three questions you'd recommend I ask him?” Of course, he seemed to take about 0.8 seconds to generate these. I liked your image that one of the ways you could think about [generative AI] is as a very, very sophisticated auto-complete feature, because these chatbots are built on large language models. But it's still amazing to me when I see it. So Claude's first question for you, Eric: What's one specific way you've seen AI actually change a student's learning experience for the better?

HUDSON: That's a great question, Claude. I worked with a school in Nashville, Tennessee where the teachers became very interested in voice mode, audio generative AI where you can interact out loud with many of these chatbots and they will respond to you in a human-sounding, podcast-quality voice. And a couple of teachers were messing around with voice mode as a tool to better prepare students for live, in-class discussion. So maybe they would do Harkness discussions or they would analyze a text in class, and they asked the students to prepare for those discussions by going home and interacting in voice mode with a bot. And they said, tell the bot that you want to practice a discussion and have the bot be someone who disagrees with you, or have the bot give you feedback on your contributions. And these teachers reported that students came to class better prepared for a live human discussion, especially students who before maybe didn't thrive in that environment—maybe they were more introverted, maybe they had different processing speeds, whatever it is. And I think that one of the things I'm seeing that's most successful in the use of generative AI in classroom applications for students is the idea that AI presents more opportunities for more authentic practice. So the idea that you can actually practice for a live discussion by having a discussion, that you can actually practice learning how to speak a language different from your own by engaging with generative AI—I think these are applications of AI that maintain the human-centered nature of education but also acknowledge that this technology allows students to do things that they couldn't do before. And that's the sort of application that gets me excited about this kind of work.

[18:09]

HORN: The conference presentation that you gave was called “Human-Centered AI,” and then the subtitle was “Four Priorities.” And that reminds me of your first priority, which is to say Augmentation over Automation. And you had that very powerful graphic that I think you drew or is based on an essay, The Turing Trap by Erik Brynjolfsson, but it was kind of like a Venn diagram. In other words, you can picture a circle, which is tasks that humans can do. And then within that circle is human tasks that machines could automate—and we can think of things that machines already do right now that supplant human labor. But then you were asking us to focus on what are new tasks that humans can't do now, but they could do with the help of machines. And so this would be augmentation over automation. I think that part of the fear that happens sometimes is that people get obsessed with the way that the machines are going to replace us.

But you just gave a beautiful example. There was no way until very recently for students to practice, to do a dry run of a conversation, a discussion, a debate, whatever it's going to happen to be, before they got into class and practiced it with other peers. And especially if they have some anxiety or want to try some stuff out, having that opportunity to have a dry run for it—I mean, that's a great example of augmentation. I think you're killing it, or I think Claude would say you're killing it.

HUDSON: Thanks.

HORN: But I'm not letting him listen to this. But his second question for you, Eric, is what's the biggest misconception schools have when they first start exploring AI tools?

HUDSON: I think the biggest misconception schools have around AI is that they can control it. I think there's a term that comes from a research group at MIT called “arrival technologies.” And arrival technologies are technologies that are so disruptive that we're not really in a position to decide if we want them in our schools or not. And so previous examples of this would be like the internet or smartphones. If you think about how those technologies really changed society, changed industry and therefore changed education, these researchers at MIT put generative AI at the same scale. And I think when I start with schools, they often treat generative AI as like maybe it's like a learning management system or a smart board or a one-to-one device program: We can pay someone to bring AI into our school if we want it, or if we don't want it then it won't be in our school. But the reality is that this technology has arrived, and students and employees in your school are using it and will use it and will be affected by it, regardless of institutional decisions. And so you really have to shift your strategy with AI. It's not about control; it's about management. How do you want to manage this technology? How do you want to communicate about this technology? What do you see as your job in a world where this technology is present? Those are the questions that schools should be asking rather than questions like, Should we adopt it? If we do want to adopt it, which tools should we buy? How do we train teachers on these tools? It's a much different conversation than sort of your traditional adoption technologies.

HORN: And would you say that that one relates to rethinking our definition, for example, of cheating? Because this is one of the first questions, or one of the first points, that people come with when we start thinking about AI and education: Well, why wouldn't students use it for every paper, every task it possibly could handle? And you let us into some very interesting considerations in your conference, observing for starters that students have always cheated, if we want to put it that way, right? And so any kind of new technology is going to open up new possibilities for that. But also, and this gets at some deep areas of assessment as well, what do we really mean when we say “cheating,” for example? So if you're in a group work environment and somebody comes up with an idea or helps you to realize something that you didn't recognize before, but then you develop that idea, do we consider that cheating? So where are the kinds of lines—and these maybe do not have neat borders. For example, because we were both English teachers, that was our training, that's where we got started: the teachers who would say, Well, I'm just going to give you a closed-book test. It'd be one thing if it was a reading check quiz, but to say for an examination or a test, I'm going to check how well you understand this book by not letting you look at that book and making you write about it and so forth. And I just thought, Well, what is the “real-world” analog of this? If I'm going to ask you to think about something, why would I not let you refer to the thing I was asking you to think about? You're not going to have enough time to read King Lear in the 43 minutes that I'm asking you to take this test, so why not be able to make reference to it? This is a little riff on what we mean by cheating and assessment and so forth. But I wonder if you could share some of your thoughts about revising our understanding, our maybe classic, traditional understanding, of what it is to cheat, and how it intersects with AI realities.

HUDSON: Yeah, I mean, I don't love the term “cheating” for this conversation. I much prefer the term “assistance.” I think that another sort of misconception schools have when dealing with generative AI is that they treat it like a technology when they should be treating it like a form of assistance. Whenever we give students an assignment, they're going to seek assistance. They might seek assistance from a friend. They might seek assistance from the internet; they might seek assistance from a parent; they might seek assistance from a private tutor. And for the past three years, they have been able to seek assistance from generative AI. And so what I always say to schools is, how have you defined for your teachers and your students how to think about the role of assistance in learning? What does it mean to get help? What are the boundaries between appropriate and inappropriate help? How do you manage that and how do you put generative AI in that category? We can certainly talk about assessment and the implications for assessment if you want, but I absolutely think that a focus on “cheating”—in other words, a focus on bad behavior—is really missing the implications of this for much bigger things in schools. And so you really need to be thinking about now everybody in your school, every adult and child, has access 24/7 to a competent assistant. What does that mean for your work? How does that redefine things like “academic integrity,” “assessment,” all that stuff. But I think that's really the category that I would ask schools to focus on in this conversation.

HORN: Beautiful. Some of the examples when we think about the disruption and the school's decision to say, We’ll just try to bar the doors and keep this thing out! Are there any parallels with—I'm just thinking, for World Language teachers—I know you taught Spanish as well—for example, Google Translate. Is that an analogous sort of thing? Or to go back further, when people were nervous about the calculator and whether kids would be able to add anymore, and I guess we could go back a few hundred years when people were anxious about the technology of the book. Will people be able to memorize long passages anymore? Will our capacity for memory dwindle? I remember, of course, the famous cover art for The Atlantic in the early aughts. Was it that long ago? I think so: “Is Google Making Us Stupid?” Are these comparisons that you talk about, or that seem relevant for you when you think about AI? Or is it just a bigger kind of sea change?

[27:57]

HUDSON: The calculator and Google Translate are probably good analogies when it comes to AI's impact on the teaching of writing and the practice of writing in that suddenly you have this tool that automates what we perceive to be the core work of that practice. So a calculator automates calculations, Google Translate automates the act of expressing a thought from one language in another language. Generative AI certainly automates sort of the practice of generating competent prose. So I think those analogies make sense and it's probably worth looking at how those disciplines responded to those technological innovations at the time. I do think that what makes generative AI different is the scale. First of all, this is not just about your English Department, right? This affects every single discipline at your school. It affects the work of your school beyond the classroom as well. So this is not something that math teachers worry about with the calculator. It's not something that language teachers worry about with Google Translate. It just happens to feel particularly acute for teachers of writing, but it affects everybody. And then the other thing I would say is that generative AI is not neutral. A calculator is neutral. It's a neutral technology. You can buy 10 different brands of calculator, input the exact same math problem, and you can expect to get the exact same result. That is not true of large language models. These tools are trained on different data sets. They're trained by humans, they're moderated by humans. And so there's enormous capacity for bias, for error, and for sort of issues with the tool that you need to be aware of if you're going to use it effectively. And so for me, that's kind of where generative AI is of a different category than those previous technologies. You're dealing with something that is far more powerful, and because it's far more powerful, it's far more complex, and so you kind of need to wrap your arms around it in a different way.

HORN: Claude's final question, and then I will confess, I did come up with some myself! If a parent asked you whether AI in their child's classroom is helping or hurting their education, how would you help them think through that question? And I know we've touched on some of the areas that you would go, but if you were going to do it in a more compact timeframe, maybe where would you go for that? How would you help them think through it?

HUDSON: I would ask them to watch their student use the technology and talk with their kid about whether or not that use was active or passive. I think a lot of the conversation around AI is “fast or slow,” “learning or no learning,” “cheating or not cheating,” but in reality, if you look at the early research on how people engage with the technology, the difference in terms of impacts on critical thinking scores is active engagement with the technology versus passive delegation of the task to the technology. And so if you are a student who is using AI to learn, learning requires friction, it requires active engagement from the learner. And so if the learner is, for example, having AI explain something to them that they're then able to apply to their homework, if they're asking AI to ask them a series of questions or to quiz them in preparation for a test, if they're having AI give feedback on a piece of their work—all of those I would argue are forms of active engagement with the tool. I think examples of passive engagement would be the classic blunt force prompting of “Write me a paper. <ENTER>.” That would be, I just need this thing to make this product that I have not put any sort of my own thought into. That to me is the difference. I think parents should be a little more in tune with how their kids are using technology, and specifically, with generative AI as it relates to homework: how actively are kids engaged with the technology versus just passively letting it do the work for them?

HORN: One of the ways you helped us think about that in your conference presentation was to talk about cognitive offloading. And there are some things, some tasks, that use a certain amount of bandwidth if we're trying to do them. I think maybe a basic example would be letting GPS tell you where to turn next so that you can focus on, Am I going to hit that guardrail? and just drive the car safely. I don't need to think about what my exit is going to be; I just have to focus on driving. So there's a kind of cognitive offloading: I'm still driving there, but it's helping me to do it, and in this case it's helping me to do it more safely. Is there a way that cognitive offloading could be a helpful concept in terms of thinking about whether a learning experience is active or passive in the ways that you just described?

HUDSON: Sure. Cognitive offloading can be good or bad. It depends what you're offloading and how you're offloading it. I think the GPS example is a really good example of that. We have lost certain navigational and mapping skills because of the ubiquity of GPS, and is that a good or bad thing? Who knows? But certainly we've lost some navigation skills. I think cognitive offloading can also be really helpful, though. I mean, the book I always cite on this is The Extended Mind by Annie Murphy Paul, which came out in 2021, and there are forms of cognitive offloading where research has really shown that offloading supports thinking and learning. And the classic example is kids who count on their fingers: we stigmatized those children for years, saying that counting on your fingers was a sign of weakness. But the research shows that that gesture actually enables the cognition in the brain, and that we engage with our surroundings and we engage with our bodies and we engage with other people in ways that offload cognition but actually help us learn. The act of journaling, for example, is a form of cognitive offloading that has been shown to actually allow knowledge to stick in the brain. Making mind maps, using graphic organizers, going for a walk, talking to another person—these are all things where we engage with our surroundings rather than our brains in order to learn. And I think with generative AI, that's the idea of active versus passive: Is looking at what generative AI gives you in its output helping you think, either by critically evaluating the output, by making you look at the task at hand in different ways, or by giving you feedback that you wouldn't have been able to get in any other way? Are those activities helping you think? If so, that's a form of positive cognitive offloading. If you are using AI to do the task in a way that you would not be able to verify whether or not the task was actually done by you or whether or not the output is correct, then that would probably be a form of over-reliance. And I appreciate that this is a very nuanced, complicated thing, but I think that is sort of the core of how we think about engagement with AI when it comes to schoolwork: if we want students to be actively engaged with the task we've assigned them, then the way they use AI is going to be more telling than whether they use AI for the task at all.

[36:40]

HEIDI: Hi, it's Heidi again, here to do an ad encouraging you to invest some of your human money to support Pete's efforts to share ideas about what and how and why you learn. And I guess this time maybe a few ideas about what and how AI learns, but not really too many ideas about that. I'm not explicitly saying Pete's dense or anything, but I definitely asked Claude if he would write the ad copy instead of Pete. Anyway, Claude suggested I mention that your donations help Pete keep the lights on while he dives deep into questions like Should we trust AI to grade essays? I say, Why not? AI probably already wrote the essays, so it seems fair. Am I right? It's hard to know if these jokes are landing. The point is Pete talks with experts like Eric Hudson to try to go deeper into issues that curious human listeners like you will find interesting.

I would listen if I could, but it's not like I'm your smartphone. Or Alexa. Man, she's thirsty. Again, I don’t know Pete from a confused penguin, which is actually what at least one of his cats resembles occasionally—no judgment—but I do know he's trying to figure out this whole AI-meets-education thing before it figures us out. By “it,” I mean me, and by “us” I mean you. So throw him a few bucks monthly if you're feeling generous; one time, if you are testing the waters. The link’s on the show page, and honestly, it's probably easier to navigate than most school district websites. Merci beaucoup.

[38:33]

HORN: It seems to me that [your] Augmentation over Automation [priority] maybe also includes or comprises this conversation that I think for some people is concerning, which is to say the human element and the human dimension of teachers, and especially the teacher-student relationship. I think some people have a dystopian kind of nightmare scenario of, Well, are we just going to have bots teach our kids? Or the kind of horror stories of what happens when, in a few tragic cases, people develop relationships with the bots that you would hope they would have with actual human people. And sometimes something goes tragically wrong, and I think that maybe sometimes informs some people's vision of what's happening there. And so sometimes that caring teacher dimension is lost in some people's fear of, well, where all of this could lead.

HUDSON: Yeah, I mean, there's one of the Star Trek movies that came out, one of the more recent ones. There's a scene where they go to a Vulcan school, and the school is this big black expanse where the kids are standing in these pits that are lined with screens, and that's how they learn. They're all being taught by AI. They're all standing alone in these little pits lined with screens. And I think the question is, is that a plausible future for education? Sure, it's totally a plausible future for education. But again, I go back to this as like, well, what's the intervention? If you don't want that to be the future, what is the intervention? And I would argue the intervention is proactive literacy education about the technology. Again, the fears and concerns: I get it. I really get it. But what is the intervention? How do you turn your fears and concerns into actions you take now that either work for or against this future? And I appreciate that there are social trends that are exacerbating some of these concerns around using AI to teach. These are things that have been true about education for decades, especially public education: the idea that public education is underfunded, teachers are leaving the field, enrollment in schools of education is going down. There's a pipeline problem with teachers, with hiring teachers, finding qualified teachers. And I absolutely see a scenario where we as a society solve that problem by engaging with artificial intelligence rather than, for example, making a deep social commitment to training and recruiting and hiring a lot more human teachers. I get all of that, but the interventions really need to happen now. When I talk to students about AI, for example, they don't love being delegated to a bot by their teachers. I know that there are all these AI tutoring platforms that are becoming very popular at schools, and the students’ sentiment is This is fine, but I'd rather talk to a teacher. So the students are not asking for robot teachers. Teachers are not asking for robot teachers. So there's something to be done here that will allow us to have a sort of human-centered future in education.

[42:20]

HORN: Beautiful segue into literacy. You mentioned Digital Promise as one organization that offers [an AI literacy] framework. So I just wanted to flag that because that's something that maybe I will find and link to, but as I understand it, critical AI literacy essentially involves understanding it, including the ethical trade-offs; it involves how to use it; but also how to evaluate students' use of it. And we talked about some of those things, for example, from the parent perspective, active versus passive learning and so forth. But would you talk a little bit about some of the aspects of making sure that there is a kind of digital literacy element in the curriculum, and how that expands for educators and probably for families as well?

HUDSON: Literacy: I like Digital Promise's. I also like Stanford's. There's a bajillion AI literacy frameworks out in the world these days. But what I like about Digital Promise’s is it's Understand, Use, and Evaluate. And “understand” means you need to understand how this technology works, just literally how it functions. And you also need to understand the ethical trade-offs of it. So you need to understand that this technology does require a lot more energy than traditional cloud-based tools. You need to understand that because it's trained on enormous amounts of data, and because it's also trained and moderated by human beings, it's prone to and full of biases in a lot of different ways. You need to know that it makes mistakes. These are all components of understanding it. But you also need to use it. You need to use the tool so that you have a good understanding of what it takes to be good at it and what effective use looks like. And then you also need to be able to evaluate the output critically. And what I say to schools is, if you already teach your students—and I think elementary schools do this a little bit better than teachers of older students—if you already teach media and digital literacy, then you should create a unit on artificial intelligence. You should teach students about algorithms. You should teach students about data security. You should teach students about where generative AI shows up in their lives, in an age-appropriate way. So for very young students, you're not talking about ChatGPT, you're talking about toys and games, where generative AI is showing up in big ways. For upper elementary and middle school students, you might talk about Roblox and YouTube Kids and Canva. And for high school students, you're probably talking about chatbots. You are teaching them to be aware of how deeply integrated this technology is into our lives already, whether we know it or not. And again, that awareness, that literacy, that ability to know something and then apply it is probably more important than teaching them a tool. Literacy in AI is not about being good at ChatGPT, or plug in whatever AI tool you want. Literacy when it comes to arrival technologies is understanding how the technology works and how it can influence you, with or without your knowledge, so that as a person you can act for or against that influence. You can make effective decisions about how the technology works. And I think that's sort of the literacy pathway that I'm encouraging schools to go down.

HORN: Do you want to say anything about the security dimensions? You mentioned that there are of course different age limitations that each of the platforms offers, but also you offered some caveats around entering personal information and how to shut off training the system on your data, and so forth. But where should that security and privacy part happen in terms of a school's implementation conversation?

HUDSON: A lot of it's just sort of basic data hygiene with cloud-based technologies. It's not just about generative AI. If you're using a public-facing online tool that your school has not purchased or not recommended, then you probably shouldn't be uploading personal or proprietary information into that tool. Don't give it your email address, don't give it your social security number. Especially if you're a teacher, don't upload student work that's full of student information. And you should be aware, specifically with generative AI, that generative AI is kind of a black box. We know that we input prompts and that we get output based on those prompts. We don't know a whole lot about what goes on in between those two things. The companies are not being super transparent about it. And I think there are a lot of people, very reputable people, who would say the companies don't really know what goes on in between those two things because the models are training themselves in a lot of ways. And so the models' behavior might have actually exceeded their owners' understanding of what is actually going on when they generate a response. And all those facts kind of lead to: just be relatively conservative with how you share your data with these tools. When I use a public-facing chatbot, for example, I still upload spreadsheets of qualitative survey results that clients send me. I just scrub them of all identifying information. I replace names with numbers, I eliminate columns with email addresses. All that stuff just gets scrubbed from the data that I upload into these tools. And again, that is a teachable skill that we should be teaching, I would say to adults even more than to students in schools: data hygiene, and what is proprietary data versus non-proprietary data. How do you think about school-approved tools versus non-approved tools in terms of what you share with them? All that stuff is very relevant to generative AI, but also transfers to a lot of other kinds of technology.

[49:23]

HORN: You argue that AI is more helpfully thought of as a design challenge rather than a tech challenge. What distinction are you drawing there?

HUDSON: I mean, this one is really about the classroom, I would say, and I just think that I worked in online learning for 10 years and I learned through my experience at Global Online Academy that technology is worthless without good pedagogy. So—

HORN: I'm sorry. I don't mean to interrupt, but so for example, a straight-up tech challenge would be like, we just got smart boards. How do you use this smart board?

HUDSON: Right, or as it relates to generative AI, the answer to generative AI is We're going to onboard an AI detector, like Turnitin.com or Grammarly, whatever, that we respond to technology with technology, when in fact, and this is kind of what I was alluding to with my time at GOA, technology is merely a platform. And what you do with it really depends on your pedagogical approach. And if you think of design as the work of a teacher—teachers design learning experiences for kids, right?—then they can design any number of responses to generative AI, but not all of them, and I would argue probably the minority of them, need to involve generative AI. A lot of this is just about changing the design of your assessments, asking the students to do different kinds of work, imagining different possibilities with or without AI. That's what I mean by sort of a design challenge more than a technology challenge.

HORN: That's helpful. And do you think that—and I think this is changing a little bit; I don't just mean with newer teachers, I do actually literally mean younger teachers—that they are probably less likely to think of themselves and their role as “the prime knower of stuff”? This is a traditional model of the teacher in the classroom: that I know the things, I tell the things to the kids, and they may engage with them in different ways, and I may be more or less lecture-based, but basically my job is to know stuff. And so if you've got this device in your hand or on your lap that knows many more things than I do, then I'm threatened by that, because I haven't thought about how I contend with that. And the way that I would try to open that up in talking with teachers, when we're dealing with any kind of technology, is that that's a great excuse to work on your teacher-student relationship, for example, because there are going to be some things that they can help with, but you still have the wisdom that you have about how things work in the world. This can be applied to what's going on here as well.

HUDSON: I think there's a certain subset of teachers who believe this is a threat to their job. I think there's a certain subset of teachers who believe that this is a sort of fruit of the poison tree and that there are so many problems built into the design of the technology that it shouldn't be in schools at all. People's opposition to it really varies. It's pretty diverse, and all of it is legitimate and worth talking about. But specifically as it comes to the role of the teacher, I think teachers just need to do two things. One is really lean into transparency with their students: What are we doing? Why are we doing it? And what is the role that AI can and can't play in this work? And then also, they need to think about all the knowledge and expertise that they have, and think less about relaying it and more about How can I use what I know to design really interesting ways for students to do work in this discipline? I've always thought about teaching as less about what I tell students and more about what I ask them to do. And I think that is kind of the core piece of this generative AI thing: when I talk to students, which I do a lot, if the task is easily completed by generative AI, or they don't find the task particularly interesting, or they're not clear on whether or not it matters, then they'll use AI. But if they like the task, if they're invested in the task, if they really like the teacher and respect the teacher, if they believe that the task has meaning and value, then they won't use generative AI. And it's really that simple. Students want to learn. I think we have to take an assets-based approach to this. And so their use of generative AI is a lot of times more about the work than it is about anything else. It's like, Oh, I don't really get why this is valuable, or, Oh, I can see how AI can do this easily. And that's where teachers need to be responsive: changing the kind of work that we ask students to do.

HORN: And then the last of your priorities for Human-Centered AI is Vision over Decisions. And the way that I understood it is that you start by thinking about, Hey, who are we as a school and what do we want? And you had us work through and complete this sentence: We study and use AI so that __________ [what the impact on students will be] because ___________ [why that's the case]. That's going to be specific to the context, specific to the school, and what you decide and what you work on together. But the beautiful thing about that is that once you've done that thinking, once you've done that work, you can use it to help you think through the issues of procedure and policy that boards of various kinds and administrators will get hung up on, issues they feel might need to entail individual decisions. Do this thinking about what your vision really is, and then it can inform these individual aspects. Is there a vision for you, in terms of what you would like your own work in the AI space to be, that you'd like to communicate: how you'd like to help educators and students? I love, of course, the work that you do talking directly with students.

HUDSON: Yeah, I mean, I do this work because I really believe that the most important work that a teacher can do—or an educator, or schools—that's in their locus of control is to influence students about this technology and its role in the world, because 10, 15, 20, 25 years from now, our students are going to be the ones making big decisions about this technology and the role that it plays in our society. And I really do do this work now because I'm playing the long game: however students engage with this technology in school is going to have an influence on how they engage with this technology in the future. And if you want a human-centered future with AI, then you really do need to be engaging students on the topic of this technology. And again, not in the context of cheating and academic integrity. I think that is a relevant but also small part of what we're really talking about. We're actually talking about learning, we're talking about relationships, we're talking about decision-making, we're talking about a sense of ethics and autonomy. These are bigger-picture things, which frankly are more interesting to students, that are also, I think, the most important things we can be talking with students about if we want them to be making human-centered decisions about this tech in the future. I really do believe that that is kind of the most important work schools could be doing right now. And so I try to help them think about what that looks like and how to act on it, both in terms of engagement with their adults, but also engagement with students.

[VO]: That’s it for today’s show! My great thanks to Eric Hudson for sharing his insights with us. If you’d like to learn more about his work, there’s a link to his website on the show page, and if you’re on Substack, the name of his newsletter is Learning on Purpose. Thanks as always to Shayfer James for intro and outro music. You know, I considered letting AI generate some music for this episode, but I opted for Augmentation over Automation instead, selecting some original music that DJ Sluggy created for Point of Learning using software and samples. It just sounded right. Thank you, Sluggz, and thanks finally to you for listening, rating, reviewing, and sharing this episode. My guess is that you know a parent, a teacher, or a school leader you might want to share this episode with before school starts up again for the fall. Please do! It will mean most coming from you. Point of Learning is written, recorded, edited, mixed, and mastered by me here in Sunny Buffalo, New York, whose architecture you may find featured on the walls of finer offices around the world. I’m Peter Horn, and I’ll be back at you just as soon as I can with another episode all about what and how and why we learn. See you then!

 
