Knowledge Exists in Living Human Tissue
I’m actually very excited about artificial intelligence and education, but I’ll be honest - based on conversations with colleagues, discourse in Bluesky teaching communities, and elsewhere, I seem to be pretty alone in that, and fear rules the day. The fears seem to come in two categories: fear of a loss of humanity and fear of cheating. One thing I think we can all agree on is that this toothpaste isn’t going back in the tube. AI is here, I’m using it, your students are using it, and it is likely that you’ve used it as well, whether you know it or not.
On the Fear of a Loss of Humanity
Kieran Egan gives us some hope in his book The Future of Education (2008): “…there isn’t any knowledge stored in literacy, in all our libraries and databases. What we can store are symbols that are a cue to knowledge. People can read the symbols and not understand the knowledge, or partially understand it, or have only a vague sense of what it means… knowledge exists only in living human tissue, and the literacy codes we use for storage are cues that need to go through a complex transformation before they can be brought to life again in another mind.” Egan is railing here against trading away oral culture in exchange for the pressure placed on early childhood literacy programs, but this quote should also be taken as a reminder that knowledge isn’t stored in books, websites, databases, or LLMs. Knowledge, in Egan’s view, exists in our ability to transform what is written into applicable abstract conceptualizations and concrete experiences (à la Kolb’s learning cycle). Interpreting those codes into knowledge is an active skill that requires practice and guidance. Perhaps it is time to reconsider AI as another means of gathering the literacy codes and cues on which we exercise that skill of interpretation. AI is not a threat to knowledge, but it is absolutely a shift in how we access the cues for it.
On the Fear of Cheating
Let me start off by saying I spent nine years in the classroom. The burden placed on teachers today is immense, and what I’m about to suggest can be easily laughed off. Ready? Instead of being afraid of cheating, reconsider what you are actually assessing. If what you are assessing can be easily produced by ChatGPT or another LLM, it is likely not asking your students to think deeply about the question being asked. If ChatGPT can spit out an essay based on a poorly written prompt from an 8th grader, are you really asking your students to use their analytical skills of knowledge gathering and interpretation? Consider other means of assessment as well. If essays aren’t doing it, is there an oral or dialectical means of analysis that would? My hunch is that yes, there probably is. That burden I’m placing on you to reevaluate what you’re assessing? Why don’t you ask ChatGPT or Claude to help you out - just see what they say.
There is one other thing to say here, and that is to remind students that turning in an essay generated by AI is plagiarism. Even though an LLM made it, submitting it still puts forward work that is not your own as if it were.
A Way Forward
It breaks my educational heart to see institutions banning any form of knowledge gathering, whether it’s books or websites or AI. I regularly see schools enact policies that put blanket bans on AI use by their students and on their campuses, and I’m often curious what the expected outcome is. In my view, all those schools are doing is forcing students to use AI without understanding it, without guidance, and without instruction. I remember Myspace and Facebook taking over my school when I was a student. Looking back, we can agree that the introduction and growth of social media forced our society to change dramatically. Do you remember any good educational guidance around social media use then? I certainly don’t - and look where social media has gotten us today. We are at a pivotal point in the “early-ish” days of AI, and this is a situation we have mishandled before. It is time for us to show students how to use AI in a way that makes it an effective tool for knowledge building. Why shouldn’t students use AI to help them organize thoughts? Dig deeper into subjects they don’t fully understand? Gather research in an area that’s new to them? Get feedback on a draft? The sooner we get ahead of teaching proper and useful AI use, the better.
You want some concrete examples? Here are some concrete examples:
Teach students how to fact-check AI outputs for accuracy
Teach students how to control outputs with proper prompting
Teach students how to use AI to brainstorm and organize thoughts
Teach students about the ethics of AI use and where it is being used in “the real world” (hint: everywhere)
We’re faced with a choice here, in this moment. We can be scared of AI and refuse to present it to our students as a tool, and they will go off and learn it on their own, with no guidance or guardrails. Or we can teach about it, use it, and help our students move forward with more wisdom. I know where I stand.
AI and I
As a little bonus, I thought I’d share with you how I use AI in my day-to-day. Generally, I go back and forth between ChatGPT and Claude. ChatGPT is great at helping me organize my thoughts and my research, especially when I get pulled into 30 different rabbit holes at once. Claude is excellent at providing feedback on drafts and helping me refine my language a little bit. ChatGPT is useful for me because of its extensive memory and ability to group threads into projects. For instance, when I’m working on a conceptual paper, I’ll create a new project folder and keep all of my chats related to it in there. That way, any response I get will remain relevant to that specific conceptual paper.

ChatGPT’s deep research feature has been a real boost lately. I’ll give you an example - I’ve been researching experiential education, reading a lot of early work by Dewey and some more recent work by David Kolb. One place where I knew I had a gap was in nature-play-based education, something that David Sobel writes about all the time. I know I have to do a deep dive into Sobel’s work, but for now I just needed a primer. ChatGPT’s deep research produced a solid primer on nature-play-based education, directed me toward where to start with Sobel’s work, and introduced me to three or four researchers whom I hadn’t heard of before. Two boxes checked - I got my primer and I got my path forward for the deep dive. I like to call AI my “second brain” sometimes, simply because it helps me keep these things organized and helps me find connections I don’t obviously see. Is it producing new knowledge? Absolutely not. Is it supporting me in producing new knowledge? For sure.