Important: Translated automatically from Spanish by 🌐💬 Aphra 1.0.0
It seems unbelievable that a year has passed since the last time I wrote a new article. And yet, here we are.
The truth is, I don’t like coming back to write about the same topic as last time. Artificial Intelligence has all but monopolized everything around us, and I’m not thrilled about it monopolizing this personal space as well. At the same time, I feel the need to talk about it, because I’m convinced that reflecting on the uses of this technology is urgent and essential. A moral obligation for society as a whole and, raised to a higher power, for those of us who teach. If you, dear reader, have made it this far and are absolutely sick of reading about this topic again, I apologize and encourage you to continue your journey through other links. I don’t think I’ll write anything interesting anyway.
The thing is, I don’t believe in predictions. These days, with so many factors at play, anyone who tries to convince you that something specific is going to happen is probably scamming you in some way, even if it’s only to keep your attention. So take this as nothing more than an opinion: we are on the verge of a sweeping educational revolution, far more radical and rapid than the one the arrival of the Internet brought. The signs are there for anyone who stops to look at them. I don’t know how it will end, or even whether it will end; there are too many parallel processes advancing without pause, so the number of possible ramifications is endless. But I do firmly believe that we need to arm ourselves to face it. And knowledge is, as always, our best weapon.
When we got our first personal computer, mobile phone, or smartphone, we didn’t need to know how they worked; we only had to learn how to use them. There was no need to know how RAM was managed internally, or that a processor was in charge of turning all our actions and their responses into simple mathematical operations. Making responsible use of an LLM (Large Language Model, an advanced AI system)1, on the other hand, takes more than knowing how to open a web page and start typing. Can you do it that way? You can. But it can be dangerous. Not dangerous in the Terminator sense. Dangerous because we can fall, with equal probability, into two common traps: underestimating their capabilities… And overestimating them.
If we underestimate them, we’ll believe they’re not useful for anything and we won’t worry about their impact. And we’ll get distracted… Until one day there’s no time left to swim and the wave catches us. If we overestimate them, we’ll trust their answers excessively. Before long, we’ll stop reviewing them. And we’ll make ourselves irrelevant. Before we know it, we’ll have lost all critical sense. Like someone who spends a whole year without exercising, we’ll find rehabilitation hard. In some cases, irreparable.
How do I know that a vast majority doesn’t understand how a language model works? Because a poorly answered question recently went viral and is being used to judge “how unintelligent” these models are. If you know how they work, merely asking the question seems absurd, because it leads to no discussion or conclusion of any kind. So test yourself: could you explain this ChatGPT response and why it makes no sense to pose that question to these models? If you think not, I advise you to read this article. And if you’re after a more technical resource, this video is pure gold. We cannot face this, our greatest ally or our worst enemy, without understanding it.
Because an LLM can be the most useful tool for learning ever invented and, at the same time, the biggest barrier we have ever encountered. This is the paradox we face. Our disadvantage from the start is that learning requires conscious effort. Our brains are optimized to minimize that effort, and this technology lets us sidestep it, handing us a valid result (or one that merely looks valid) while sparing us that unpleasant sensation of mental gymnastics. There have been several experiments in which different groups were set up to carry out certain tasks: typically one group with no access to any LLM, another with access to an LLM like GPT-3, and a third with access to GPT-4. The conclusions of these studies tend to be similar: the speed at which tasks are completed increases when an LLM is available, and the more powerful, the better. It is worth noting that, on many occasions, the group with access to the most powerful LLM finished the task sooner… But with errors. Because it’s common to overestimate their capabilities and stop checking whether the answer is correct. The educational field is more worrying still. When an LLM is used for learning, the group with access to the most powerful model usually forgets faster. They learn less… Or nothing. Again, learning requires effort. And having things done for us with the least possible effort is an irresistible treat that only the most conscientious minds will resist chewing.
Over the summer I read a couple of books on the subject that made me reflect on the application of Artificial Intelligence in education: “Co-Intelligence: Living and Working with AI” by Ethan Mollick and “Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing)” by Salman Khan. In the first you can find a much better developed version of some of the ideas in the previous paragraph. The second, although at many points it is clearly a marketing brochure for a product, gives a very clear idea of the positive impact a well-directed language model can have on the teaching-learning process. Khan and his team had the privilege of accessing a model like GPT-4 several months before its existence was made public. It’s interesting to read how they realized the destructive potential this technology could have on their own business, and how they turned that danger into a competitive advantage: Khanmigo2, available for free to all teachers in the United States. Someday I’ll talk about the impact, in a globalized and competitive world, of certain geographic areas having exclusive access to certain tools.
Midway through the school year, I came across this article, which got me thinking about how LLM technology can be combined, through different calls and capabilities, to build what we might call “complex systems” or “workflows”. The point is that they make it possible to automate functionalities that until now were unthinkable without human intervention, while also compensating for some of the limitations of current language models. What happens if we make an LLM reflect on the quality of its own response and produce a new one that takes that critique into account? These models also make mistakes when writing programming code, but… What happens if I let the system run tests against the generated code and feed the results back in to produce a second version that corrects those errors? The possible strategies, combined with how the wording of the prompt influences each case and model, are endless. From there I started running my own experiments. For example, TEAgpt3, a system designed to adapt the wording of a task for students with ASD, even searching certain sources for appropriate pictograms and inserting them into the document. I also took advantage of the eurekIA4 event to explore, with a team, a system we called MEVALUA5. This application gives students feedback before they submit a piece of work, so they can check that it meets the requirements set by their teachers. That feedback includes comments on how they could improve the submission, but never reveals the specific solution; it encourages the student to discover it. The goal was formative assessment that is truly effective because it is instant and personalized. Finally, with the aim of building something genuinely stable and reusable, Aphra6 was born. Thanks to that system, this blog is also available in English. If you’re interested in how it was built at a more technical level, you can check the devlog.
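To make the critique loop a little more concrete, here is a minimal sketch in Python. It is not the code behind any of the systems mentioned above: `call_llm` is a placeholder for whichever model client you happen to use, and the prompts are purely illustrative.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: plug in your own client here (an API, a local model, etc.)."""
    raise NotImplementedError


def answer_with_self_critique(task: str, rounds: int = 2) -> str:
    """Generate an answer, then repeatedly ask the model to critique and revise it."""
    answer = call_llm(f"Complete the following task:\n\n{task}")
    for _ in range(rounds):
        # First call: review the current answer against the task.
        critique = call_llm(
            "Point out errors or omissions in this answer with respect to the task.\n\n"
            f"Task: {task}\n\nAnswer: {answer}"
        )
        # Second call: rewrite the answer taking the critique into account.
        answer = call_llm(
            "Rewrite the answer so that it addresses the critique.\n\n"
            f"Task: {task}\n\nAnswer: {answer}\n\nCritique: {critique}"
        )
    return answer
```

The same pattern extends to the code-generation case: instead of asking the model for a critique, you run the generated code against a test suite and feed the failures back in for the next revision.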
Doing these tests made it clear to me that chatbot-style interfaces, where we interact directly with a raw language model, are destined to become a niche. In practice it makes no sense to keep supplying all the context we need, or to steer the conversation with the same strategies over and over again to reach the result we want. No, these “complex systems” will be integrated directly into the applications we use, into our smartphones, into our operating systems, until the LLM is just one more component of the “intelligent system”.
Perhaps these reflections have convinced you that we are on the verge of a revolution, or perhaps not. But there’s something I haven’t told you. All these “predictions”, for lack of a better word, start from the premise that the technology does not evolve any further. That is, all of this can be assembled from what has already been invented; there are still plenty of systems to develop and integrate, but no new knowledge is needed for it. However, the rumors are growing stronger that the year will not end without a significant new iteration, one that takes LLMs to a new level and has them being called “reasoners”. And that iteration could be dropped trivially into the “complex systems” already built, replacing the previous LLMs and unlocking a whole new range of possibilities.
Now that a new school year is starting, we have a fresh opportunity to arm ourselves, and not only ourselves, but also those around us. Our faculty. Our family. Our friends. Only then will we reduce the number of casualties. Share, discuss, plan, learn, demonstrate, encourage, persevere. Don’t lose your curiosity and keep an eye open. But don’t forget to blink to keep it hydrated. Rest is also important, and very human.
Here’s to a Great New School Year.
PS: No language model was harmed in the writing of this article… But one was harmed in generating the image that illustrates it.
1. LLM stands for Large Language Model, an advanced AI system capable of understanding and generating human-like text. ↩︎
2. Khanmigo is an AI-powered tutoring tool developed by Khan Academy, founded by Salman Khan. ↩︎
3. TEAgpt is a system designed to adapt tasks for students with Autism Spectrum Disorder (ASD), using pictograms. ↩︎
4. eurekIA is a wordplay combining “eureka” (discovery) with “IA” (the Spanish acronym for Artificial Intelligence). ↩︎
5. MEVALUA is an AI system for providing instant, personalized feedback to students before submitting work. ↩︎
6. Aphra is an AI translation system developed by the author of this blog. ↩︎