
Let’s get specific
Last week marked the launch of a series of reports commissioned by the N-Tutorr project for the National Digital Leadership Network in Ireland. They’re an excellent set of reports, offering both high-level and practical insights. I’ll look at some of the other reports in a later post, but here I want to draw out an element of my own report.
My report acted as something of an overview to several others, which then took a deeper dive. I looked at five trends relating to new models of teaching through technology: Hybrid and Blended Modes for Learning and Working, Microcredentials, Generative AI, Extended Reality, and Adaptive and Personalised Learning. I tried to offer a balanced view on these (i.e. less ranty and opinionated than this blog) and provide useful ways of framing their implementation in higher ed.
One message, particularly relating to AI, that I wanted to convey was “Don’t Panic!”. While it is having an impact, and policies need to be drawn up relating to its use, there is _always_ a tendency to overstate the impact of technology on a sector. Panic is part of the business model – tell the sector it has to get on board, completely and immediately, or else it will be obsolete in 5 years. There is no evidence that the panic model has been a beneficial way for higher education to progress in its use of educational technology over the past 30 years. I know the cry is always “but this time, it’s really different!”. Yeah, tell that to your non-fungible token artwork collection. I didn’t quite put it in those terms in the report, but I did emphasise the importance of considered implementation.
Related to this, I proposed a continuum on which we can consider a technology as having Specific or General application. I suggested the five approaches in this report might be placed thus:

Thinking further on this, I think part of the problem with ed tech is that the industry always wants a General application. They have been raised on a diet of “Disruption” blather and “Revolution” nonsense. For disruption to occur, it has to have impact across the sector. But within higher ed, we’re more often concerned with Specific applications: this technology is useful for this discipline to teach this particular element. Differentiating between General and Specific intention is a useful way to approach ed tech and avoid much of the froth.
For instance, like a lot of people, I have found AI really useful for generating code. I don’t know much about writing Python, but I was playing around with it for my hockey analytics blog. The code that Claude gave me was really good (I think). For this use case, there is no way that educators in computer science won’t be using AI as part of their toolkit (yes, I’m ignoring the bigger-picture stuff for a moment here). But you still need to understand what the code is doing, how it works with other elements, when it needs debugging, etc. The need for education doesn’t disappear here; it shifts to accommodate this tool.
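To give a flavour, here is a minimal sketch of the kind of snippet an AI assistant might hand back for a hockey analytics question (the data, names and function are invented for illustration, not the actual code from my blog) – and even something this small repays understanding how it works:

```python
# Illustrative example: computing each player's shooting percentage
# from a list of (player, scored?) shot events. All data are invented.
from collections import defaultdict

def shooting_percentages(shots):
    """Return {player: shooting % (goals / shots * 100)}."""
    attempts = defaultdict(int)
    goals = defaultdict(int)
    for player, is_goal in shots:
        attempts[player] += 1
        if is_goal:
            goals[player] += 1
    # Round to one decimal place, as stats sites typically display it
    return {p: round(100 * goals[p] / attempts[p], 1) for p in attempts}

events = [("Smith", True), ("Smith", False), ("Smith", False), ("Jones", True)]
print(shooting_percentages(events))  # {'Smith': 33.3, 'Jones': 100.0}
```

The generated code runs fine, but you still need enough grounding to spot, say, that a player with zero shots would cause a division error, or to judge whether rounding is appropriate – which is exactly where the educational need shifts rather than disappears.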
Similarly, VR and XR have long been trumpeted as a General solution in higher ed, from all those Second Life campuses to Meta insisting that this time it’s really going to happen. But in education it is much more useful to consider them at the Specific application level: there are applications in engineering or medicine, without demanding the technology replaces everything.
I’ve been saying versions of this for years on here, so I apologise if this is just a reiteration of old themes. It’s not quite as simple as “Specific = good, General = bad”, but that’s probably not a bad starting point if you’re feeling discombobulated by the continual barrage of ed tech media hype (which always operates at the General level) yet also feel there is something useful you can be doing for your students here.


2 Comments
Alan Levine
An interesting perspective, as usual. I wonder, though, how much specificity can be designed into a “tool” (which, you know, calling all that is under AI a single tool might be problematic). I was reaching for a metaphor in my toolbox, but left it. The specifics that arise in education come from the people applying it, so is it more about how much a tool has the potential to allow an individual to get there?
Or consider that AR and XR also have some requirements of participation (hardware).
Still, the wisdom I leave with is “The need for education doesn’t disappear here, it shifts to accommodate this tool.” I’m glancing at your hockey blog and having no clue what you are talking about; but you have not only hockey “education” but some motivation too, and enough to judge whether Claude’s code is returning something meaningful, rather than needing to know how it’s doing it.
Much to noodle, you still got game, Weller
and thankfully you still have comments open!
Shamsher Haider
Great read! As someone from a software/ML background interested in AI for personalisation, assessment, accessibility, and analytics, I often see that push towards generic solutions. Your report’s emphasis on specific applications over generic ones really hits home. As someone who has often tried using Claude 3.7 for coding, it’s very powerful for Python (visibly much less so for .NET etc.), but for me the human effort needed to refine and rectify the output to make it usable still feels very high for more complex projects, at least for now (though I believe this could change very rapidly in the next couple of years).