Nov 05, 2002
In a sense the terms "component" and "distributed" are holdovers from client-server architectures; they don't properly describe what is being done on the internet (even though the internet is in effect the ultimate client-server structure, it has more or less transcended itself). The closest terms to describing what we're looking for are "web services" and "semantic web," but these seem inadequate as well (or else why would the IT community be compelled to go back and drag out the componentization paradigm again?).
We are in a state of transition: the previous vision of how this would work didn't work, and the new visions have not really gotten off the ground. On a more practical note, one of the greatest failings of the "distributed" approach was its overemphasis on application issues, leading to extreme complexity.
Such complexity has been conquered before, but only in strictly controlled environments (where performance was guaranteed, regulations were in place, etc.).
The e-learning repository network will have to be a lower-cost proposition by nature in order to succeed. The reason peer-to-peer music succeeded was its architectural simplicity - simpler by a factor of ten or more than what we're looking at.
It's a very tough call. I still think we are a step or two away from a viable conceptual architecture for distributed repositories. I don't think those steps are big - we just haven't recognized the breakthroughs yet...
I think he raises serious issues. But they are issues with the potential for a good response.
It would be useful to obtain an analysis of why previous efforts at componentization (is that a word?) failed, and indeed why (and in what way) the internet is growing beyond itself.
From my perspective, the internet - and web - is itself the clearest example of the success of componentization.
Part of my thinking is derived from imagining what the web would have looked like had we taken the enterprise system approach. On such a model, in order to have web access, you would have to buy a large content management system, download 'the web' (or those components of the web to which you have licensed access), store it on your own system, and access it in-house.
Obviously, the web could not have been successful without the development of some very simple components: a way of separating the browsing functionality from the web server functionality.
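To see just how thin that separation is, here is a minimal sketch (in Python, purely for illustration): a 'browser' needs nothing more than the shared HTTP component, and knows nothing about how the server stores or generates the page.

    # A minimal 'browser': the only shared component is HTTP itself.
    # The client needs no knowledge of how the server stores its content.
    import urllib.request

    def browse(url: str) -> str:
        """Fetch a page; the server's internals are invisible to us."""
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8", errors="replace")

    print(browse("http://example.com/")[:200])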
I agree that the cornerstone of componentization is simplicity. It is not clear to me that peer-to-peer as it has evolved has achieved that level of simplicity. The mess in instant messaging, for example, illustrates what can go wrong. That is why I advocate standards tolerance, and that is why I advocate a decentralized system - so you cannot have an AOL, say, denying access to the network by competing technologies.
I believe that simplicity is obtained by taking a lot of what was previously believed to be network properties (or enterprise system properties, on the other view) and incorporating them into the objects themselves. Build complexity into learning objects, not into the system that transports them.
For example, most (all?) LMSs and LCMSs have built-in discussion board (or chat) tools. There is no reason for this. Access to a discussion list is a service, the functionality (or front-end) for which can be encoded into a learning object (perhaps not on the current definition of 'learning object', but I refuse to be dragged into such digressions).
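As a sketch of what that might look like - the structure and field names here are my own invention, not any standard - the learning object carries a pointer to the discussion service, and any viewer that understands the pointer can attach its own front-end; the LMS itself needs no built-in discussion tool:

    # Hypothetical sketch: a learning object that references a discussion
    # service instead of relying on one built into the LMS. All field
    # names here are invented for illustration, not from any spec.
    learning_object = {
        "id": "lo-1234",
        "title": "Introduction to Photosynthesis",
        "content_url": "http://repository.example.org/lo-1234/index.html",
        "services": [
            {
                "type": "discussion",
                # Any client that understands this service type can
                # attach its own front-end; the transport network and
                # the LMS stay ignorant of discussion boards entirely.
                "endpoint": "http://talk.example.org/boards/photosynthesis",
            }
        ],
    }

    def attach_services(lo: dict) -> None:
        """A viewer resolves each declared service with its own handler."""
        for service in lo.get("services", []):
            print(f"Rendering {service['type']} front-end for {service['endpoint']}")

    attach_services(learning_object)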
Building too much functionality into the network itself - rather than into the objects transported over the network - is what leads to the creation of alternative competing networks, and a fragmentation of the system. It would be like building streaming media into the definition of the HTTP protocol - then we would have one web compatible only with MS Media Player and a completely different web for Real Networks users.
But by building functionality into a subsystem that can be transported over the network (aka plug-ins), this sort of fragmentation can be avoided. Oh sure, you still have compatibility issues (I have *never* made QuickTime work properly, for example), but at least the network as a whole isn't broken.
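Here is a rough sketch of the plug-in idea, with all the names invented for illustration: the network just moves typed content, rendering is delegated to whatever handlers are installed locally, and an unknown type degrades gracefully instead of breaking the network for everyone.

    # Hypothetical plug-in registry: the network moves typed payloads;
    # rendering is delegated to whatever handler is installed locally.
    handlers = {}

    def register(content_type: str):
        """Register a rendering handler (a 'plug-in') for one content type."""
        def decorator(func):
            handlers[content_type] = func
            return func
        return decorator

    @register("text/html")
    def render_html(payload: bytes) -> None:
        print("rendering HTML:", payload[:40])

    @register("video/quicktime")
    def render_quicktime(payload: bytes) -> None:
        print("handing off to the QuickTime plug-in")

    def dispatch(content_type: str, payload: bytes) -> None:
        handler = handlers.get(content_type)
        if handler is None:
            # An unknown type is a local inconvenience, not a broken web.
            print(f"no plug-in for {content_type}; offering download instead")
        else:
            handler(payload)

    dispatch("text/html", b"<html><body>hello</body></html>")
    dispatch("audio/real", b"...")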
Semantic web, web services - yes. These are names for it, implementations of the concept. They haven't really taken off because we lack the simple components that allow people to use them productively. There is no 'web services browser' - nor would one make sense, under the current implementation. There is no 'semantic web browser' (Amaya notwithstanding).
That's why we need a simple tool for e-learning access - a simple API or protocol that can be easily adapted by developers and that provides people with a view of the entire learning object sphere. Until such a thing exists, the whole field of learning objects is stillborn: a great idea that nobody could use.
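What might such an API look like? Purely as a sketch - every URL, parameter, and field below is invented, and a real version of the day would likely speak XML rather than JSON - it could be as plain as an HTTP query that returns a list of learning object records:

    # Purely illustrative: a minimal 'learning object query' protocol.
    # The endpoint, parameters, and record fields are all invented here.
    import json
    import urllib.parse
    import urllib.request

    def search_repository(base_url: str, query: str) -> list:
        """Query a repository for learning objects matching some keywords."""
        url = base_url + "?" + urllib.parse.urlencode({"q": query})
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read())

    # A browser for the whole learning object sphere is then just this
    # call fanned out across every repository it knows about:
    # results = search_repository("http://repo.example.org/search", "photosynthesis")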
But you don't get a simple browser without componentization. Which, since I think learning objects really are a good idea, is why we need componentization.
My own personal view (and of course there remains every possibility that I may be wrong) is that the breakthroughs we need are these:
- multiple instances of essential components, to avoid bottlenecks
- standards tolerance, to ensure interoperability (sketched just after this list)
- smart learning objects, to ensure simplicity
- browsing tools, to ensure usability
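To make 'standards tolerance' a little more concrete, here is a minimal sketch, with every format and field name simplified beyond recognition (these are stand-ins, not real Dublin Core or IEEE LOM mappings): a harvester that accepts whichever metadata dialect a repository speaks, rather than rejecting everything outside one blessed schema.

    # Sketch of standards tolerance: normalize whichever metadata
    # dialect a record arrives in. Field names are simplified stand-ins
    # for real schemas (Dublin Core, IEEE LOM, etc.), not actual mappings.
    def normalize(record: dict) -> dict:
        """Map any recognized dialect onto one internal shape."""
        if "dc:title" in record:                  # Dublin-Core-ish record
            return {"title": record["dc:title"], "url": record["dc:identifier"]}
        if "general" in record:                   # LOM-ish record
            return {"title": record["general"]["title"],
                    "url": record["technical"]["location"]}
        if "title" in record:                     # plain record
            return {"title": record["title"], "url": record.get("url", "")}
        raise ValueError("unrecognized dialect; log it, don't crash the network")

    print(normalize({"dc:title": "Cell Biology 101",
                     "dc:identifier": "http://repo.example.org/lo-99"}))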
The rest of it - the layers of third-party metadata, of DRM, of personal information systems, etc. - is there to enable increased functionality. These are layered onto the basic system, much in the way images and later streaming media were layered onto the web.
Anyhow, that's my view. That's how I see the infrastructure unfolding. And I would be very surprised to see it unfold in a substantially different way, at least over the long term.