Wednesday, December 05, 2007

Deep Learning

With 20-minute modules, fast-paced scenarios, multiple-choice questions (branched or direct), scant text and big images dominating the e-learning landscape, is it possible for learners accessing these programs to learn anything in depth?

The dominant view in e-learning has always been to make learning easy and fun, and never to make it difficult. However, learning anything in depth is a struggle, because depth is acquired not just through retention but by attempting to get to the core of things and making associations between different types of knowledge. You achieve a certain degree of success as a learner when what seemed complex to you in the beginning appears relatively clear. This is almost like an adventurer landing in a strange country and slowly working his way towards making it his own. In the beginning, everything seems strange—people, places, language and customs. Slowly, steadfastly, he struggles with this “content” until he becomes a part of it. He doesn’t have the luxury of first learning the language, then the people, then the customs, and so on in well-chunked modules.

So it is with deep learning. It means wading through a large volume of content (in whatever form—text, graphs, lectures or films), spending long hours distilling other people’s thoughts through interaction and reflection, and articulating those thoughts in one’s own terms. A self-paced e-learning program, even when it is a simulation, usually does not allow for any of this.

Interaction with ideas, real people, real things and uncertainty is key to deep learning. So, how can e-learning promote deep learning? Does the answer lie in asynchronous e-learning that makes use of all the elements of classroom instruction: lectures, discussions, assignments and projects? Or, is Google the answer? Or learning communities sharing links and conversation? Or, is the answer yet to come?

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.