Monday, 2 March 2015

Hygiene factors: Using VLE minimum standards to avoid student dissatisfaction (publication)

Advertisement for Great Expectations in All the Year Round.
It's been a while since I blogged about the work we've done around student expectations of technology and VLE minimum standards/baselines.

To summarise, I conducted a staff and student survey to gauge opinions and experiences across a range of areas related to technology in learning and teaching, and in particular the introduction of a baseline for the VLE. I canvassed the community to see what others were doing in this area (something @philvincent has recently picked up); compared staff and student responses to my questionnaire; and shared how we are automating some of our baseline content. The ELESIG Small Grant Scheme also helped me along the way.

Looking at the data brought back some earlier discussions with Mark Stubbs and Neil Ringan from my time at MMU, and I began to apply Herzberg's notion of Hygiene Factors to minimum standards - some of the more basic 'things' can prevent dissatisfaction, but won't necessarily cause satisfaction.

Well, having presented on this a couple of times, I've now had an article published with a colleague, Simon Watmough, in E-Learning and Digital Media. It's available through their OnlineFirst page, where articles appear immediately ahead of print. Simon has done a lot of work analysing our NSS results, so we've managed to integrate some of that into the article.

I've been planning to run some focus groups with students to really pick over the bones of this a bit more - what do students want; why; how do they access it, and where from? Hopefully we'll get these going properly soon and have much more to write about. For now, though, here's the abstract to the paper - feel free to go and access the full-text version...

Inconsistency in the use of Virtual Learning Environments (VLEs) has led to dissatisfaction amongst students and is an issue across the Higher Education sector. This paper outlines research undertaken in one faculty within one university to ascertain staff and student views on minimum standards within the VLE; how the VLE could reduce student dissatisfaction; and to propose a conceptual framework surrounding student satisfaction with the VLE.
A questionnaire was sent to staff and students asking if they agreed with the need to introduce minimum standards in the VLE and what criteria they wanted. The National Student Survey (NSS) results were analysed for six schools within the faculty over a 4-year period. Many of the NSS results were relevant to developing minimum standards within the VLE.
The questionnaire results showed the vast majority of staff and students favour the introduction of minimum standards and identified specific items that should be included, for example handbooks, contact information for staff, access to previous modules, assessment information, further reading, etc. The NSS data showed that students wanted lectures available in the VLE, improved feedback, more computers for students and information about cancelled sessions/timetable changes in the VLE.
The results suggest the presence of many minimum standards may reduce student dissatisfaction with the VLE. However, a distinction is made between those pre-potent factors that cause dissatisfaction and those that lead to satisfaction, using Herzberg’s Two-Factor Theory as a theoretical basis.
When considering minimum standards as ‘hygiene factors’, their presence can prevent student dissatisfaction and provide the foundations for innovation in technology-enhanced learning.


Creative Commons License
The Reed Diaries by Peter Reed is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License

Tuesday, 24 February 2015

Difficulties in Developing Online Learning

iPad image CC BY Flickr user Official GDC
I've been thinking about the difficulties in developing online learning for a while, and a few months back I questioned what innovation in online learning actually looked like. Well, good old David Hopkins has stirred those thoughts at a very timely point for me. Although he discusses learner engagement in MOOCs, I'm trying to transfer good practice to a number of completely online, credit-bearing modules at Liverpool. And if MOOCs aren't the innovative solution to online learning we thought they were, what is the answer, and how do we apply it to our formal taught modules?

Many of the modules/programmes I'm currently looking at are aimed at full time employees in various healthcare settings. For example we have an Acute Oncology module which attracts Clinical Nurse Specialists (CNS), Registrars, etc; and we also have a Transplant Science module, which has recruited transplant surgeons and nephrologists from all over the world.

But as someone who is involved in so many different discussions about innovation in learning and teaching, I'm stuck when I think about how these modules could be truly classed as innovative (with existing resources, of course). The theoretical models are all too familiar (e.g. Laurillard and Salmon), but in practice this translates into a combination of: some form of content delivery (a recorded lecture of some kind); further reading; a quiz; and a discussion forum.

That's not really 'all that', is it? I'm toying with integrating more visuals, interactive scenarios, etc., to really factor in some of the multimedia learning theory (I've covered Mayer's work earlier), but I'd love to know what other people think about this, and even what they do when building online courses, MOOCs, etc. There are innovative solutions to open, online CPD (through experimenting with pedagogies and technologies), but I often find that university QA processes aren't too forgiving of things like that. They tend to like things that can be held accountable - solid learning outcomes, definitive schedules and predetermined assessment strategies.

So how can we innovate? Or do these traditional institutional processes hamper our ability to do so?



Wednesday, 18 February 2015

on Mayer's Multimedia Learning...

As I'm currently working on developing some online modules, I thought I'd put this post out, which is actually something I wrote about 3 or 4 years ago introducing and critiquing some multimedia learning theory.

'Instructional development is too often based on what computers can do rather than on a research-based theory of how students learn with technology’
The quote above comes from Richard Mayer, discussing Multimedia Learning: someone widely cited in eLearning publications, and whose work I have wanted to read in more depth for a long time. Having done so in recent weeks, I have had various thoughts about his Multimedia Learning and Generative Theory...

Some background info

Mayer advocates cognitive approaches to learning, and identifies Dual Channels for information processing in humans - a visual channel (to process images, animations, etc.) and a verbal channel (to process written and spoken words, etc.). The cognitive phases are: Selecting information for processing by the dual channels; Organising verbal and visual representations; and finally Integrating, or building connections between, verbal and pictorial models and prior knowledge.

His research supports this notion, suggesting that targeting both channels can increase learning (defined as applying creative solutions to problem solving) by 50%. In developing multimedia resources for learners, Mayer suggests three critical principles for reducing cognitive load, accommodating humans' limited capacity for information processing, and encouraging effective learning and knowledge construction:

Spatial contiguity: learning is more effective when words and images are presented close together (a bit Gestalt-ey);
Temporal contiguity: learning is more effective when words and pictures are presented simultaneously;
Modality: learning is more effective when verbal information is presented aurally alongside pictures, rather than as on-screen text with pictures. However, if there are no images/animations, text with audio narration can still target the dual channels.
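As a toy illustration of how the spatial contiguity principle might play out in practice when building web-based materials, here's a small sketch. None of this comes from Mayer himself - the function name, labels and markup are all hypothetical - it just contrasts labels placed next to the image they describe with the common anti-pattern of a separate legend elsewhere on the page:

```python
# Hypothetical sketch: applying spatial contiguity when generating HTML
# for an annotated diagram. The function and labels are illustrative only.

def build_figure(image_src, labels, contiguous=True):
    """Return an HTML fragment for an annotated image.

    contiguous=True puts the labels in a <figcaption> directly beneath
    the image (words near the picture they describe). contiguous=False
    mimics a separated legend, forcing the learner to hold one element
    in memory while searching for the other - extra cognitive load.
    """
    caption = " | ".join(labels)
    if contiguous:
        return (f"<figure><img src='{image_src}' alt='{caption}'>"
                f"<figcaption>{caption}</figcaption></figure>")
    # Separated layout: image and explanatory text live apart.
    return (f"<img src='{image_src}' alt=''>"
            f"<p class='legend'>{caption}</p>")

# Mayer's bicycle-pump example, with labels kept close to the image:
html = build_figure("bike_pump.png", ["piston", "valve", "air flow"])
print(html)
```

The point is only that the principle can be a concrete design decision - where a label sits in the markup - rather than an abstract ideal.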

This is certainly interesting for anyone developing online/multimedia learning materials, and something I will personally consider more thoughtfully. However.....

Concern 1: Classroom teaching 

In 'The Promise of Multimedia Learning', Mayer suggests that multimedia learning is more effective than classroom learning, characterising the latter as a 'single-medium' presentation; that is, one relying solely on words - the verbal channel.

However, I can't help but feel this is a misguided representation of classroom learning - it doesn't account for the variety of innovative approaches that can be utilised within the classroom. Social, experiential, problem-based, and technology-enhanced approaches can all make for effective learning experiences in the classroom. For example, in learning the workings of a bicycle pump, Mayer suggests descriptive images/animations supported with audio are most effective, but he doesn't consider the potential learning experiences if students could actually use the pump in real life, and dismantle it to see the inner workings.

Whilst I advocate multimedia approaches, I think it is important that educators don't get carried away with suggestions like this, and do actually challenge pre-existing presumptions about student learning. For as great as it is, multimedia (and eLearning in general) is not a panacea for every learner's (and learning) problem!

Concern 2: Learning in isolation

Such multimedia 'packages' suggest we learn in isolation, i.e. alone. Does Mayer recognise the importance and potential of social learning? Or does he refute it? Embedding such content within a VLE can offer a range of social possibilities through discussion forums, chat and web conferencing, helping learners share experience and construct knowledge and meaning.

Concern 3: Subject matter

Mayer suggests:
‘Contiguous presentation of visual and verbal material may be most important when the material is a cause-and-effect explanation of a simple system, when the learners are inexperienced, and when the goal is meaningful learning’
So this raises questions of transferability. Will following his principles in other situations, such as learning about 19th-century literature, produce the same outcomes? What if there are no cause-and-effect explanations to draw upon?

Concern 4: Freedom

In Clark and Mayer (2011), the authors suggest:
"Because the metaphor of the Internet is high learner control, allowing learners to search, locate, and peruse thousands of Internet sites, a tempting pitfall is to create highly exploratory learning environments that give learners an unrestricted license to navigate and piece together their own unique learning experiences. One lesson we have learned from over fifty years of research on discovery learning is that it rarely works."

But this flies in the face of much current thinking around encouraging learners to search, find, review and select appropriate information. Michael Wesch is a popular figure advocating such skills, and a diverse body of research into tools such as Second Life, and into digital literacies, would equally encourage such discovery approaches.

Should we spoon-feed our students, or provide a structure that enables them to solve problems and find certain things out for themselves? They won't be spoon-fed in the world of work, so failing to prepare them here is preparing them to fail in the real world!


I think the essence of Mayer's work stands true, and does have a place in education today. For example, I see the development of OERs as a clear area that could benefit from insight into Mayer's research, taking note of contiguity effects to reduce cognitive load. Ultimately, however, these objects might be repurposed and placed alongside other materials and activities to encourage a more holistic learning experience.

In relation to OERs (and that will take its own post completely at some point), Windle et al. (2011) suggest learners value self-assessment, self-paced learning, and use for revision of 'difficult' areas - together, then, we can obtain a clear picture or framework for developing reusable content.

I'd love to hear your thoughts on the above, so please get in touch either in the comments, by email or on twitter.



Clark, R. C., & Mayer, R. E. (2011) e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning (3rd ed.). San Francisco: Pfeiffer.

Mayer, R. E., & Gallini, J. K. (1990) When is an illustration worth ten thousand words? Journal of Educational Psychology, 82(4), 715-726. doi:10.1037//0022-0663.82.4.715

Mayer, R. E. (1997) Multimedia learning: Are we asking the right questions? Educational Psychologist, 32(1), 1-19.

Mayer, R. E. (2003) The promise of multimedia learning: using the same instructional design methods across different media. Learning and Instruction, 13(2), 125-139. doi:10.1016/S0959-4752(02)00016-6

Mayer, R. E., & Moreno, R. (2003) Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.

Windle, R. J., McCormick, D., Dandrea, J., & Wharrad, H. (2011). The characteristics of reusable learning objects that enhance learning: A case-study in health-science education. British Journal of Educational Technology, 42(5), 811-823. doi:10.1111/j.1467-8535.2010.01108.x


Monday, 16 February 2015

Research Skills: My use of Google Scholar & Mendeley #phdchat

I've been asked to present at our Staff-Student Digital Literacies (LearnIT) event this week on the theme of Digital Research. I don't consider myself particularly good in this area, but I'm happy to share my experience.

There are many workflows and programmes out there and lots of people will disagree on what works best for them. So this just happens to be what works well for me.

I've recorded a short screencast on this, but essentially my workflow uses Google Scholar to identify articles - sometimes via the metrics option, but also through general searches and viewing academics' profiles for published work (the video only covers metrics, though). Then I'll either use the Mendeley import tool or just download the article, drag and drop it into Mendeley, and annotate away. It's a godsend that Mendeley automatically recognises the journal citation details (author, date, volume, etc.) - well, 99% of the time it does, anyway.

Then in Word I'll use the Mendeley plugin to insert citations and add my bibliography. It's super easy to use, which is why I like it so much, and as long as you use Mendeley for all your articles, you literally can't go wrong. I love it and use it for any writing project I'm working on. I suspect it will be my best friend as I embark on PhD studies.

There are lots of other features of both Scholar and Mendeley that I haven't mentioned - the Mendeley groups for example, can be a good way to learn more about work in specific areas. Oh, and the iPad app works like a charm if you want to do your reading/annotating that way.

Anyway, take a peek at my video below and/or head over to Mendeley to grab it for free yourself. Oh, and I'd love to see how other people manage similar processes, so why not blog your own workflow or just leave a comment below.



Monday, 9 February 2015

User input & interaction - from the mouse to gesture-based control

Engelbart's Mouse Prototype
For many years, the relative genius of the mouse as an input device seemed truly innovative. It enabled us to interact with iconography on our desktops, transforming the personal computer from something a bit geeky into something that, through some strange metaphor-inspired icons, would be usable by millions of people across the world. Importantly, the metaphors of the user interface - the desktop, trash can, folders and files - went hand in hand with its success in helping users understand, and move from, the analogue world to a digital one.

The mouse has quite an interesting history, from its early prototypes by Douglas Engelbart (who named it), through the Xerox Alto, to the Apple Lisa (a decent-enough history is available on Wikipedia). It has ruled as the main input device ever since, and has seen many slight variations in its design. But it's still a mouse.

Of course, Apple really blew our minds when they introduced the iPhone and its touch screen, which has ultimately led to almost all smartphones being touch screen, the introduction of a viable tablet device, and even touch-screen laptops and desktop PCs. To a large degree, controlling our devices with the touch of a finger, a pinch-and-zoom, or a swipe has become second nature to many of us. In fact, for some young thundercats, it's all they've ever really known.

Leap Motion | CC BY Flickr user David Berkowitz
It wasn't until around 2010 that we saw Leap Motion - a gesture-based controller enabling us to wave our hands in front of the screen to do what we would typically use a mouse for. It was all a bit exciting and Minority Report-like. I still find it fascinating how that movie is seen as almost the goal of technological development.

The Leap is now quite affordable (about £60 on Amazon), but for one reason or another I think it's still seen as a gimmicky device that hasn't impacted computing, or even educational technology, quite as much as I thought it might. Which is a shame.

The latest gesture-based interaction is something I've just come across (and which actually inspired me to write this post) - an app called ControlAir. Well, actually, it's the webcam that is the device; the clever software does the donkey work. It enables the user (on a Mac) to control certain apps (iTunes, Spotify, etc.) with gestures. For example, to mute the volume, bring your index finger up to your mouth (as though you're telling it to shush!). Genius. To raise/lower the volume or skip back and forward, raise your index finger and 'click' it in mid-air. Check out the marketing clip below.

I downloaded this over the weekend and love it already. Yes, it's hugely gimmicky but offers an insight into how we might be controlling our devices in the future. It's easy to see how we might point and click to select items on a screen, pinch and zoom, fast forward through movies or swipe away. Apply this to scenarios enabling students to interact with virtual skeletons, muscles and organs and it's actually quite exciting.

Head over to the eyeSight website to see ControlAir.



Friday, 30 January 2015

Using 3D animations in teaching #ukoer

You know how, when you start a new job, you keep hearing about some great work taking place in parts of the University? Well, this post is about one such project - Dr. Anna O'Connor's 3D Eye Animations for the teaching of eye movement disorders in our Orthoptics department. What's even better is that they're openly available on the web - #UKOER.

Rather than me tell you about it, Anna and her students have provided me with some brief text...

From Anna...
3D concepts, such as eye movement disorders, are difficult to explain and visualise from 2D static images. That’s why we were very excited about the opportunity to work with the elearning unit to develop a series of eye animations showing eye movement disorders. These animations will be an invaluable resource, which students can access to support their learning at university, and to reinforce their learning while on clinical placement at hospitals around the UK. They are freely available for anyone interested, whether a student, patient, clinician or teacher. We started with normal eye movements which were a big hit, being accessed by over 2,000 people around the world in the first year. Now we want to let everyone know about the new developments. Huge thanks go to the elearning unit and Scott Dingwall for creating these amazing animations.

From the Students...

As Student Orthoptists, it is very important for us to know about the different conditions that affect the muscles of the eyes and the problems that can occur when these muscles stop working properly. When it comes to revision and studying, it can sometimes be quite difficult to visualise what happens to the eyes in certain conditions. These animations show how the six muscles around the eye (known as the extraocular muscles, or EOMs) act when certain conditions occur - such as Duane's retraction syndrome, Brown's syndrome and nerve palsies - where the innervation to the muscle is not working.

The eye animations really are the next best thing to having a real patient in front of you! They are quick and simple to use, and they allow you to see the action of the muscles in all nine positions of gaze in different conditions.

So there you are. Another great project taking place at Liverpool. I think it's wonderful that they've managed to put a development like this together on a limited budget - the skills required to develop these 3D animations are not everyday skills, so a massive pat on the back for Scott, as well as for Anna for having the vision (no pun intended) to see the value in this. And it's even nicer to see students appreciating it.

Feel free to head over to the eye animations to have a play and you can follow Anna on Twitter (@Drannoc).



Wednesday, 28 January 2015

If only there was a book about #edtech

Oh wait, there is....

The Really Useful #EdTechBook
It's called "The Really Useful #EdTechBook".

Some time ago, David Hopkins (@hopkinsdavid) came up with the wonderful idea of editing a book comprising chapters from lots of people engaged with #edtech on a daily basis, and I was honoured that he asked me to be one of those authors, along with the likes of +Sheila MacNeill, +David Walker, +Wayne Barry, +Sue Beckingham and +Sharon Flynn (amongst many others).

The writing of the book was interesting in itself, with authors dotted all over the place working in Google Docs and engaged in discussion with each other. David conducted short interviews with each of us, and at the time of mine he was shacked up in a hotel room - for me, something that really summed up the impact technologies have had on us all.

So my chapter attempts to take a view of Learning Technologists and the teams in which they sit within HEIs. I'm sure it's by no means a holistic picture, but I think it sets the scene for the variations in both the roles and the teams. I consulted a few tweeps along the way to ensure that what I was writing was as accurate as it could be.

As I was writing the chapter I thought about the various pressures on HE and the LT role in particular. It struck me that as we move to a post-digital era, and with continued financial pressures on institutions, the variations of the Learning Technologist will not cease any time soon. Who knows what's in store for us...

Anyway, you can buy the book, get it on major eBook readers, and there's also a cheeky PDF you can grab for free. Head over to David's blog here for more details.

Oh, and a big hats off to David and all the authors for achieving this in such a short period of time.

