Approaches to evaluating the effect of ICT on student learning

What is meant by evaluation?
A useful broad definition of evaluation is "providing information to make decisions about the product or process". However, when we try to apply this definition to "evaluating the effect of ICT on student learning", we run into problems. What is the product? What is the process? These are questions that are difficult to answer when applied to learners. Because human beings are complex creatures, and learning is a complex, multifaceted activity, we are led from evaluation into research.
While we might evaluate the usefulness of ICT in an educational context, we also need to research how ICT can affect the processes of learning, and what learning outcomes are achieved. We can call this activity evaluation research.
When many people think about evaluating the effect of an ICT innovation, they think of “asking the students”, usually by giving them a survey. While students’ perceptions about the merit of an ICT innovation are valuable, they are only one source of data, and relying on perceptions alone can give a false impression. For example, in a study where groups of students created their own interactive videos for language learning, half of the students stated that they had not learnt any language through this process. Seen on its own,
this perception may have convinced the teaching staff to discontinue the approach. However, other evidence from video ‘out-takes’ showed that student teams were indeed engaging deeply with the language, but because of the challenge of learning new technology, they were unaware of this.
Systematic evaluation research needs to go beyond perceptions, and needs to have a clear purpose. A useful distinction is between formative and summative evaluation. Formative evaluation focuses on improvements to products and processes which are being developed, while summative evaluation focuses on the effectiveness of the finished product. Formative evaluation doesn’t only concern itself with the ICT product, but also with the learning processes of students and our performance as teachers.
To summatively evaluate the effectiveness of ICT on student learning, we first need ICT which works in the way that it should. We also need to be clear about the type of learning the ICT is designed to achieve. This means we must be aware of research on student learning.
Each individual teacher in a discipline has their own predisposition towards a favoured teaching approach, and their own beliefs about learning. These beliefs often reflect the 'traditional' way that their discipline is taught, paying scant attention to the scholarship of teaching and learning.
For example, in a study of online enhancements to a basic botany course, it was found that the online resources were used heavily, and were found to be valuable by students. However, deeper investigation revealed that the resources reinforced the surface-learning nature of the course, contrary to the intentions of the teaching staff.
You must understand and be comfortable with your personal paradigm of teaching and learning. Within this paradigm, you should be able to articulate why you designed the ICT in the way that you did. It is then much easier to make judgements about how well the ICT performed.
Research paradigms
In the same way that we need to be aware of our paradigm of teaching and learning, we also need to be aware of our preconceptions about research. As academics, we work within particular research traditions, and these may limit our capacity to evaluate the effectiveness of ICT on student learning. For example, a medical scientist might want to set up an experimental evaluation study, with 'equivalent' treatment and control groups.
Reeves1 has identified a range of methodological deficiencies in experimental studies, and has suggested that qualitative approaches are more appropriate for the complexity of tertiary student learning supported by ICT.
On the other hand, a social scientist may want to carry out a qualitative study, telling the 'story' of the students in the class. Such studies are also problematic, because they focus on describing what happens, often without any judgements being made about areas which need change. Purely descriptive studies may be appropriate when we don't understand anything about the phenomenon being studied, but this isn't the case with ICT.
Reeves2 proposes a pragmatic approach to evaluating the effectiveness of ICT. Instead of comparing 'things' or describing 'things', it is more appropriate to try to discover how things work in a particular learning context, using a mixture of qualitative and quantitative sources of data.
An apocryphal example relates to an interactive videodisk in the early 1990s. Students using this videodisk to study Physics were found to perform significantly better in exams than students in previous years. However, when a subsequent researcher asked the students what had happened, they said that the videodisk was so bad that they had to get together in study groups and go to the library together.
We encourage you to question whether your disciplinary research paradigm is applicable to evaluations of the effectiveness of ICT on student learning. It is preferable to ‘step back’ from your traditional approach, and, instead of focussing on methodology, to focus on questions to ask, and how best to get answers to these questions. This ‘pragmatic’ approach is the focus of the rest of this document.
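To make the pragmatic approach concrete, here is a minimal sketch, in Python, of what it can look like to answer a single evaluation question from more than one kind of evidence: a quantitative survey item alongside themes coded from interviews. The question, scores and themes are all invented for illustration; they are not from any of the studies mentioned above.

```python
# A minimal, hypothetical sketch of mixing quantitative and qualitative
# evidence around one evaluation question. All data below are invented.
from collections import Counter
from statistics import mean, stdev

question = "Did the interactive tutorials support deep engagement with the topic?"

# Quantitative source: responses to a 5-point survey item (invented)
survey_scores = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

# Qualitative source: themes coded from interviews and open comments (invented)
coded_themes = [
    "worked through problems with peers",
    "technology got in the way",
    "worked through problems with peers",
    "revisited lecture material to finish the task",
    "technology got in the way",
]

print(question)
print(f"Survey: mean {mean(survey_scores):.1f} "
      f"(sd {stdev(survey_scores):.1f}, n={len(survey_scores)})")
print("Interview themes:")
for theme, count in Counter(coded_themes).most_common():
    print(f"  {count} x {theme}")
```

The code itself is trivial; the habit it represents is the point. Each evaluation question is answered from more than one source, so a misleading signal from one source (as in the video 'out-takes' example above) can be caught by another.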
Evaluation and educational design
There should be a close relationship between the educational design of a learning environment and evaluation. Evaluation is an integral part of the design, develop, evaluate cycle of production.
Some evaluation models explicitly map evaluation activities to phases of the development process. As one example, the Learning-centred Evaluation framework3 has four phases (sketched as a simple evaluation plan after the list):
Analysis and design: analysing the curriculum, analysing teaching and learning activities, and specifying the behaviour of the innovation.
Development: finding out if the innovation works in the way it was designed, and what is needed to improve it (closely related to formative evaluation).
Implementation: evaluating the effectiveness and viability of the finished product (closely related to summative evaluation).
Institutionalisation: evaluating the effects of ongoing use of the innovation within the institution.
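One way to keep a framework like this actionable is to record, for each phase, the questions you want answered and the evidence you will collect. The sketch below does no more than organise the four phases in that way; the questions and data sources are examples of our own, not part of the published framework.

```python
# A hypothetical evaluation plan organised around the four phases above.
# The questions and data sources are illustrative examples only.
evaluation_plan = {
    "Analysis and design": {
        "questions": ["What kind of learning is the ICT meant to support?"],
        "data_sources": ["curriculum documents", "interviews with teaching staff"],
    },
    "Development": {
        "questions": ["Does the innovation work as designed?", "What needs improving?"],
        "data_sources": ["usability trials", "student think-aloud sessions"],
    },
    "Implementation": {
        "questions": ["How effective and viable is the finished product?"],
        "data_sources": ["assessment results", "student surveys", "usage logs"],
    },
    "Institutionalisation": {
        "questions": ["What are the effects of ongoing use across the institution?"],
        "data_sources": ["longitudinal results", "staff adoption interviews"],
    },
}

for phase, plan in evaluation_plan.items():
    print(f"{phase}: {' / '.join(plan['questions'])}")
    print(f"  evidence: {', '.join(plan['data_sources'])}")
```

Writing the plan down in this explicit form makes it easier to see whether every phase has both a question and a realistic source of evidence, and whether formative and summative concerns are each covered.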
