How is technology changing education?

Michael Trucano
Senior ICT and Education Policy Specialist, World Bank

What’s the impact of technology use on education, and on learning?

This simple question is rather difficult to answer, for a number of reasons. The quick answer — that ‘it depends on how you are using it, and to what end’ — may be unsatisfying to many, but is nevertheless accurate. That said, before you attempt to assess impact, it can be rather helpful first to understand how technologies are being used (or not used) in actual practice. And before you can do this, it is useful to know what is actually available for use today, as well as some of the key factors which may influence this use. Being able to compare this state of affairs with those found in other countries around the world can help you put this knowledge into some comparative context. (Are we typical, or an outlier? Are we ahead, or behind?)

Back in December 2009, the UNESCO Institute for Statistics (UIS), the specialized agency within the UN system responsible for collecting data related to education (the World Bank’s EdStats initiative is a close partner of the UIS in this regard), published a very useful Guide to Measuring Information and Communication Technologies (ICT) in Education [pdf] that has since been used to guide regional data collection efforts in much of the world.

(The EduTech blog has looked at results from a number of these efforts, including in Asia, the Arab states, and Latin America, as well as more generally at what these efforts tell us about the state of school connectivity around the world; a regional report from the UIS on ICT and education in Africa is due out in the first half of 2015.)

Building on these efforts, it is expected that the first comprehensive global initiative will commence next year to regularly collect basic data related to technology use in education in *all* countries, big *and* small, rich *and* poor.

What sort of data might be important to collect, and what can be collected in practice?

Is the existing set of ‘indicators’ put forward by the UIS relevant and useful, or should it be reduced, enlarged, or amended based on what has been learned from efforts to collect and analyze these indicators in recent years?

To help explore such questions, the UIS brought together a ‘technical advisory panel’ comprising an acronymic soup of organizations (including ADEA, ALECSO, CETIC, European Schoolnet, ITU, KERIS, OECD, TAGI, UNESCO, World Bank) earlier this month to review lessons from the first set of regional data collection efforts and to provide comments on, and suggest possible changes to, a consolidated list of related ICT/education ‘indicators’ and a related questionnaire [pdf]. A new global survey of technology use in education, meant to be part of the regular, ongoing data collection efforts of the UIS in the education sector coordinated through national statistical offices, is due to launch in September 2015.

Much policymaking around technology use in education around the world exists in a sort of data-free zone. (Indeed, some argue that, given the lack of data upon which decisions can be made and justified, educational technology investments are the real ‘faith-based initiative’ in many education systems.) Even where there is an appetite for data-informed decisionmaking, limited data exist that have been rigorously collected, and most of these are from a limited set of countries. Insights are quite often drawn from, and extrapolations made as a result of, small pilot projects and/or from OECD (i.e. ‘highly-developed’ or ‘industrialized’) contexts and experiences. The related policy literature is replete with mentions of things like ‘transformation’ and ‘disruption’, although such formulations are often more theoretical and aspirational than grounded in observable, on-the-ground realities in schools. While there are a number of rigorous data and opinion gathering exercises of potential relevance to policymakers in higher income countries (PISA, efforts from European Schoolnet, the Horizon Reports), there is a paucity of such activities explicitly designed to be relevant to so-called ‘developing country’ contexts, let alone offer the opportunity to draw comparisons across such countries. In addition, rich experiences from Latin America are, generally speaking, not well known outside of the region, and vice versa (the fact that the region comprises a large number of countries which speak the same language is no doubt a contributing factor here).

In order to be relevant to key related policymaking discussions and exercises, we need data to inform answers to two different types of questions from policymakers: those that they currently, and commonly, ask; and those which they do not ask, but probably should. If we are to put in place an infrastructure to help answer such questions and to justify why we are proposing that different questions be asked, drawing on experiences that are truly global in scope, it would be useful to have systems and processes in place to collect observable, countable inventory-type data. Much more than this is needed, of course, but you have to start somewhere. When you can’t answer the simple questions, many policymakers tend to discount the value of what you suggest when discussing more complicated issues. And: If you can’t put into place the processes, approaches and tools to count the easy stuff, what hope is there that you’ll be able to collect data about and analyze things that are much more difficult?

Here is a(n unfortunately) common scenario:

A country buys a lot of computers. Despite large amounts of money being spent and lots of shiny new equipment arriving in schools, no measurable or observable ‘impact’ can be discerned as a result of all of this spending on new gadgets. One of the first things that many countries do in such a situation is to start making investments in digital educational content (something which of course should have been part of the initiative at the start, but better late than never). When devices + content do not seem to be making any impact, someone finally suggests, ‘maybe we should help our teachers figure out how to use all of this stuff’.

When this happens, here are the five most common questions I hear educational policymakers and planners ask:
1. Do teachers need specific training related to ICT?
2. What/which teachers need to be trained? (by subject, gender, age, etc.)
3. How much training should they receive?
4. What should the training include?
5. Is there a recognized standard/certification we should use?

Whether or not these are indeed the questions that should be asked at this point (i.e. if these are the questions that get at things that are really important and impactful), these are the questions that, in my experience working in middle and low income countries, policymakers as a matter of practice typically ask. Faced with such queries, it is helpful to have data on hand which can help answer them, as well as serve as pathways to help ask better questions. If you have data that you feel are reliable, and which come from a wide variety of countries and developmental contexts around the world, the process is made a lot easier.

When constructing data collection instruments and processes to help with this sort of thing, a number of dilemmas may need to be considered. To what extent, for example, should you seek to document what is already apparent versus introducing concepts or perspectives that might be useful for policymakers to consider? The act of observing, after all, may over time change what is being observed. If, as part of a global data collection effort, certain types of data are requested that policymakers in a given country have not traditionally considered important, nor tracked in any significant way, but which (for example) research and experiences in other countries suggest might be of keen potential policy relevance, might it be useful to include them as part of an effort of this sort? Perhaps. That said, those who do surveys for a living often say that overall data quality may decline as you ask for more and more data. Certain types of ICT/education data may not be easily (or inexpensively, or reliably) collected. Trying to serve three different objectives at once — advocacy vs. policy relevance vs. documentation — may mean that you fail at all of them.

If you believe it is important to, as the saying goes, ‘measure what you treasure’, it is important to remember not to simply treasure what it is that you can easily measure. Inventory-related measurement exercises (How many computers are there in schools? How many schools are connected, and at what speed? How much technology-related training do teachers get?) are important first steps, but these may well suggest more about potential access to tools for teaching and learning than about the quality or impact of such activities. When it comes to technology use in education, access-related indicators are useful, but data related to the actual frequency of such use (including ‘time on task’) are most likely much more important in the end for decisionmakers. Data about the nature of this use (something which may begin to go beyond issues of access and touch on issues related to quality) are potentially even more useful.
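To make the distinction concrete, here is a minimal sketch (in Python, using invented field names and toy figures, not drawn from any actual UIS dataset) of how access-oriented inventory indicators such as a learner-to-computer ratio or the share of connected schools might be computed from school-level administrative records. Frequency- and nature-of-use data would require much richer survey instruments than anything captured here.

```python
# A minimal, illustrative sketch: computing simple access-related inventory
# indicators from hypothetical school-level administrative records.
# Field names and figures are invented for illustration only.

schools = [
    {"name": "School A", "learners": 480, "working_computers": 40, "connected": True},
    {"name": "School B", "learners": 620, "working_computers": 15, "connected": False},
    {"name": "School C", "learners": 300, "working_computers": 60, "connected": True},
]

total_learners = sum(s["learners"] for s in schools)
total_computers = sum(s["working_computers"] for s in schools)

# Learner-to-computer ratio (an access indicator, in the spirit of UIS ED4)
learner_computer_ratio = total_learners / total_computers

# Proportion of schools with an Internet connection (another access indicator)
share_connected = sum(1 for s in schools if s["connected"]) / len(schools)

print(f"Learner-to-computer ratio: {learner_computer_ratio:.1f} to 1")
print(f"Share of connected schools: {share_connected:.0%}")
```

Note that nothing in such an inventory says how often, or how well, those computers and connections are actually used — which is precisely the limitation described above.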

Different processes and systems may be better than others at collecting data that can yield policy-relevant insights. Administrative surveys such as those overseen by the UIS may be better at collecting access-related data, for example, while more involved, and expensive, survey activities like those supported by CETIC in Brazil may provide greater insight into the frequency and nature of ICT use. While acknowledging the important caution repeated by statisticians and social scientists (but often not fully understood by policymakers or politicians) that ‘correlation is not causation’, it may still be useful to consider data about access to ICTs in schools, and about the nature and frequency of the use of such tools, as part of related policy discussions.

With this context in mind, here are a few questions and comments related to the existing UIS survey instruments (the full list of ICT in education indicators is provided for your reference at the bottom of this blog post), to give you a sense of some of the related discussions that are occurring:

  • Given that the existing official global standard definition for ‘broadband’ is so low as to be of little practical utility for policymakers or relevance for schools, should the term even be used as part of global ICT/education data collection efforts?

Broadband, for better or worse (mainly for worse, in my opinion), is currently officially defined at an international level as a minimum download speed of 256 kbps, a rate at which a single student on a school’s ‘broadband’ connection may struggle to watch a video on YouTube. Beyond issues related to definitions, there are all sorts of practical challenges when it comes to defining key measurable attributes of Internet connectivity in schools: Who can make use of this access, and to what end? Do we care about both download *and* upload speeds? How do we account for caching and ‘overhead’ and bandwidth shaping? Do we try to collect data about speeds based only on what is available and observable today, or do we try to ‘future-proof’ the questionnaires, knowing that things will improve over time, and so include bands of access speeds that are today perhaps only relevant for schools in places like Korea and Singapore?
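A rough back-of-the-envelope calculation (a sketch only, using commonly cited streaming requirements rather than any official benchmark, and a hypothetical lab size) helps show why a 256 kbps threshold tells policymakers so little about what a school connection can actually support:

```python
# Back-of-the-envelope sketch: how far does a '256 kbps broadband' connection
# stretch in a school? The streaming figure is a rough, commonly cited
# estimate, not an official benchmark; the lab size is hypothetical.

link_speed_kbps = 256        # the current international 'broadband' floor
sd_video_kbps = 1_000        # ~1 Mbps: a rough figure for standard-definition video
students_in_lab = 30         # a hypothetical computer lab

# Even one student streaming video needs several times the whole link.
print(f"Share of one SD video stream the link can carry: "
      f"{link_speed_kbps / sd_video_kbps:.0%}")

# Shared across a lab, each student gets only a small sliver of the link.
per_student_kbps = link_speed_kbps / students_in_lab
print(f"Bandwidth per student if shared by {students_in_lab} students: "
      f"{per_student_kbps:.1f} kbps")
```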

‘Future-proofing’ survey questions is not always as easy as it may seem, however:

  • If the idea is to collect data about what will be most relevant in the future, should we really be asking questions about the use of radio, as is done in the current survey instruments (ED1)?

If you were to ask most education policymakers around the world about the technologies that they are most interested in, and will be spending the most money on going forward, very few of them would place ‘radio’ high on their list of priority areas for investment. Indeed, radio is not integral to the ‘future of education’ in most places … but it might be very integral in a few of them, especially those impacted by acute crises. For example, the Ebola crisis has brought renewed discussions of the use of educational radio to reach children in places like Liberia, Sierra Leone and Guinea who have not been able to attend school for months as a result of the outbreak of that deadly disease. Such realities suggest that caution is in order when deciding that certain technologies are ‘no longer relevant’.

  • Given definitional challenges related to documenting ‘computer-aided instruction’ (CAI) and ‘Internet-aided instruction’ (IAI), two terms that relate to existing indicators (ED22 and ED23, respectively), might it be better to omit related questions from the official questionnaires entirely?

The more room there is for interpretation, the greater the likelihood that terms will be interpreted differently, especially across languages and cultures and education systems. Challenges related to explaining what (e.g.) exactly CAI and IAI are meant to represent, and how you can observe and count related activities so as to provide useful related data, may make such terms too problematic to utilize in data collection efforts such as these. Similar comments could perhaps also be made about collecting data on the use of ‘open educational resources’ (if you yourself don’t know what OERs are, or if you feel you do know but your definition differs in important ways from how others may understand the term, you’ll probably question the utility of asking about them as part of global data collection efforts).

  • Are there key policy goals for which no relevant indicators have been identified?

If so, perhaps we should direct some of our attention to exploring how we might develop some of them. Some examples for potential consideration:

  • Perhaps *the* frontier issue confronting educational policymakers in many highly developed countries today relates to data privacy. This is a topic that is inextricably linked with the use of ICTs and one for which there is no related indicator in the current framework.
  • Another hot (and politically charged) topic of more immediate policy relevance to a larger number of countries around the world relates to the filtering of internet access in schools; this too is not addressed in the current indicator framework.
  • Efforts like the ‘Hour of Code’ and the (re-)emergence of ‘coding’ as a high-profile topic of widespread instruction within a few education systems (like England and Estonia, for example) have meant that policymakers in many countries are wondering whether a greater emphasis on computer programming and computer science makes sense within their education systems, and the extent to which related efforts are happening in other places.
  • One educational goal given a lot of rhetorical attention in many policy documents related to the use of new technologies is that of promoting ‘personalized learning’. Definitions for, and understanding of, what this means in practice may vary widely (and in some cases may not be truly understood at all). One crude proxy by which this phenomenon has been measured in many places (and which is included in the UIS indicator list as indicators ED4 and ED4bis) is the learner-to-computer ratio, with the achievement of a “1-to-1” ratio signaling that ‘personalized learning’ has arrived. While such device-centric perspectives may have some utility, perhaps a better proxy for this sort of thing might be the existence within an education system of a unique identifier and log-in for each student, especially as more and more services and educational activities are mediated by digital devices and migrated to ‘the cloud’. This sort of indicator may also be useful to the extent that it can serve as a proxy for the relative sophistication of an education system’s approach to using technology (this sort of thing is hard to do, and typically takes lots of effort and expenditure). In addition, it might help indicate where privacy issues might be of increasing relevance and acuteness.
  • The existence of high stakes, online summative (e.g. end-of-term) assessments (like those that exist today in places like Lithuania, Georgia and Slovenia, and which are coming to the United States as the impact of testing related to the ‘Common Core’ is felt) is another potentially quite useful phenomenon to consider measuring. Indeed, I can perhaps think of no better indicator of where an education system will be making huge new investments related to technology use than in cases where new policies call (for better or worse) for the introduction of high stakes, online summative assessments. (This would also be something that would be pretty easy to measure.)

Not all policy-relevant questions can be answered with the help of data collected as part of administrative surveys of the sort overseen by the UIS, of course. Certain types of data will need to be collected by other means. The data that *are* collected will need to be regularly examined to see whether collecting them is really in the end useful in any practical way, given the time and costs and disruptions involved. (Just because a question is potentially interesting doesn’t mean we should spend a lot of effort trying to answer it!) That said, while they are only one part of a much larger puzzle, efforts like those being led by UIS to collect a set of globally comparable data related to the use of ICTs in education should be welcomed by policymakers, funders, and various stakeholder groups in the public, private and civil society sectors with an interest in related issues. Here at the World Bank, we expect that these data will (to cite just one use) form a core part of what we will be doing related to technology use in education under our SABER initiative, and we look forward to continuing to participate in this global effort as it unfolds.

This article was first published by The World Bank’s EduTech Blog. Publication does not imply endorsement of views by the World Economic Forum.

Author: Michael Trucano is the World Bank’s Senior ICT and Education Policy Specialist, serving as the organization’s focal point on issues at the intersection of technology use and education in middle- and low-income countries and emerging markets around the world.

Image: A student studies at a library at Kim Il-sung University in Pyongyang April 11, 2012. REUTERS.
