UK Digital Poverty Evidence Review 2022

Over the last year, I’ve researched and written the 2022 UK Digital Poverty Evidence Review for the Digital Poverty Alliance, which launched yesterday in the House of Lords.

The report synthesises a great deal of important work on digital exclusion and poverty, and it was impossible to cite everything or give each topic as much space as it probably deserved (you surely wouldn’t read a 2000-page report – who would?!). But I’m a fan of “showing your work,” so I’m making the list of references I consulted available for anyone who wants to dig even more deeply into the research behind the report (as a Zotero library).

The report spotlights three big-picture myths and three game-changing shifts that we need to address to tackle digital poverty in the pervasively digitised world of 2022. These are:

Big picture myths

The kids are alright

There are important demographic divides between those who are online with high levels of skills, and those who are offline with low levels of skills. On the whole, people over the age of 65 are more likely to be offline. This rather coarse statistic has given rise to the myth that young people are naturally “digital natives”: having grown up with technology, they will acquire the necessary digital capabilities simply through high exposure. The evidence increasingly refutes this assumption, with factors such as employment status, education, disability, income, and self-confidence cutting across age and impacting people’s level of exclusion. Often, unequal access to technology is a feature of schooling, with a growing inequity between affluent schools with more access to and choice about technology, and less well-resourced schools with more limited access and choices. As a result, technology provision in education is deepening existing differences in life chances.

Access is access

In the early days of digital divide research and policy, digital inequality was mainly thought of as the gap between those who have internet access and those who do not. This was called the “first-level digital divide,” and it has been thoroughly challenged by decades of further evidence showing that there are second- and third-level divides in skills, usage, and outcomes. Still today, digital inclusion is often treated like a switch that can be flipped on once and stays on for life. However, evidence shows that digital inclusion is a process rather than an event. Differences in quality, reliability, location, and experiences of access all influence whether an individual will be able to make the most of the digital world.

Digital exclusion will diminish or disappear over time without intervention

There is a common misconception that time will solve three of the biggest factors in digital exclusion in the UK – exposure, motivation, and confidence. The logic goes that the more people have to do online, the more time they will spend online, and the better acquainted with the digital world they will become. However, the digital divide has remained a problem for digitising societies since the beginning of the digital revolution – lower prices for hardware, more devices, and widespread connectivity have not solved digital exclusion. This is because digital inclusion is relative: the benchmarks are always changing as technology changes, and the solutions depend on social, political and technical responses to inequality. Ultimately, only concerted top-down and bottom-up efforts to address deep-rooted societal inequalities will help make progress on digital poverty. This dynamic approach demands thinking big and small at the same time, and putting the needs of people first.

Game-changing shifts

Digital is not a separate domain, sector, or agenda

In our increasingly digitised world, the division between online and offline has become completely blurred. One of the tensions in dealing with digital poverty is keeping the spotlight on digital and its contribution to disadvantage, while also stressing that digital is pervasive and cannot be treated as a separate issue or programme. A focus on digital poverty, like the one taken in this report, could be misconstrued to suggest that “digital” constitutes its own domain, separate from or on top of other domains of social life, such as education or work. The reality is that digital is embedded in all domains. In the words of Ofcom Chief Executive Dame Melanie Dawes, digital is not a separate sector.

The digitally excluded are still digital citizens

Everyone is part of a digital society — whether they are online or not. “Datafication” is the process by which information about people is turned into data that can be processed by computers, and this occurs behind the scenes, whether the datafied person is digitally literate or not. It is important to recognise how the digital world affects everyone – even people who are not actively online or have long periods of digital absence – especially as more of our everyday lives are digitised through the Internet of Things and Smart Cities, for example.

The digital world can be unfair by design

A growing body of literature has emerged on the issue of algorithmic bias and automated discrimination. Tackling the determinants of digital poverty will entail an awareness of the assumptions that go into the design and deployment of technology and how these can replicate and deepen certain inequalities and exclusions. Digital poverty is not just about access to connection and devices; it is also about ensuring that digitised, algorithmic systems do not perpetuate, deepen, or create new disadvantages for people. The automation of many processes and services and the invisibility of algorithmic “decisions” can create a false impression that these decisions are objective and neutral. When frontline staff in essential services rely on these outputs, it can deepen inequalities faced by already disadvantaged groups. In addition, the design of platforms and technologies can actively exclude, mislead, or disadvantage certain users. For example, websites that have not been designed to meet the Web Content Accessibility Guidelines (WCAG) exclude assistive technology users and other disabled users.

The evidence also pointed to several key recommendations:

Digital poverty does not respect sector siloes, and neither should the recommendations for tackling it. These recommendations have implications for all sectors – Government, local authorities, industry, the private sector, the third sector, and academia or the research sector. They have also gone on to inform five specific Policy Principles, developed in consultation with the Digital Poverty Alliance community to take the agenda forward. These recommendations and principles will contribute to the Digital Poverty Alliance’s forthcoming National Delivery Plan.

  • Affordable and sustainable inclusion: Digital inclusion must be made more affordable and sustainable through both stop-gap digital inclusion initiatives, such as device distribution, and long-term community investment that recognises digital inclusion as dependent on broader (non-digital) community resilience and resources.
  • Inclusive and accessible design: Technologies, platforms, and digital services must be designed to be safe, inclusive, accessible and privacy-protecting from the outset, through participatory design – involving affected communities in the design of technologies that affect their lives – and through effective and enforceable regulation.
  • People-centred and community-embedded interventions: Digital inclusion policy, interventions, and research need to meet people where they already are by fostering and utilising existing community-based, formal, and informal spaces for inclusion, and focusing on helping people meet their own goals and objectives.
  • Skills to engage and empower: The skills needed to tackle today’s pervasive and complex digital world are more than technical competencies, like typing and internet searching. Digital literacy must treat digital as part of civic life, encompassing critical thinking and awareness of data rights, privacy, and consent.
  • Support for the whole journey: Digital inclusion needs to accommodate a shifting and increasingly complex digital landscape by supporting people throughout their entire lives and meeting them where they are in that journey – in school, on the job, through the health and care system, and more. Life circumstances and social context are important contributors to digital poverty, so this requires a focus on the offline, social dynamics of disadvantage.
  • Building the evidence base: Although a lot of research on digital exclusion and poverty exists, there are some significant gaps. Research needs to consider digital poverty in relation to social, economic, political, and health inequality, and vice versa – these issues cannot remain siloed. Data on digital poverty needs to be both quantitative (statistical) and qualitative (interview, observation, and lived experience-based), and it needs to be representative, comparable, longitudinal, and freely available to the public and research community.

And these recommendations went on to inform the Digital Poverty Alliance’s Five Policy Principles:

Policy Principle 1: Digital is a basic right. Digital is now an essential utility – and access to it should be treated as such.

Policy Principle 2: Accessing key public services online, like social security and healthcare, must be simple, safe, and meet everyone’s needs.

Policy Principle 3: Digital should fit into people’s lives, not be an additional burden — particularly the most disadvantaged.

Policy Principle 4: Digital skills should be fundamental to education and training throughout life. Support must be provided to trusted intermediaries who have a key role in providing access to digital.

Policy Principle 5: There must be cross-sector efforts to provide free and open evidence on digital exclusion.

Opening Statement at the 2022 National Digital Conference

Presented at the Digital Leaders 2022 National Digital Conference, Understanding the Evidence Panel

Thank you very much for having me here. Today I’m mostly going to speak from my experience writing the 2022 UK Digital Poverty Evidence Review for the Digital Poverty Alliance, which is launching next week.

For the review, I consulted more than 200 sources of evidence on the five determinants of digital poverty from academia, the third sector, industry, and Government. The five determinants, as outlined by the Digital Poverty Alliance, are Devices and Connectivity, Access, Capabilities, Motivation, and Support and Participation. As you can probably imagine, these headings encompass quite a wide range of issues and supporting data, and the reason comes down to this: digital poverty is at least as much a social issue as it is a technological issue. So, to tackle it, we need to know more about people – their day-to-day lives, their hardships, the inequalities they face – and we need to build technologies that take this diversity of experiences and forms of exclusion into account.

By way of an opening statement, I’m going to highlight three things about where we need to be looking for the evidence to end digital poverty, based on the evidence review that’s coming out on Monday in full. These are top-level observations, and if you want to dig into them, obviously check out the report on Monday, and I’ll be making my entire list of source material – including things that aren’t cited in the report – available then, too.

First, we need to look beyond the longstanding absolute divide between digital haves and have-nots – the classic online/offline distinction – to focus instead on relative differences and divides. On the face of it, the UK is a highly connected country: Ofcom reports that 94% of households have internet access. But these aggregate statistics can obscure the ways that the digital divide is deepening for some people – especially people who are already disadvantaged. There are regional divides, with rural areas, especially in Scotland, Wales, and Northern Ireland, the least able to access decent broadband. There are divides based on income, with households in the lowest socio-economic grades more than 15 points more likely to use only a smartphone to get online compared to the highest socio-economic grades. There are divides based on education, with those lacking formal qualifications being 2.8 times more likely to say the internet is “not for them,” according to research conducted for Good Things Foundation by Simeon, who is here on the panel with us. There are divides based on disability, with disabled adults 18 points less likely to be recent internet users, according to ONS. Factors like the reliability of your connection, the speed of your connection, and the privacy of the spaces you have at your disposal to connect also all affect your experience of the digital world.

Second, digital inequality – and therefore digital poverty – is becoming a very complex issue in the digital world today because of what scholars call ‘datafication,’ meaning the collection of information about people and the processing of that data, which now underpins most digital services. 

It’s not just about whether someone has an internet connection or an internet-enabled device anymore. It’s also about whether enough or too little data about them is being collected and whether data-driven decisions are putting them at a greater disadvantage, for instance in risk-scoring for housing or insurance. So we need to look at the evidence around issues like algorithmic bias, digital tracking and surveillance, and the commercial sale of data to understand how people are benefitting or suffering from digitisation. And all these issues are also contributing to what people think about the digital world – their motivation. People care more and more about privacy, and this affects their trust in digital technologies. Lloyds Bank reports that over half of people offline say they’re worried about their privacy. And the Centre for Data Ethics and Innovation has found that people with low digital familiarity are the most likely to be worried about data security and risks. At the same time, people generally don’t understand how their data is collected and used, or how to identify risks to their data or their access to information. Again, CDEI reports that infrequent digital users mostly say they know little or nothing about how data about them is used or collected, and even in the general population, fewer than half of people say they know these things. These issues are all factors contributing to digital poverty.

Third, and this is related to the second point, we need to explore and address the double-edged sword of inclusion. What do I mean by that? Well, digital poverty doesn’t end when people finally get online or have access to a reasonable device. It’s not a switch that gets flipped from “off” to “on,” and now people will be able to experience the positive outcomes of digitisation. People may actually be exposed to more harms due to their digital disadvantage, so we need to include evidence about what those harms are, who is most likely to be affected, and how to mitigate them. This means building digital technologies and systems that are safe, accessible, and privacy-enhancing.

In summary, the evidence we need to take into account in order to tackle digital poverty goes beyond what we’ve traditionally relied on – statistics on digital connections and skills – and now needs to encompass all the complexities of a data-driven world and how these are embedded in people’s social contexts.

Queer Rural Connections

When my friend and project partner, Tim Allsop, approached me with a concept for a research, film, and theatre exploration of queer rural life, I was thrilled. Tim himself comes from a rural upbringing and had begun reflecting creatively on the impact of that context on his identity and understanding of queerness – which he explores in a beautiful series of essays on Medium.

We decided to combine ethnographic and oral history interview techniques with multi-media storytelling. Tim adapted our first set of interviews into a play (The Stars are Brighter Here), and we collaborated with videographer Suzy Shepherd and musician Conor Molloy to edit some of those interviews into a documentary film, which was selected this year for the BFI Flare Festival.

A transgender woman, Lauren, applies lipstick in a mirror in this still from the film Queer Rural Connections, which features the official BFI Flare Festival logo.

In this film, we meet interviewees who live in and around rural Suffolk and represent several different generations of LGBTQIA+ experiences and activism. They reflect on how being queer and rural has changed over time, a push and pull of connection and disconnection, as social progress has meant that queerness exists more openly in the countryside.

More on this project, and the film (including opportunities to view it) coming soon… watch this space. 🙂

Your computer problem solved – in exchange for cake.

Original illustration by Gustavo Nascimento. Creative Commons BY-NC-SA.

I have been studying community networks – internet networks built, owned, and operated by local communities – since 2018. There is a global community network movement of sorts, but I started this research with one close to “home”, in the rural Northwest of England. Broadband 4 the Rural North started with a handful of tenacious rural residents, who were fed up with their lack of internet connectivity and the unfulfilled promises of England’s leading telecommunications providers to reach their rural homes. They formed a community benefit society, raised funds themselves, and built the fastest and most affordable fibre-optic network in the country, with volunteers in every village mapping the routes and digging the ditches for the cable.

During the pandemic, communities in Lancashire banded together, using human networks they had developed while building the internet network for B4RN to get supplies to people who needed them. They also ran an online “Computer Club” via Zoom to stay connected and offer technical support to B4RN members.

A paper sign at the B4RN Computer Club, reading: “B4RN Computer Club. Your problem solved – in return for cake (or biscuits, chocolates)!”

I’ve done hours and hours of interviews and observations with B4RN, and I finally put together a podcast with some of the audio I’ve collected over the years. GenderIT gave me the excuse and the opportunity, as part of a great collection on community resilience during the pandemic. In this recording, I talk mainly about the volunteer-led B4RN Computer Club – how it has evolved from the in-person Computer Club hosted every Friday at their modest headquarters in Melling, Lancashire, into an online format during the pandemic, and how the club helps bridge the digital divide by sharing knowledge with local people about how to make the most of their internet connections.

I wanted to introduce listeners to these people I’ve gotten to know over the last few years – not just their dogged commitment to helping people get online and feel confident about it, but also their humour and camaraderie. The dynamic in these Computer Club meetings shows how B4RN is no ordinary telco.

Five Essays over Five Days at a Digital Poverty Summit

I’m currently writing an evidence review on digital poverty for the Digital Poverty Alliance, a new charitable organisation focused on connecting and focussing the digital poverty agenda in the UK. During this time, the Digital Poverty Alliance also asked me to attend, observe, and write a summary for each day of a digital poverty summit it had supported alongside several All-Party Parliamentary Groups related to digital issues. I’m reposting those essays here. They’re all available on the Digital Poverty Alliance blog.

Day 1: Digital Capability and Understanding – Digital Skills in the Workplace and the Future of Work

The future of work is digital, and the UK has some catching up to do if it aspires to a digitally capable workforce fit to meet that future. 

This was the predominant message from the first instalment of the Digital Poverty and Inequalities Summit, hosted yesterday by the APPG for Digital Skills. Invited contributors included representatives from TechUK, FutureDotNow, Google, Harvey Nash Group, BT, City & Guilds, Community Trade Union, and Prospect.

Despite encouraging figures indicating that there are 5.6 million more people with foundational digital skills as a result of upskilling during the pandemic, Lloyds Bank reports that 11.8 million people (36% of the workforce) still lack Essential Digital Skills for Work. Thinking ahead, the digital workplace is changing more rapidly than ever before, rendering digital skills a constantly moving target. By some estimates (published by the Confederation of British Industry and McKinsey), 90 percent of the UK workforce will need to reskill by 2030.

Several recommendations surfaced at the roundtable to address important gaps:

  • Evidence

We need to understand more fully what working life looks like for adults in the UK today, as well as understanding the link between digital skills and all aspects of life (e.g. health, recidivism, and of course productivity), at both a personal and societal level. Questions were raised around the role of the Government’s existing significant investment in the What Works Network in generating evidence-based insights about digital across sectors, to enable more holistic policy and social impact.

  • Education

The pathways between education and work are not adequately preparing young people for a digital workplace. Formal education needs a stronger emphasis on digital skills across the whole curriculum, not just IT, informed by the needs of the employment market; and skills training needs to be available for the many people who do not pursue university education, including on-the-job training for both younger and older employees.

  • Lifelong inclusion

People constantly need new skills to be able to engage with a changing digital world. One of the places where people have the highest exposure to digital skills is in the workplace, on the job. When people fall out of employment or retire, their skills can deteriorate, so there needs to be provision for free, lifelong learning at different life stages and circumstances.

  • Prioritisation from the top

Digital skills delivery and digital skills policy are often fragmented across different sectors and at different levels (from the community to the national level). Digital capability needs to be a clear strategic national priority, communicated across government from the highest levels. As recommended by the House of Lords Covid-19 Select Committee, this should be led by the Cabinet Office and supported by the respective departments, such as the Department for Education and HM Treasury, to realise the benefits to UK PLC as well as for social and economic inclusion.

  • Signposting

Several speakers stated that the problem in delivering digital skills is not supply but demand. A range of digital skills training programmes exist — Learn My Way, the Lloyds Academy, Google Garage, iDEA, and the new skills boot camps were all mentioned — and one-to-one help exists in Online Centres across the country. But people often do not know where to go for help. There needs to be more cross-sector signposting of available skills resources and training for people at the first point of contact, when they need it, and follow-through to make sure they can access them. The Government has a key role to play here, as it manages many of the most important channels to the most vulnerable people, across health, education and housing, for example. (Learn more about how the Digital Poverty Alliance Community Board aims to support this.)

  • Motivation and skills go hand-in-hand

Both capability and motivation are determinants of digital poverty, and they are very closely linked. As Liz Williams from FutureDotNow put it, “If the pandemic hasn’t motivated people, what’s it going to take?” Several speakers highlighted how a lack of exposure, confusion regarding the language we use to talk about digital skills and the digital world, and/or a lack of confidence can be de-motivating for people in acquiring digital skills. We need to tackle motivation alongside skills from education to employment and beyond.

Although it is impossible to cover the full range of issues relevant to digital skills in the workplace in just one roundtable discussion, there were some important themes missing from the conversation.

  • Locating responsibility for digital skills

Discussions of digital skills in the workplace tend to take the expectations of employers and industry as the default perspective. The question therefore often starts from the same premise. What do employers need? What does the economy need? 

Of course, this is an important perspective because people do need skills that are required in the job market. However, some roundtable participants acknowledged the risk of this default point of view: it ignores users’ (people’s) experiences. And in doing so, it individualises the ‘problem’ of digital skills — situating the responsibility for digital skills on the individual rather than placing an equal burden on the system. What is the responsibility of the job market, or even of the designers and developers of technologies and digital systems themselves? When digital platforms and technologies are not built to be user-friendly for marginalised users (such as disabled people, people who speak English as a second language, people who have left education, or people who lack textual literacy), the experience of being online can be disheartening and de-motivating, if not discriminatory.

In research that colleagues and I conducted in public libraries, we found that people face many simple digital barriers in accessing jobs that otherwise require minimal digital skills. For example, the proliferation of online-only job applications for low-paid, hourly work blocks many digitally excluded people from even applying, and it may also de-motivate them from acquiring any further digital skills.

Therefore, additional important questions should include: whose responsibility are digital skills and literacy, and how can the job market be made less alienating for people experiencing digital exclusion? This is a shared responsibility across Government, business, and the tech sector.

  • Critical and abstract thinking skills

In our increasingly complex digital world, many of the digital skills needed to thrive not only in the workplace but in everyday life are not technical skills; they are critical thinking and abstract problem solving skills. And they diverge in important ways from the problem solving skills outlined in the Essential Digital Skills framework. 

Ofcom has identified some of these issues, reporting that people are increasingly unlikely to validate online information sources, have limited understanding of the ways companies collect and use personal data, and fail to accurately identify paid-for online advertising. The Me and My Big Data project found that many people in the UK lack data literacy and feel disempowered in the way their data is extracted and used. And in my own research, I have found that digitally excluded users often struggle most with constructing an abstract set of steps in their mind to get to a digital end-goal. Although they may have basic competencies, like logging into Wi-Fi, this abstract thinking is a key digital barrier.

Therefore, other important questions should be: how can we cultivate both technical and critical thinking skills among even the most basic digital technology users? Can/should the digital world be designed to require less abstract thinking in the interest of becoming more inclusive?

  • Public participation

Both of these themes point to the need for greater public participation in the design of the digital workplace, digital technologies and systems, and digital skills learning programmes. There is a notable lack of lived experience perspectives — the views of ordinary people experiencing compound forms of inequality — in high-level conversations about digital skills. Tackling the motivation side of the capability equation will involve not only identifying what skills people need, but crucially what skills they want. We need diverse voices in the room from, for instance, the disabled community, in order to meet people’s needs first.

The recommendations from the roundtables will inform a forthcoming 2022 Digital Poverty Evidence Review for the Digital Poverty Alliance, in which I will explore these further themes in greater depth, drawing on evidence from academia, industry, Government and the third sector. Read the interim report here.

If you have a single suggestion about what Government could do that would make a difference in the area of digital capability, e-mail: digitalskillsappg@connectpa.co.uk.

This roundtable was hosted by the APPG for Digital Skills, in collaboration with the APPG Data Poverty, APPG PICTFOR and supported by the Digital Poverty Alliance.


Day 2: Data Poverty

If there is one digital exclusion issue that the COVID-19 pandemic has spotlighted like never before, it is data poverty. And now that the light has been shed, there will be no looking away.

Data poverty was the topic of the second day of the Digital Poverty and Inequalities Summit hosted by a cross-party coalition of All-Party Parliamentary Groups and MPs and supported by the Digital Poverty Alliance. The relatively new APPG on Data Poverty, which hosted yesterday’s roundtable, is a direct response to the urgent realisation, as one speaker put it, that “the digital divide comes with exclusion from society more generally.” 

Last year’s national lockdowns saw schools, workplaces, and public spaces close to prevent the spread of the coronavirus – a sharp disruption to everyday rhythms that suddenly revealed how many people were without the basic connectivity needed to continue life online, let alone level up. According to Citizens Advice, 2.5 million people have fallen behind on broadband bills during the pandemic. Ofcom reports that approximately 9 percent of households with children lacked access to a laptop, desktop, or tablet. Around 17 percent did not have consistent access to a suitable device for their online home-learning, rising to 27 percent of children from households classed as most financially vulnerable. The recent Nominet Digital Youth Index finds that a third of young people do not have broadband at home. Even among those with home broadband, 13 percent say their connection is not good enough for everyday tasks and 52 percent say there are things they can’t do online due to poor connectivity. A deluge of media coverage and personal stories powerfully illustrated how many British families have faced impossible choices between necessities during the pandemic: “pay the wifi or feed the children”. As the UN Special Rapporteur on Extreme Poverty articulated (in 2019), in a pervasively digital world, the digital divide is a question of basic human rights.

But the roundtable speakers, who represented organisations including The Good Things Foundation, Jisc, BT, Glide, Vodafone, and Nominet, all said that this was a problem well known to them before the pandemic. The cost and accessibility of connectivity and devices is a determinant of digital poverty. According to Lloyds Bank, nearly a third of those offline said that cheaper costs would encourage them to use the internet. Ofcom finds that 10 percent of internet users go online with a smartphone only, rising to 18 percent among those in socio-economic group DE. These issues are closely linked: when people do not have or cannot afford a home broadband connection and rely on mobile internet instead, they are paying for more expensive data.

The entangled nature of data poverty (how much is about access? affordability? devices?) makes it difficult to define. And the definition often hinges on what a minimally acceptable standard would look like. The Good Things Foundation says that means data that is cheap, handy (easy to access), enough (in terms of speed and quantity), safe (to ensure privacy and protect users from harms), and suitable (appropriate for an individual’s life circumstances). Nesta identifies data poverty as an inability to engage fully in the online world due to barriers including low income, not being able to get a data contract, lack of privacy, and local infrastructure. 

But the roundtable discussion demonstrated that precise definitions are less important than understanding the vectors of the problem. Data poverty — like poverty more broadly — is a product and producer of both resource and social exclusion. It is contextual, embedded in individual circumstances. And it is relative, meaning that the benchmark of exclusion changes as the nature of digital technology changes. 

The imperative now is to unite around the urgency of the issue, as captured in the key takeaways from the session:

  • Government must take a leadership role

Eradicating digital poverty cannot be achieved in isolation, and it cannot be accomplished in silos. Government needs to lead national efforts to tackle data poverty. Despite the rapid rollout of many innovative schemes to fill an emergency gap during the pandemic (see the next point), many speakers said that people often do not know about the schemes that are available. In part, this is due to the piecemeal and fragmented array of partnerships and programmes, which are necessarily led by industry and the third sector. When there is market failure, as there is in this case, the Government must step in. The other part is the user journey, with attendees noting that where there are low-cost offers, these are often too complex or hard to find for the people they aim to support. This is reflected in low take-up numbers. 

One speaker remarked, “Sometimes it feels like the Government is just standing back and saying, ‘oh, thank you very much.’” Data poverty impacts society and citizenship, yet it is non-governmental sectors that are having to step in and bridge the gaps — out of sheer public need. Government can do more, and there are many people and organisations who want to help.

Some recommendations included zero-rating essential services and implementing a universal service levy on companies that reap the greatest reward from digital engagement, many of which have saved billions in costs through digital transformation, savings that have not been passed on to their customers. The Government has saved, too, and these windfalls should be re-invested in digital equity and inclusion. Another recommendation is to impose a social tariff on all operators — an initiative BT has already undertaken. As community members of the Digital Poverty Alliance pointed out, at the very least the Government and big business can signpost to available affordability schemes, subsidise social broadband tariffs, impose regulation requiring minimum standards of connectivity, offer help with paying bills, and help to identify the people most in need through their existing channels.

  • We need long-term solutions that are sustainable beyond the pandemic

Industry and the third sector stepped up to meet public need during the pandemic with stop-gap measures that helped hundreds of thousands of people. To name just a few: BT, Openreach, Virgin Media, Sky, TalkTalk, O2, Vodafone, Three, Hyperoptic, Gigaclear, and KCOM took measures to lift data allowance caps on their broadband services; DevicesDotNow and others distributed donated and refurbished devices to families in need; and the Department for Education partnered with telecom companies to provide free data to disadvantaged families through schools.

But there is a clear need to develop long-term solutions to data poverty that are sustainable beyond the crisis moment. For example, what happens to the group of children next year who enter school without home access, or to the family whose limited-time free offer of connectivity runs out so they must again choose between food and connectivity? According to the Association of Colleges, 36 percent of colleges in England do not have sufficient access available, even in school. If industry and the third sector are meant to continue support for disadvantaged families and individuals, there must be a long-term plan in place to fund these initiatives and to address the multiple factors that contribute to digital poverty, including access to adequate devices and consumer choice (the ability to choose among fairly priced competitive internet service providers).

  • Data poverty is poverty

A clear theme that emerged in the roundtable was the intersection between data poverty and socio-economic deprivation. Although data poverty is a relatively new concept, it is not distinct from poverty writ large. Rather, the digital divide is a determinant of poverty, just like the inability to afford heating or inadequate nutrition. People who lack digital skills also often pay more for utilities and earn less per year. In short, data poverty contributes to the poverty premium. And in the midst of our most profound modern health crisis, research increasingly shows that digital exclusion is a determinant of health outcomes.

For these reasons, it is important to consider data poverty in the same terms in which we consider other forms of deprivation. And we should ask: what is the minimum standard needed to survive in our digital world? Projects like the newly minted Minimum Digital Living Standard research network will aim to address this issue, recognising that poverty is often defined by context as much as by simple thresholds like the speed of a connection or the availability of a single device. When families need to share devices, for instance, a limited resource winds up spread thinly across individuals’ needs.

  • There is a need to more accurately identify need

Data poverty is two-fold: it is about getting people access to the data (internet service) they need, but on the delivery side, it is also about gathering better data to locate the need. 

While there is a clear willingness to deliver more affordable access and devices to people who need them, there is a distinct gap in evidence about who those people are and what mechanisms lead to digital poverty. Here, again, is a clear role for the Government, which has the ability to signpost to those with a registered disability, jobseekers, those on free school meals, those in poor health, carers, those on low income, and those in receipt of Universal Credit, for example. These have been key vulnerable groups identified during the pandemic; we need to ensure that the pipeline of information from government to service delivery stays open and that existing channels to these people are shared between government departments so that people’s entire needs are met.

  • Where people have access is as important as other factors

It is easy to overlook the important qualitative differences in access to data that contribute to “data poverty.” For example, public internet access points have long been part of strategies for digital inclusion. The Government’s 2017 Digital Strategy called libraries the “go-to providers” of digital inclusion, and public libraries are, in fact, vitally important access points for people living in data poverty. (My own research with colleagues at the University of Oxford showed that 29% of library computer users in Oxfordshire had neither computers nor internet access at home.)

But public access is not qualitatively the same as access at home, and public wifi cannot be considered an adequate solution for people to be digitally included. Not only do people who rely on public wifi have fewer opportunities to acquire and practise digital skills, but they can also be subjected to more surveillance and tracking on public networks. Certain tasks, like attending court hearings and online banking, are more difficult and risky in public internet spaces — and it is often marginalised people who are forced to conduct their private (online) lives in public. Therefore, priority must be placed on at-home or mobile internet suitable to individuals’ needs.

I think at least one further point deserves attention in a discussion of digital poverty. This is the related, downstream impact of data poverty on further digital exclusion. In particular, this is the problem of people living in data poverty becoming “missing data.” One attendee mentioned in the Zoom chat that many people are unable to prove their identity to digital ID systems. (This was a criticism levelled by the National Audit Office at the Verify system for Universal Credit.) The issue of datafied invisibility is a nuanced aspect of data poverty: people become increasingly invisible to digital systems when they do not leave data trails, and they cannot leave data trails when they cannot access or afford the internet.

Avoiding these feedback loops in which the poor have inadequate access to the internet and are further penalised for their inadequate access — by high utility bills, targeted scams, and failed credit checks, etc. — should be of paramount concern to society, the business sector, and certainly to Government.

These and other issues related to digital poverty along with policy recommendations that have emerged from the #DPIS21 meetings will inform a forthcoming Digital Poverty Evidence Review 2022 for the Digital Poverty Alliance. Read the interim report here.

This roundtable was hosted by the APPG for Data Poverty, in collaboration with the APPG Digital Skills, APPG PICTFOR and supported by the Digital Poverty Alliance.


Day 3: Research and Development – How Can the Tech Sector Drive Innovation in the UK Economy and Help Close the Digital Divide?

Both the title and discussion of yesterday’s instalment of the Digital Poverty and Inequalities Summit left open the question of the relationship between tech innovation and the digital divide: is the question whether it is possible for the tech sector to both drive innovation and close the digital divide (i.e. are these ambitions at odds with one another)? Or, is it whether tech sector-driven innovations in the UK economy could possibly close the digital divide (i.e. is innovation the answer to inequality)? 

Depending on how one interprets the question, there are two potential debates and two sets of policy recommendations that might emerge from the provocation. The November 17th roundtable was hosted by the APPG PICTFOR and supported by a cross-party group of MPs and the Digital Poverty Alliance. Speakers included MPs from both parties and a representative from the Telecoms Supply Chain Diversification Advisory Council, and there were also many contributions from attendees. One of the invited speakers framed the discussion by asking, “what can the tech sector do?” The speaker pointed out that this marked a departure from asking — as is often the case in parliamentary circles — “what can Government do?” 

And it is certainly a critical question. What can the tech sector do? To put it succinctly: arguably, the tech sector has done a lot. And, arguably, it could do a great deal more.

During the pandemic, collaboration between the tech sector, local charities, and Government helped mitigate some of the severe disparities in digital access and skills that were damaging people’s lives. I mentioned a number of these programmes in the blog about #DPIS21 Day 2 on Data Poverty — from device donation schemes to free data packages. Roundtable speakers also brought up the many digital skills bootcamps and apprenticeship programmes spearheaded by companies — Barclays Digital Eagles, Lloyds Bank Academy, Google Garage, and the Amazon apprenticeship scheme. The tech sector is also a major sponsor of digital inclusion initiatives more broadly — from research conducted by charities to afterschool code clubs to APPGs themselves. However, this smattering of fragmented interventions can result in incomplete user journeys, riddled with too many opportunities for vulnerable people to slip through the cracks. Still, it is clear that the tech sector is doing a lot.

It can also do more. One speaker described the “interdependence of innovation and closing the digital divide.” Transformative innovation is contingent on digital and social equity. This means access and accessibility — not just to connections and devices but to the tech sector itself. According to the Wise Campaign, just 16.7 percent of ICT professionals are women. Tech Nation reports that women hold only 22 percent of tech directorships. And a 2017 report by PwC finds that just 3 percent of women say that technology would be their first choice for a career. There is also a 20-point gap between men and women who study STEM in school. These figures point to a societal responsibility across all sectors — and especially those that benefit and create profit from the digital world — to address the systemic inequalities that make the digital world unfair and uncomfortable for many marginalised people and also make it hard for marginalised people to participate in building that world.

Ultimately, there were two questions to address at the roundtable and two resulting categories of themes that emerged:

Driving Innovation

The discussion on innovation centred on education and skills. Industry needs a more digitally capable workforce and stronger tech skills coming out of formal education. In fact, digital skills are needed across all sectors, with at least 82% of online advertised openings across the UK requiring digital skills and paying around 29% more than those that do not. Beyond technical competencies, one speaker pointed out that a future workforce also needs to be adaptable, as the tech landscape changes constantly. 

There were strong resonances in this part of the discussion with themes from the roundtable on capabilities, and the issue of adaptability points to the need for creativity and abstract thinking skills alongside technical competences.

In addition, speakers mentioned the need for diversity in the tech sector, articulating a desire to encourage young people from underrepresented backgrounds to consider tech careers. Not only is the participation of women, non-binary, and BAME individuals critical to achieving social equality, but their leadership in the sector can also help ensure products and services meet the needs of the whole population. 

However, the conversation stopped short of fully engaging with the question of digital exclusion and the negative feedback loop between digital poverty and employment prospects. The Nominet Digital Youth Index reports that “Tech jobs are least appealing to those most impacted by inadequate tech,” with men and those on higher incomes more likely to consider tech a viable career. Motivation was not mentioned, but it is also key here. A lack of interest in technology or the tech sector can be rooted in many intersectional factors contributing to digital and social exclusion — including negative experiences online like harassment and bullying. According to the same 2017 PwC survey cited above, 83 percent of young women said that they actively look for employers that prioritise diversity, equality, and inclusion.

The discussion highlighted the importance of focusing on the small — local and regional success stories, and the role of small startup companies in the tech ecosystem. Supporting Combined Authorities that drive innovation in their regions as well as small businesses can not only open up opportunities for innovation but also encourage workers to consider working locally and in smaller companies.

Finally, the hunger and need for collaboration across sectors (including Government) and internationally emerged as a prominent theme. The digital economy is a global one, so it will be vital to learn lessons from other countries and build bridges beyond borders at a time when Britain is having to renegotiate its relationship with even its closest economic partners.

Closing the Digital Divide

On closing the digital divide, the roundtable discussion focussed mainly on infrastructure to deliver connectivity. In 2021 it is unacceptable that parts of the UK are entirely without internet connections, particularly in rural areas. Recommendations on this topic included the need for the telecom sector to be completely transparent about where there is market failure (that is, an area that is not commercially viable to connect) so that Government can step in or assist. 

And, as one speaker put it, the policy cannot be “connect and forget.” Connectivity must come with long-term, community-embedded digital and social inclusion in the form of robust digital education in schools and local resources on digital skills.

The rural-urban digital divide is still an important consideration in the UK, where of the roughly 2% of properties in England unable to get even 10 Mbit/s connections, over 50% are rural. Although it did not get a mention at the roundtable, Government initiatives like the Rural Gigabit Voucher programme have helped telecom operators extend coverage to harder-to-reach areas, including small and community-owned internet service providers (ISPs). For the last several years, I have done research in rural communities that are working to get internet connections, and they often face bureaucratic barriers (the process of applying for vouchers requires whole departments for many ISPs) or severe delays (when local councils give a tender to a provider that will not build within the year). Despite infrastructure sharing regulations that allow multiple operators to use existing passive networks, another issue in infrastructure rollout is overbuild, where telecom companies install more infrastructure where it already exists rather than extending infrastructure to new areas. These are important issues at the intersection of the tech sector and Government, which deserve discussion in a forum on the role of industry in closing the digital divide.

There is a tendency for conversations about the tech industry to veer toward what academics call “technological solutionism,” meaning that technology is seen as the answer to social problems. Forums like these throw up an important question, as the tech sector steps up to fill some gaps in digital inclusion: is tech solutionism inevitable when we leave the solutions to the tech sector? Almost in response to this unspoken question, a final big theme from the roundtable was the role of Government. Echoing the first two days of the Summit, discussions pointed to the need for Government to set a clear agenda and to help the tech sector with the kind of social transformation — of education, for instance — needed to address both inclusion and innovation. 

In my view, the conversation skirted some of the most pressing issues concerning the tech sector’s role and responsibility in relation to the digital divide (which encompasses many more issues of exclusion beyond connectivity alone). For example, there is the issue of technology design — and the need to centre the experiences of disabled users, second-language speakers, the elderly, cognitive diversity, and more. There is also the issue of how the tech sector contributes to deepening disadvantage for some people — through surveillance and risk profiling, for instance. And there is the role of the tech sector in mitigating online harms — including not only the content people access online but also how their data is extracted and repurposed. 

Of course, the tech sector is a broad category that could conceivably include everything from online platforms or telecom companies to hardware manufacturers or infrastructure suppliers. It is a challenge to unpack the role of such a diverse sector, let alone in a single roundtable. By the end of the discussion, though, everyone seemed to agree on one thing: technology is likely part of the solution to the digital divide, but it is certainly not all of it. 

“We all want to help,” said the final speaker, an attendee representing a tech SME. There is an unmistakable drive within the tech sector to close the digital divide and end digital poverty; we need a collaborative and critical cross-sector community to accomplish it. This is a space that the Digital Poverty Alliance hopes to occupy, as a convenor of dialogue and collaborations. As a member of the Digital Poverty Alliance community, I see these roundtables as crucial starting points for updating the agenda around digital poverty, and the recommendations and gaps that emerge will inform the UK Digital Poverty Evidence Review 2022. 

Read the interim evidence review here.

This roundtable was hosted by the APPG PICTFOR, in collaboration with the APPG Digital Skills, APPG Data Poverty and supported by the Digital Poverty Alliance.


Day 4: Education and the Digital Divide

“This is about the new normal,” declared a teachers’ union member at yesterday’s Digital Poverty and Inequalities Summit, which tackled the issue of education and the digital divide. The comment succinctly captured a chorus of personal experience and insight that reverberated with real feeling through the discussion. As the title of the roundtable itself suggested, this “new normal” arguably encompasses both the reality of blended online and offline learning that will endure beyond the COVID-19 pandemic and the realisation of the profound digital inequalities that are exacerbating an education gap for already-disadvantaged students. 

The discussion on education rather fittingly focused on what we could learn from the pandemic moment to inform a more digitally and educationally equitable future. Speakers universally shared a concern and commitment to apply lessons about what worked and what failed to future strategic planning about technology in education. As one speaker put it, the worry is that because this period has been so challenging, educators will now “walk away and just say ‘thank goodness’.” 

But none of the roundtable contributors seemed inclined to walk away. Speakers included three former Secretaries of State for Education or Children, MPs chairing other APPGs for Social Mobility and Education Technology, the Shadow Minister for Schools, the General Secretaries of the NASUWT and NEU, senior representatives of the National Association of Head Teachers, Ofsted, Teach First, UNICEF, BESA, the Learning Foundation, Times Higher Education, and Digital Unite. Several speakers recounted first-hand experiences of families asking for help accessing devices and connectivity during lockdowns — and many receiving it through schemes like the Department for Education’s Get Help With Technology programme. And there was much praise for teachers and schools, as well as community initiatives, like local football clubs, that stepped up to provide digital resources to children in need. 

It was clear that the pandemic exposed the scale of a longstanding problem: today, digital exclusion is a key contributor to social disadvantage. According to a report by the Sutton Trust, in the first week of the January 2021 lockdown, just 10 percent of teachers said their students had adequate access to a device for remote learning. And Ofcom estimates that more than 1.7 million children do not have access to a laptop, desktop, or tablet at home. 

And the disparities were greatest for the most disadvantaged; a UCL survey found that one in five children receiving free school meals had no computer access at home. A survey by Teach First reported that 84 percent of schools with the poorest students did not have enough devices and internet access to ensure they could keep learning.

In considering how we learn from the crisis and adapt to a new normal, several forward-looking themes emerged over the course of the discussion:

Teachers need support and training to make the most of digital technologies for learning.

“Technology is a tool, not an end in itself” was a repeated refrain in the roundtable. Strategic thinking around a digital education needs to focus on how teachers and technology can work together to deliver a better education — which also means a fairer and more equitable educational experience. There were many anecdotal lessons learned during the pandemic about best practice in online and hybrid learning. For example, one speaker pointed out that “there was a quiet accrual of more mundane uses of technology,” citing online vocabulary quizzes for foreign languages as an example. Although the “digital classroom” often conjures images of smart whiteboards and virtual reality headsets, there are fairly simple digital tools available to teachers that are under-utilised for engaging students in traditional classroom settings.

But teachers need training to make the most of digital technologies. Several speakers were part of the education system when information technology (IT) was a new frontier, and one recalled how “tech was used by some and feared by others,” which led to different learning experiences for students in the classroom. Many nodding heads in my Zoom grid seemed to indicate that this is still a relevant issue. Another speaker pointed out that young aspiring teachers are often assumed to have digital skills, and as a result, digital skills are not included in teacher training. But it will be crucial to develop pedagogy around online and hybrid learning, with a distinct focus on how to integrate digital literacies and technologies into teaching. Speakers raised open questions, such as “what is tech good at, and what are people good at, and how can they work together?” Or, “when is face-to-face teaching essential and when could online learning be more effective?” 

I would venture to suggest that behind these important questions about best practice and pedagogy is a need for immediate research on learning experiences during the pandemic with the people who delivered them: teachers. This research must include deep, thoughtful qualitative insights in order to develop better teacher training and equip teachers with strategies that work, and it needs to be done now — while the learning is fresh.

Education extends into the home.

The digital divide in education reflects a societal divide, and we cannot fix one without addressing the other. Schools are often expected to compensate for a lack of support at home for children — they are meant to be great levellers. But speaker after speaker pointed out how schools cannot do this levelling alone. There is an educational continuum between school and the home and community, so thinking about education means thinking about all of these domains at once. 

The pandemic blurred the lines between school and home, drawing attention to the ways in which different private environments impact learning. For example, some children have quiet, private spaces to study, while others have to share devices and space, contending with constant distractions and demands on their time and attention. Roundtable speakers pointed out that this has always been the case; online learning during the pandemic just made these differences more obvious. 

As Alicia Blum-Ross and Sonia Livingstone write in their book based on survey data and qualitative interviews, Parenting for a Digital Future, “although both better-off and poorer parents try to use technology to confer advantage, they are very differently positioned to do so.” Socio-economic differences are especially pronounced in the home, where children are influenced by the dynamics of family and space. One speaker recounted how some parents on low incomes needed to borrow their children’s devices during the pandemic in order to work or search for jobs. 

And digital skills are also an issue among family members. “We didn’t train the parents,” one former Secretary of State for Education said, and this was a major oversight in the rollout of IT in schools. Motivation to engage with the digital world has a lot to do with context, others pointed out. After all, we know from national surveys, including Ofcom and Lloyds Bank, that people are most comfortable learning and asking for help with digital skills from people they trust, like friends and family. And with a reported increase of nearly 34 percent in homeschooling since last year, addressing the digital divide in education cannot just stop at school gates; it has to extend to parents, who need access to free, lifelong digital skills training.  

We tend to focus on the digital divide, but technology offers opportunities, too.

The expansion of digitisation and digital technologies in schools has worsened inequality for many disadvantaged students, but speakers also painted a more optimistic picture about how technology offers opportunities to make education fairer and more inclusive. Digital technologies can help to engage students with different learning styles and needs, and they can also enable students to learn in more individualised ways than would be possible in a traditional, analogue classroom. The potential to adapt course material to different ability levels offers exciting possibilities for education that meets students where they are and accommodates cognitive diversity.

In addition, digital technologies can help improve teacher productivity and enable teachers to more effectively share knowledge. Despite an acknowledgement that teachers worked harder during the pandemic in a hybrid format than perhaps ever before, several speakers mentioned the role of technology in potentially reducing teacher workload by streamlining administrative tasks, including assessments. One learning from the pandemic was that online options for some educational engagements can be equalising; online parents’ evenings allowed some working parents to engage with teachers for the first time because they could do so from home, rather than traveling to the school. 

There was also enthusiasm for innovations that could lead to what we might call the “datafied classroom” — the use of data collection and analytics to influence student outcomes. One speaker mentioned the potential of machine learning to track students’ performance in class to help identify individual learning challenges that would otherwise go unseen. Teachers could be notified by digital systems if students are struggling or bored. “This is the direction we should be moving in,” the speaker said, adding that down the line there is the potential that a young person’s progress could be constantly monitored, ultimately replacing the need for exams. “That’s not a threat; it’s an opportunity.”

Listening to this roundtable discussion, I was surprised to hear such unmitigated optimism about using datafied predictions in education, especially following the highly controversial Ofqual algorithm that predicted students’ A-level results in 2020 and demonstrated biases that devastated many students’ university prospects and prompted public protests. Any discussion of student data and algorithmic processes in education should include at least a nod toward the equality and privacy implications of such an extensive proposed regime of surveillance and assessment. The Ada Lovelace Institute last year published a blog outlining what safeguards should be in place following the Ofqual debacle, and has also published resources on algorithmic accountability that can inform public policy. Although, as this theme in the discussion highlights, there are opportunities for technology to improve classroom experiences, at this stage no technological solution should be posited without critical reflection on potential harms and downstream impacts on inequality.

We need to involve children in decisions about digital education and tools.

The final and perhaps most important theme of the roundtable was on “learning from the experts,” as one speaker put it. The experts, in this case, are children and teachers themselves. Taking a children’s rights approach to education and the digital divide means not only addressing the whole spectrum of children’s wellbeing in education (from access to devices to critical thinking skills for dealing with the digital world) but also consulting children in the design and deployment of technologies for learning. Designing technologies with, and not just for, children can result in better digital consent policies and more inclusive, accessible tools that meet the needs of people with physical or cognitive disabilities, language barriers, and more.

Academic research — by danah boyd and Sonia Livingstone in particular — has long argued for including children as decision-makers in digital policy. And the ICO has issued some guidance on how to engage with children in the design of technology, recognising the importance of user-driven design. Still, the narrative around children often focuses on protection rather than empowerment. But the equitable, fair, and just digital future we want must be built with children’s rights at the core.

Even in an hour and a half-long roundtable, with many distinguished and informed speakers, there were topics left untouched that deserve a mention here. For example, the discussion did not address digital inequality in higher education (a Jisc survey reports that 63% of higher education students had problems with wifi connectivity, mobile data costs, or access to suitable devices and spaces to study during the pandemic). Nor did it engage with the role of algorithms and big data in education — which, as scholars Elinor Carmi and Simeon Yates argue, must include education about algorithms and big data. 

To me, the most notable omission was the topic of “EdTech” — technology and platforms marketed specifically for educational settings, which has seen accelerated uptake during the pandemic. The language quizzes mentioned by a speaker (and referenced above in this blog) are an example. In many ways, EdTech is revolutionising learning in positive ways, helping teachers mark work faster and collaborate with colleagues and helping to engage students with multimedia and interactive content. But the adoption of EdTech deserves more circumspection. 

Technologies for learning are often integrated into the classroom without due consideration of children’s data or privacy and the long-term implications for who has power and influence in an educational system (increasingly, power concentrates in the hands of EdTech companies, which build the technologies and capitalise on collecting and analysing student data). EdTech makes a lot of things more convenient, but the tyranny of convenience (as legal scholar and author Tim Wu put it) is that it masks the choices that tech companies are making about how we live, work, learn, and play. The much-debated and -anticipated Online Safety Bill, which holds tech companies accountable for how their products are designed and marketed for young users, does not specifically apply to EdTech. As Sonia Livingstone has written, “Schools have few mechanisms, and insufficient resources, to hold EdTech companies accountable for the processing of children’s data. EdTech providers, on the other hand, have considerable latitude to interpret the law, and to access children in real time learning to test and develop their products.” 

And this is an even bigger issue, now that the digital divide is front-and-centre in our debates about the future of education. Some children — particularly the most disadvantaged — will rely on school-issued digital devices and free digital services and platforms in school and at home. If those devices and platforms are designed to track students’ activities, those students can be perpetually surveilled, entrenching inequalities in surveillance and policing of behaviour for the most marginalised. The issues of the school-home continuum and children’s rights are clearly implicated in the rollout of EdTech in schools, so it needs to be on the agenda for tackling the digital divide.

Acknowledging the interconnectedness of the various issues that arose at the roundtable, speakers championed the goal of working together. The topic of education is a particularly personal one. Speakers regularly remarked on how they were coming to the issue not only as professionals, but also as parents. With the will to learn the lessons of the pandemic, all that remains is to ensure that we engage with the full complexity of those lessons — the triumphs and failures, the visionary innovations and the blind spots. “All the puzzle pieces are there,” said a speaker representing the Digital Poverty Alliance, “they just need to be put together.” 

This roundtable was hosted by the APPG Digital Skills, in collaboration with the APPG Data Poverty and APPG PICTFOR and supported by the Digital Poverty Alliance.


Day 5: Beating the Barriers – Online Safety, Security, and Accessibility

In September 2020 the Government announced a new National Data Strategy, which aspired to “make the UK the safest place in the world to go online.” Safety was at the heart of this strategy for tech innovation and growth, and its legislative manifestation is the draft Online Safety Bill, which sets out a new regulatory regime to tackle harmful content online by placing a duty of care on certain internet service providers that allow users to upload content and search the internet. Online safety, security, and accessibility were the focus of the Digital Poverty and Inequalities Summit on Wednesday, and the bill was centre stage.

Roundtable speakers and contributors included members of the Commons and Lords involved in drafting or evaluating the bill, representatives of Barnardo’s children’s charity, the Children’s Media Centre, TikTok, the Centre for Countering Digital Hate, and the NSPCC to name a few. Unlike the other summit roundtables, this one was distinctly more focused — with a piece of draft legislation in the pipeline, there is a clear goal with potential for impact on how people experience the internet. I was struck by how this fact rendered the discussion more consequential but perhaps less capacious. With the country on the cusp of legislation that would protect people from a panoply of online harms, harmful but elusive issues like inequality, bias, and discrimination received hardly a mention. 

That said, the Online Safety Bill has been heralded as groundbreaking, even revolutionary, with a great deal of potential to set a benchmark that more of the world will follow. Undoubtedly the anticipation around this bill is in part because it is arriving “late” in the evolution of the internet and online platforms. One speaker called it “a good late step.” It is also in part because its present arrival opens up the potential for it to be a repository of our regulatory hopes and dreams about how to make the internet better — to fix what has seemingly gone wrong. But if it is to be effective, the bill must rise above the specific grievances that make it urgent and necessary — to tackle the systemic and system-level issues that underpin the worst abuses online. “If too much is loaded onto this legislation,” one speaker warned, “it will fall under its own weight.”

Although perhaps contributing to that burden, the discussion centred on several issues that speakers hoped the bill would ultimately address:

  • The Online Safety Bill must do more to address the most egregious harms to children, especially exposure to pornography and grooming.

“Childhood lasts a lifetime,” one roundtable speaker remarked. And it was clear that most of the contributors to the discussion viewed the protection of children as a primary concern for the bill. Speakers see the legislation as a chance to achieve what the 2017 Digital Economy Act has failed to do: implement robust age verification for pornographic content and reduce child exposure to sexual content and sexual exploitation, such as grooming. Behind these concerns is a broader anxiety about the long-term social impact that these experiences can have on behaviour and wellbeing. And negative online experiences are arguably a bigger issue, encompassing a whole range of social and socialising experiences. According to The Wireless Report, four out of every ten young people have been subject to online abuse, and 25 percent of young people have received an unwanted sexual message online. Ofcom reports that more than half of 12 to 15 year-olds have had a “negative” experience online, such as bullying, and 95 percent of 12 to 15s who use social media and messaging apps said they felt people were mean or unkind to one another online.

Roundtable contributors also raised the issue of encryption and the potential of end-to-end encryption on social media platforms in particular to hide the activities of child abusers. There are no simple answers to these thorny issues. Encryption can hide illegal or harmful activities, but it can also protect privacy, activism, and free speech. So-called “back doors” that would allow law enforcement to access certain encrypted content also open up the potential for others to exploit those security weaknesses. Although some speakers returned to the “duty of care” outlined in the draft bill to argue that platforms will have to prove that encryption, in combination with other design choices on platforms, is consistent with a duty of care to users, few of the issues that sit at the uncomfortable nexus between safety (or its foil, harm) and security are black-and-white. Flexibility in approach will likely be the bill’s ultimate strength, but it inherently leaves open many questions that people want answers to. Really, what people want is for tech companies to have to answer to them.

  • Ofcom must be adequately supported to take on its new power and responsibility under the bill.

Another theme from the discussion was the need for Ofcom to be resourced effectively to exercise its new powers under the draft legislation and to shoulder its new regulatory responsibility. Indeed, this is a whole new frontier for the regulator presently tasked with overseeing the telecoms market. The Ofcom chief executive has expressed some trepidation about the sheer volume of user complaints the regulator may face and the legal battles likely to be fought with tech companies that fail to comply with the new regulations. Secretary of State for Digital, Culture, Media and Sport Nadine Dorries wants criminal liability for tech company directors, setting Ofcom up for a confrontation with the likes of Mark Zuckerberg. 

Bill supporters at the roundtable were quick to offer reassurance that Ofcom would be equipped to handle its new duties, but it is understandable that questions remain. The multi-billion dollar platforms in the eye of the storm have struggled (and often failed) to handle reported abuses on their own sites, which host billions of users speaking different languages and with different cultural reference points. Critics of big tech will argue (probably rightly) that those failures are largely down to lack of will; harmful content still makes money. But there are other factors, too: an egregious lack of local, contextual knowledge, which is essential for tackling harms that are socially constructed and embedded, and sheer scale. Companies have employed both human moderators and algorithms in an effort to manage the volume of content and complaints, and it is still not enough. Ofcom has reason to be concerned. And therefore, the bill’s drafters do, too. 

I was left reflecting on the important questions we still need to ask about the aspirational outcomes the bill is meant to achieve. Goals like transparency and accountability will be most impactful at the system level in taking companies to task, but what about user empowerment and agency? Big tech might think about users as a stream of data points, but this bill has the potential to treat them like individuals — human beings with a context as well as a complaint — and that would be truly revolutionary. So, to return to this theme from the roundtable, is Ofcom prepared to perform that role? 

  • A legislated approach to online harms must be adaptive and focused on the systems level in order to be future-facing.

The last theme worth drawing out from the roundtable discussion was the issue of future-proofing the bill. “Future-proof” is a common expression in technology development and deployment, but I think it is not quite the right way to frame the concept. It would be better (albeit less catchy) to conceptualise it as “uncertainty-aware.” Alongside the almost universally shared feeling that this bill might be too little, too late in a digital ecosystem that has developed largely without the kind of toothy government regulation that can bite, there was also a palpable feeling in this Zoom call of wanting to get it right this time: getting ahead of the game, rather than playing catch-up later on. 

One roundtable contributor said, “When rules are too prescriptive, they’re easy to get around.” The solution, according to multiple contributors at the roundtable, will be to ensure the bill can be adapted to yet-unanticipated future scenarios. It must comprehensively address and define (to some extent) the dangers of the internet as we know it today, but it must also leave open the possibility that new powers and responsibilities may need to be bestowed on the regulatory process. It is important to recognise that this uncertainty-aware approach is not the child of necessity, born of the digital age. It is how laws are often made (and changed). In fact, one speaker explained that the idea behind the bill is not to do something radically new but to “level the field between online and other environments.” As media scholars have long argued, while the digital age has ushered in unprecedented technological and societal changes, it is overly sensational to treat it as entirely new and unfamiliar.    

What is difficult, I would argue, in the drafting of this bill is that there are such clear “perpetrators” of harm exacerbation and perpetuation: digital platform companies (Facebook and Google, for instance). This is what happens when we outsource our democracy to undemocratic companies in Silicon Valley, one speaker said. They are in our mind’s eye when we think about how to make this law work. And that is helpful on the one hand because it can concretise certain concepts and terminology in an effort to close loopholes for the companies we know need to get their houses in order. But on the other hand, we also somehow need to keep a focus on the bigger picture: tackling online harms requires challenging the underlying logic of the digital economy, which trades on people’s personal data and analyses it without adequate consent in order to manipulate behaviour and generate more profit. At least one speaker made this point: it is not as much about the harmful material online as it is about how that material is surfaced and promoted by algorithmic processes. And this is an important point. As an investigation by The Markup found recently, algorithms on Facebook show some users extreme content not just once but hundreds of times. It is about the content and it is about what makes the content valuable — user attention.

A joint committee held hearings about the Online Safety Bill that ended earlier in November and is set to conclude its report by December 10th and publish shortly after that. It will be interesting to see which aspects of this conversation — and contributions to the hearings — make it into the revised document. 

One theme that has consistently emerged in all of the previous roundtables during the Summit was absent in this one: the social and societal dimensions of online safety. One speaker did mention that there is a continuum between the online and the offline when it comes to harms. But there is a risk that in focusing on defining what constitutes a harm worthy of regulation, we never get to the crucial conversation about the uneven distribution of harms in society — how and why certain harms disproportionately accumulate for certain people. We know, for instance, that there is a gendered dimension to pornographic content and exposure: women, girls, and LGBTQIA+ individuals have faced increased online harassment during the pandemic, and children with an impacting/limiting condition are more likely to experience bullying and other negative interactions online. But issues like accessibility did not feature in the discussion. Many of the harms exacerbated by digital content are socially embedded and conditioned. Therefore, platform regulation must accompany comprehensive sex and relationship education that addresses not only interpersonal communication and interactions online but also media literacy. Our digitally mediated lives are a mirror to norms, behaviours, and inequalities in society more broadly; the capitalisation of data and the algorithmic manipulation of data for commercial ends can turn the mirror into an anamorphic funhouse. A truly systems-level approach to online safety needs to take on systems of oppression and marginalisation both in cyberspace and in society as a whole.

This can only be done with the participation of people in the processes of accountability outlined in the bill. People need to be empowered not only to report harms but to define what harms are (right now, the draft bill leaves the category open to interpretation by the Culture Secretary, Ofcom, and Parliament in consultation with one another). And in addition to algorithmic transparency and accountability to a regulator, there must be transparency to the citizen-user in the form of meaningful consent regimes that give people more actual control over their data and reporting regimes that make people feel like the harms they have experienced are real, legitimate, and actionable. Legislation wields the semantic power to define certain terms and relationships, like user and harm. Tech companies have built digital spaces that define us (users) as consumers first and foremost. The law has an obligation to reassert our citizenship, instead.

This roundtable was hosted by the APPG Digital Skills, in collaboration with the APPG Data Poverty and APPG PICTFOR and supported by the Digital Poverty Alliance.

Rethinking Digital Skills in the Era of Compulsory Computing: Methods, Measurement, Policy, and Theory

Around the world, digital platforms have become the first – or only – option for many everyday activities. The United Kingdom, for instance, is implementing a ‘digital-by-default’ e-government agenda, which has steadily digitized vital services such as taxes, pensions, and welfare. This pervasive digitization marks an important shift in the relationship between society and computing; people are compelled to use computers and the internet in order to accomplish basic tasks. We suggest that this era of compulsory computing demands new ways of measuring and theorizing about digital skills, which remain a crucial dimension of the digital divide. In this article, we re-examine the theory and measurement of digital skills, making three contributions to the understanding of how digital skills are encountered, acquired, and conceptualized. First, we introduce a new methodology to research skills: participant-observation of novices in the process of learning new skills along with interviews with the people who help them. Our ethnographically informed method leads us to a second contribution: a different theory of skills, which identifies three primary characteristics: (1) sequence, (2) simultaneity, and, most importantly, (3) path abstraction. Third, we argue that these characteristics suggest the need to change current ways skills are measured, and we also discuss the policy implications of this empirically informed theory.

The whole article is available open access: https://www.tandfonline.com/doi/full/10.1080/1369118X.2021.1874475

My interview on the Critical Future Tech podcast

I really enjoyed this conversation with podcast host Lawrence Almeida on the Critical Future Tech podcast (although I don’t enjoy the sound of my own voice enough to listen all the way back through it!). Click on the button below to listen, or follow this link: https://criticalfuture.tech/issue-10-august-2021-kira-allmann-0242ac130003/

TRANSCRIPT

Lawrence – Welcome! Today we have the pleasure of talking with Dr. Kira Allmann. 

Kira is a post-doctoral research fellow in Media, Law and Policy at the Oxford Centre for Socio-Legal Studies. Her research focuses on digital inequality: how the digitalization of our everyday lives is leaving people behind, and what communities are doing to resist and reimagine our digital futures at a local, grassroots level. 

Kira, welcome to Critical Future Tech.

Kira – Thank you so much for having me. It’s a pleasure to be here.

L I’m really happy to have you for this tenth edition of Critical Future, which is a project that aims to ignite critical thought towards technology’s impact in our lives. 

I am passionate about the positive impact of technology, but also I’m equally obsessed with the potential negative side effects that it can bring, right? And you are someone that clearly has a lot of interest in understanding and reducing digital marginalization. And I realized that when I read the Digital Exclusion Report that you did for the Oxfordshire county libraries, right? 

Before we get into all the topics that I want to go through with you, I want to just talk a little bit about what may be a digital divide by going through the story that you have in that report. 

For the listeners, the report starts with a small story. 

“A man that approaches a staff member of a public library. And the staff member is kind of swamped in customer help requests here and there. That man asks for a phone charger. Not a power outlet, right? A phone charger. And the staff member says they don’t provide those for customers at which point the man says that he’s actually homeless and he has no way of charging his phone. He’s asking for that help ’cause he wants to charge his phone for a bit. So the staff member realizes that this isn’t your regular digital help request and ultimately they’re able to find a charger for that man, which allows him to charge his phone.” 

So you volunteered as a digital helper for that library, right? And what I want to ask you is: was that the moment that made you become interested or that made you sensitive towards this sort of digital divide? Was that the first time or were you subject to that before that?

K That’s such an interesting question, thank you for that. It actually was not the precise moment that got me interested in the role that libraries were playing in bridging the digital divide. It was actually, remarkably, one of many such moments that I had experienced. 

I started volunteering at the library in part, because I did have a broad awareness of the digital divide in the UK. It was the focus of the research that I was just starting actually at that time in my postdoctoral research fellowship on digital inequality. And really, I just kind of wanted to give back. 

When I set out to volunteer in the library, I didn’t actually have any intention for it to turn into a research project or a collaboration with the county council library at all. It was really just something I wanted to do for the community. But it became really apparent that from day one – and I unfortunately can’t remember the specific scenes I saw on day one – it became really apparent that this was actually a really important site for observing the lived experience of digital exclusion on the ground. 

In talking with fellow digital helper volunteers, other people who were doing the same kind of volunteering that I was doing, and also the library staff, I also learned that it was just really difficult for the library to keep track of, document, or collect data on the really vital work they were doing to help people like the man that I described in the opening scene, given how thinly spread they were on the ground. 

So I thought: I had access to the amazing resources of a great university institution, and if I could somehow put those resources toward helping the library get a bit better data on the work they were doing, and toward spotlighting what was happening on the ground, then that seemed like a really good use of those university resources. 

So that’s actually how the project came about, through constant conversation with the library staff members that I was working with everyday. 

But to return to your original question, that was really just one of many scenes that I observed as a digital helper in the library. Certainly not necessarily the first or only one that made me think differently about where we should be studying the digital divide.

“The digital divide is actually a very complex concept that is very important because it has become a key contributor to inequality.”

L Awesome. That intro showed that digital divide can be manifested in many ways. So I’m going to ask you, can you tell us what is the digital divide?

K Well actually it is a little bit difficult to pinpoint a single definition of the digital divide. 

I think that when most people use the term in a kind of colloquial everyday conversation, what people have in their minds is the gap between people who have access to the internet and maybe internet connected devices like computers and smartphones, and those who don’t have that access. That’s kind of the simplistic “haves and have nots” kind of dichotomy. That’s the basic idea that a lot of people have in their minds. 

But the digital divide as you’ve rightly pointed out is a lot more complex and nuanced than that and to call it “the” digital divide is probably a little bit misleading, but we all do it, I do it as well. 

There are actually quite a lot of intersecting overlapping compounding divides that have a digital component to them. 

Let me start by just quite simply explaining how scholars think about the digital divide. 
Scholars, basically, have stated that there are three levels of the digital divide. 

The first level being the one I just articulated, which is a divide between those who have and don’t have access to the internet. 

The second level is more of a divide in skills and literacy. This is basically saying you may have access to the internet, but you may not actually be able to use those resources to their fullest capacity because you just don’t have the knowledge of how to use them. And obviously there are many layers of skills and literacy that might come into play on that level, the second level. 

The third level is really on outcomes. How do you take your access and your skills and literacy and turn them into meaningful, positive outcomes in your life. Meaning maybe attaining greater educational opportunities or greater economic gain. 

Those three levels are kind of broadly what scholars talk about when they talk about the divide, but even that is a little flattening at times, because drawing those clear dividing lines between the levels is often very difficult. They all intersect with one another and affect one another in various ways. And of course, within each of those levels, there are a lot of nuances and differentiations. 

Also the experience of being digitally excluded is often compounded by other forms of inequality. Things like linguistic inequality, racial inequality, gender inequality, socioeconomic inequality. All of these kinds of what we might call quite simplistically, offline inequalities, compound and affect people’s access to digital resources like the internet and digital devices, but also how they use them and what kinds of experiences they may have online, let’s say when they do get online. 

So basically the digital divide is actually a very complex concept that is very important because it has become a key contributor to inequality. If you’re interested in inequality, digital is a space that we all need to be looking at all the time. And to relegate it actually to just the issue of internet access, for instance, is really kind of an oversimplification.

L Yeah, but that’s the most visible that you can go for. Especially since the pandemic where everyone is remote there were a lot of cases in the U.S., in Europe, places where you would think everyone has access to stable, reliable internet, where that’s not really the case. 

And that is also one of the things that I read when researching some of your work on rural areas and how they can be impacted and even how they can overcome that with the example of the community-led internet that has fiber optics, that is really an incredible story. 

One thing that you mentioned when I first heard your talk was: I can have a reliable internet connection, but because I don’t have a high income I don’t have a Mac or I don’t even have a computer. The only thing that I have is my mom’s smartphone. 

That was very interesting because there’s this belief that any youngster is literate, that they can all work with Excel and do spreadsheets and so on. And that’s not really the case, because of that example that you gave. 

That was for me, very interesting, because that is also a way of divide, right? Again, you lack the hardware in this case to learn and when you arrive to the marketplace, you’re actually at a disadvantage towards other people that have had the experience of using say, you know, like a spreadsheet software or something like that.

K Absolutely. And actually that was something that I observed and that was told to me in various interviews during the library project as well. 

This issue of making assumptions, for instance, about what kind of people will have access to what kinds of devices and you spotlighted two key assumptions that often permeate expectations about the digital divide. 

One is that, basically, wealthier countries like European countries and the United States don’t have a digital divide problem because the internet is ubiquitous. This is an assumption that is definitely false as the pandemic has actually quite starkly revealed. And another assumption is that young people are “digital natives” which is a term that I think has been thoroughly critiqued and debunked by other fantastic scholars and policymakers. But it’s this idea that basically young people kind of grow up around technology, so they won’t have any deficiencies in terms of digital literacy or access. They’ll be absolutely fluent in things like Excel like you mentioned. They’ll be fluent in smartphones, laptops, iPads, everything. 

The reality is that that just isn’t true. What you see in a place like public libraries is a lot of kids coming in, for instance, who only have access to a smartphone. And when it comes to, say, printing off a document that they need, maybe a payslip or something like that, they really don’t know how to use even a keyboard and a mouse. And this was something I heard from a lot of staff members: that many of the students they were dealing with were pretty flummoxed by the setup of a desktop computer. 

Even things like entering passwords, for instance, into a desktop version of a platform like Gmail. Because a lot of us actually rely on saved passwords and fingerprint ID and things like this on smartphones, we don’t retain a memory of what our passwords are and when we suddenly have to enter it on a different platform, we get locked out. 

This is something you see a lot, especially among young people who really only have single device literacy. That’s something that I tried to highlight a little bit in the library report, and I’ve certainly brought it up in other forms as well around education and digital inequality, because it tends to be kind of an invisible form of digital inequality, largely because of those assumptions that people make about certain demographic groups.

L Single-device literacy is an interesting term, and it takes me to another idea: the ecosystem of platforms and systems that you interact with — even just on a smartphone, if that’s the only thing you’ve got — is becoming more and more reduced.
For instance, in some countries, Facebook basically is the internet, you know? That’s where you search, that’s where you read about the things others share. The same ecosystem exists when you have packages where, for X euros or pounds, you get access to Facebook, Instagram, Spotify, and a couple of other things with unlimited data. So you are going to navigate that universe almost exclusively, but not, say, Wikipedia articles, which will use your data plan and which you will then pay for.

K Yeah, you’re absolutely right, and the term I usually apply to this phenomenon you’re describing, this kind of echo chamber, is proprietary literacy.

What I basically mean is that a lot of users whose limited access comes through, say, a platform like Facebook become very fluent in that platform and that company’s toolkit, but nothing really beyond that company’s toolkit.

So another good example of this (well, not great in the sense of positive, it’s just a good example to further illustrate the point) is the prevalence of Google Classroom in schools that are under-connected to the internet, where schools can’t afford devices and internet access, or have limited connectivity for various reasons.

Google has stepped in to help provide tools for students to get online and develop skills, but usually these students then only have access to the Google suite of software, and even Google hardware like Google Chromebooks. What happens is that those students wind up growing up really familiar with Google and not that comfortable, not that fluent, in other platforms, other proprietary software, and other kinds of hardware.

I’ve spoken to teachers in rural schools that are members of the Google Classroom program who say that their students basically only want to use Chromebooks and that when they have the opportunity to get a device for the first time, what they want is that Google device and it’s not surprising because the devices that they have access to in the school are exclusively Google products. 

And so that, I would argue, is also a very limited form of digital literacy. This platform literacy, or proprietary literacy, is quite narrow.

“If we want that imaginative space to be open, it’s best to cultivate literacy in a wide range of platforms and devices and also to think about digital use less as an issue of consumption than it is an issue of participation.”

L That is very interesting. I don’t want to get into monopoly or antitrust thoughts right now, but my question is: say you have a device like the Google Chromebook, you use all of Google’s apps, Chrome and so on, and all of that allows you to interact with society, right? You’re able to pay your taxes, to consult anything you may want, to work and communicate, and you’re able to do all of that in Google’s ecosystem. What’s the problem with that?

What is the problem with being locked into that ecosystem? Or do you see any problem at all, if that person can live a digitally included life?

K Arguably this phenomenon is not new. Throughout the history of technology there tend to be dominant technologies that lots of people buy into, and they become more fluent and literate in the one they know. I remember, for instance, that my school bought a lot of Apple products when I was a kid, and so I was a lot more comfortable with Apple products because that was what I had.

It’s not necessarily a new phenomenon, but I think there is a reason to be critical about it, to stick with that theme. That’s because we do live in a much more diverse digital space than a monopolistic one: there are lots of different products out there, lots of different companies competing. And arguably we want to live in an innovative, dynamic future in which new ideas are generated, and there will be new companies, new products, and maybe even alternative ownership models for platforms.

If we want that imaginative space to be open it’s best, I think, to cultivate literacy in a wide range of platforms and devices and also to think about digital use less as an issue of consumption than it is an issue of participation. 

The thing about having proprietary literacy as the predominant form of literacy, especially for digitally excluded communities – the communities that have limited access – is that these users are really being cultivated as future consumers of products. They’re being motivated, nudged, to buy products produced by a particular company.

You may have various views on the usefulness or social value of that, but arguably it could reduce competition in the long run, and it also views children, the student users of these platforms, as consumers first and citizens second.

I would suggest that that isn’t really encouraging the kind of diversity and dynamic thinking that we need in terms of building a more inclusive digital future in the long run.

L Thank you. That’s a great answer, and it touches on something I want to talk about a little later, which is Critical Tech Literacy. We’re hinting a lot at being critical of things even when they’re great to use, like Apple and Google products. And by the way, Apple is another company that’s very keen on having a foothold in education.

So, talking about the digital divide: we understand that it’s a complex issue that manifests in different ways.

I am a technologist, a software engineer. I build products online for users around the world, and I already know about some things that can contribute to digital exclusion: if my product is English-only, or requires a fast connection, and you can’t meet those requirements, then my product doesn’t work for you and I’m excluding you.

Those sorts of things are known to the more attentive technologists, so my question is: what are some other things that can hint at digital exclusion? Putting aside the obvious hurdles I just mentioned, what should I be on the lookout for, things I may not be aware of as I’m building new digital products, so that I can anticipate them and incorporate them into my solutions?

K Of course, it’s very difficult to anticipate what a more inclusive build will be without talking to users.

I’m an anthropologist, so I always believe the best way to get a sense of what’s actually happening on the ground in people’s real lives is to observe them in their everyday lives, doing ordinary things. It tends to be very revealing. And this is slightly different from arguing for something like user-driven design, which I also think is a very important aspect of design development.

But what you’re asking is: how do you undercut your own assumptions? And that’s very difficult because it’s very hard for all of us to be so self-aware that we can be conscious of our own assumptions that we build into our technologies. 

Usually the best way to do that is to step out of our own perspective and occupy somebody else’s perspective for a while. 

I can give an example of this from a conversation I had with a library staff member, actually in Oxfordshire libraries, who runs tablet and smartphone sessions mostly for pensioners — for elderly folks — in the community. He was saying there are all these symbols, especially on tablets and smartphones, used to navigate around menus, that a lot of older folks just don’t really understand. They can functionally touch things, and they know that an application will open if you touch this thing, but some things are just not intuitive to a certain generation.

For instance how on earth would you know that a little circle with a line coming out of it is a magnifying glass, and that means “search”? I tend to refer to this as the visual vernacular of platforms or apps. 

There are a lot of things that we have intuitively come to understand as users of digital technology that aren’t necessarily universal. The three lines that indicate a menu – that you can expand into a menu – a lot of people find confusing. A lot of older folks don’t see a camera app icon as a camera. It doesn’t look like a camera to them; it’s a circle inside a square, and they say things like “how is this a camera?”

“The issue is that digital inclusion isn’t a switch that just gets turned on at some point and then it’s always on. It’s actually more of a process where people can fall in and out of being included over the course of their lifetimes.”

L To be honest I threw that question out there not expecting a bullet list of things. 

The first thing, of course, is to be aware that your users may have needs that your product doesn’t account for. Understand your users; understand for whom you’re building the product or service. Talking with them is essential.

Right now you were talking about the icons and it’s funny because sometimes I’ll be prototyping some interface and I’m like: “all right I need a search icon here”. So I go on this website that gives me a lot of free and paid icons and I just type “search” and I have a lot of magnifying glass icons, you know? 

So there is this notion that “that is a search icon”, you know? At least for web developers and designers. If I say to a designer colleague “put a search icon here”, he’s not going to put anything else there. And it’s interesting that some groups may not recognise it at all.

Do you think that will come to an end at some point? Will we have a generation that has interacted so much with those interfaces that the gap narrows itself, because everyone is a bit more of a digital native to some extent? Or is new technology going to come along, like VR or AR glasses, and then our generation will be like “whoa, I cannot reason with this” [laughs]. Do you think that’s going to be the case?

K It’s probably unlikely to be totally eradicated. This problem is very unlikely to totally go away and that’s for a few reasons. 

You highlighted one of them, which is that technology changes all the time, very rapidly. And for a lot of us – especially those of us who have been consistently connected since, let’s say, the beginning of the digital age – it’s hard even to remember when those transitions occurred: when certain icons morphed into other icons, when something became the standard symbol for search, or for save. That’s because the change happens gradually and frequently.

As long as you’re constantly connected, you might experience the change and take it on board, but not necessarily note it. The issue is that digital inclusion isn’t a switch that just gets turned on at some point and then it’s always on. It’s actually more of a process, and people can fall in and out of being included over the course of their lifetimes.

This is very important for understanding why the digital divide is unlikely to naturally close as a function of demographic shifts – the assumption that as young people get older they’ll just remain digitally connected and included, and we’re just not going to have a digital divide anymore.

The reason that’s unlikely to be the case goes back to what we were discussing earlier: the digital divide is actually a function of a lot of compound inequalities. For instance, people may be highly digitally connected when they’re employed, but when they become pensioners they’re on lower incomes. They may be living only off their state pension, and because of that they may decide, “I actually don’t need internet connectivity for the next few months or the next year, because it’s a bit expensive, so I’ll just roll that back.”

And then if you’re offline for a year or two years the digital world does move on in that time and when you come back online a lot of things can be really confusing. 

This is something we can see already. For instance, people who leave school at 16 (you can leave school at 16 in the U.K.), are in and out of employment for a few years, and then get a job that requires digital skills, let’s say in their twenties, will often be very behind in terms of digital literacy. They’ve had that gap of a few years when they weren’t regularly connected, or maybe they only had a smartphone and didn’t do much on a laptop, and all kinds of applications have changed in the meantime.

For instance, for regular Microsoft Word users: sometimes you get an update on Word and you’re like, “where did everything go? I don’t know where anything is anymore.” Just think of that on a much larger scale. If you’re disconnected for a few years due to unemployment or lack of income – life-stage changes, basically – that will continue to affect people as long as inequality continues to affect society.

That’s why the digital divide is unlikely to be really just purely a demographic or a time problem, mainly because people fall in and out of various levels of inclusion over the course of their lifetimes. That’s something that digital designers could certainly be aware of. 

To return to your earlier question about what else designers can be aware of: we talked about the visual digital world, but one other thing I wanted to mention is the importance of simplicity, and how many assumptions go into deciding what is simple for a user.

I know that a big thing in app design and development is intuitive design: this idea that things should be as easy as possible for users. But a lot of times what digitally fluent people like you or I would assume is easy is actually very difficult for users who are digitally excluded or digital novices — they’re coming to devices for the first time. 

Even something like having to create a user account can be a barrier to using a particular platform or application. Requiring somebody to create an email address before they can use your platform adds an additional layer of complication for a user who may desperately need access to what you’ve built, if it’s for something like banking or welfare.

It’s very important to think about what simplicity is to a user and not to you as a designer.

“Critical tech thinking is about applying a critical lens to technology. This is increasingly important because of the fact that the digital world that we encounter today is not a fair one.”

L I could go on about discussions I sometimes have with designers or fellow front-end developers: “No, just put a tooltip that shows up when you hover on it,” and I’m like, “yeah, I like that you’re saving space, but if they don’t know they can hover over that thing and that it has some info there, and they’re not used to your interface, your product, then it doesn’t exist and you’re not helping them.” There are so many stories like that, and I’m going to use this to move to Critical Tech Literacy.

Thinking critically about technology as a whole matters regardless of whether you’re a technologist, like a programmer or a researcher. We all use technology nowadays; it is virtually everywhere, it is eating everything, so it is important that we think about it critically. I’m going to read a quote from one of your slides that I screenshotted, and then we can dive into it a little bit.

“Critical Tech Literacy means cultivating skills to think critically about how we engage with the life critical technologies that have become essential to everyday life. It includes sometimes taking a critical stance towards technologies that perpetuate or create inequality and unfairness in society.” 

So, first I was like, “wow, Critical Tech! That’s the same name!” [laughs] I went and researched it to understand what was out there on this theme, and I mainly found literature on how critical it is for people to be literate in technology, in the sense of: you need it to work, to be competitive, to be productive.

But that’s not really what you’re saying in this sentence, right? The floor is yours to expand on what you mean by Critical Tech Literacy in this case.

K Critical Tech Literacy is actually a term I’ve alighted on, and I’ve started using it only very recently, in that webinar that you attended. And yes, I am using it differently from the literature you described.

What I’m talking about is really kind of blending critical thinking with digital literacy. 

Digital literacy really deals with competencies: can you use technology, and can you use it effectively to achieve your goals – those outcomes that are part of the third level of the divide? That’s digital literacy. It’s a nuanced concept, but it’s been very widely adopted in policy circles.

The critical thinking part is about applying a critical lens to technology. I would argue this is increasingly important because the digital world we encounter today is not a fair one. Especially in recent years, there’s been a lot of excellent scholarship and reporting on the ways bias is built into technology, which should not be surprising, because technology is a social product.

Bias is built into so many things that we use in our everyday lives, there’s no reason we should assume that digital technology is any different. 

But still today, digital literacy is approached – especially in school curricula – as a set of competencies: “How do you deal with digital technology? Are you able to perform certain tasks with it?” And, in its most critical form: “can you keep yourself safe in the digital world?” These are the focuses of digital literacy, especially at the school level.

I think that we really need to move more in the direction of teaching kids to think critically about the technologies they use, how the technologies are built, what biases have been built into them and how to live balanced lives with technology. 

Technology is pervasive and also largely built and marketed by private companies that have an interest in cultivating consumers who will continue to engage with those products in order to create value for the company. What that means in the long run is that sometimes that constant engagement isn’t necessarily in the best interest of the user. 

How do we start thinking critically about the pervasiveness of technology in our everyday lives? 

That’s really what I mean by Critical Tech Literacy. It’s about thinking critically about technology: how do we ensure that the next generation of tech users and designers are thinking about the assumptions built into technology, about their own positionality in relation to technology, and about how technology is a social product?

These are all concepts that are very widespread in academia, and we use all kinds of complicated language to talk about them but they’re concepts that can be translated into a digital literacy program for all ages. They’re not really that complicated in practice and so my argument for Critical Tech Literacy is that we should really take some of these very important conversations that are happening in the academy and make them a lot more widespread.

“If we want the technology marketplace to be dynamic and increasingly fair then we need to prepare students of technology today to be thinking like that.”

L And I’m a hundred percent behind that as you may imagine by having invited you to talk about it. 

I feel that technologists are more and more aware. It may not be as mainstream as we would like, but things are coming out in the mainstream: books like “Weapons of Math Destruction” and documentaries such as “The Social Dilemma”, which explain in very simple terms how technology can be biased and can be used against you. So we should be aware and critical about what we’re building.

One thing that is funny – maybe it’s just my perception – is that when you use the word “critical”, people instantly go: “Wow, you’re going to do destructive criticism. What, you don’t like technology?” And that’s not it. Actually, I love technology. I work in this field. What I don’t want is to contribute to things that will have negative side effects for groups that I may not even be aware are affected, right?

As technology becomes more and more pervasive, it is important that we ask what is going on and don’t just take it in passively.

My worry is that governments or schools or even employers are going to say: “what’s the concrete outcome of that?” How to use a tool, how to navigate the web – that’s understandable: you’re productive, you can get a better job.

But what is the advantage of being critical about technology? How would you get buy-in from a company or a government, and explain that we need Critical Tech Literacy on a more abstract, even existential level, not just a practical one? How could you convince a company’s management team or a government to say, “we need more of this”?

K I think there is really a groundswell right now of increasing awareness, as you said, of how digital technologies can deepen certain social inequalities, and there’s been a bit of a backlash against that.

The debates we’ve seen in Europe and the U.S. around data management and privacy are the tip of the iceberg, and I doubt these issues are going away anytime soon. The debates around things like Clearview AI, the scraping of personal content without consent, what terms and conditions actually mean for users – these are debates that are not going to go away. Companies won’t be able to dodge them, governments won’t be able to dodge them, and the more awareness people generally have, the more these issues will stay on the agenda.

Future technologies, whether they’re built by companies or governments or NGOs or individuals or whatever, are going to have to design their platforms in fairer ways. That’s the direction of travel right now. 

So it is actually very much in the interest of companies, governments, and schools to think about who the next designers of technology are likely to be. Undoubtedly, kids in schools today are growing up with ambitious plans for what technology should look like in the future, because a lot of them are heavy technology users. That’s the reality.

If we want the technology marketplace to be dynamic and increasingly fair – I would argue that that’s a good social goal in and of itself – then we need to prepare students of technology today to be thinking like that. We need to prepare them to be questioning their own assumptions, to be thinking about living in balance with technology so that they can build better products that enable users to have more control over their data. 

And actually, I would also argue that while critical thinking about technology is a kind of abstract, esoteric concept, there are some really concrete aspects to it.

For instance, in that webinar you attended (hosted by the University of the Arts London), in the workshop component we asked participants about their level of confidence with different digital skills. And a lot of people, because this was a very digitally literate crowd, ranked really highly on things like “I can produce a Word document”, “I can search the internet”, “I can even discern quality information from questionable information online”, things like this.

But when it came to things like “I feel I have control over my digital footprint (the data trail that I leave)” – these trickier areas where people feel insecure – the confidence level went way down.

And this was just in a small group of participants in this workshop, but these are very digitally fluent people. When it came to things like, “I feel like I have control over my data”, or “I feel like I can switch off when I want to”, these were things that people ranked pretty low in terms of their confidence. 

Those are things that going forward, people are going to want to have more control over and they’re going to want to do. That’s what Critical Tech Literacy is all about, and that is going to affect the entire economy around technology. And so it’s got to be of interest to companies, governments, and schools, unquestionably.

“Critical in and of itself does not mean you’re always criticizing technology. It really just means developing an awareness and a kind of constant practice of reflection about the role of technology in our personal lives and in society and how technology is shaped by social forces.”

L And I would even just add something which is: on a purely competitive aspect, technology is first functional, right? I can write a document, I can communicate with someone, I can find something that I’m looking for. That’s the functionality part of it. And we all love Google because it’s so great at delivering that functionality. 

And as those needs are fulfilled by the services and products that we use and become acquainted with, we start looking for a sort of higher-order need: “I still want to retain some control over more abstract, higher-level things, such as my privacy and how my data is shared.”

So it’s like a sort of Maslow pyramid where you have your functional needs fulfilled and now you’re moving towards those more abstract needs that need to be fulfilled.

K Yeah, I think that’s a great addendum for sure and to echo something else you said as well, I am not anti-technology either. 

I love technology. I use Google, I have Apple products, and I’m not against these companies just because they’re companies. You made the point earlier that it’s quite common for people to hear the word critical and think you mean criticism. And to be fair, sometimes I do, sometimes I do mean criticism.

But critical in and of itself does not mean you’re always criticizing technology. It really just means developing an awareness and a kind of constant practice of reflection about the role of technology in our personal lives and in society and how technology is shaped by social forces. 

That stance is not value neutral; it has values built in. But it also isn’t inherently negative or anti-tech. And so I do think it is important to constantly stress that. It may lead to criticism: when things go badly, or when biases lead to exclusions that harm people, then technology is deserving of criticism. But that isn’t necessarily what critical means.

L What we’re going for is building the futures that we were promised in science fiction. The good science fiction, the utopian one, not the dystopian one, right?

K Yeah, exactly! It really is about building better futures for society! 

My ethical orientation sees those futures as being more equal and fair and inclusive and just and so those are the values that I would argue need to be built into our social products like technology. It’s an optimistic view actually. It’s not a negative destructive view.

L And on that note, thank you so much for being here with us. It was a super interesting conversation. Tell everyone where they can keep in touch with you, where they can follow you, your work, and your research.

K Great! Thank you so much again Lawrence for having me on the program, it’s been an absolute delight. I’ve really enjoyed the conversation myself. 

If people would like to follow up and stay in touch and follow this work you can go to my website which is kiraallmann.com. You can follow cherrysoupproductions.com which is where we’re doing a lot of the collaborative work and collaborative development around Critical Tech Literacy resources. There we will be putting up some free open resources on how you could run workshops and sessions on Critical Tech Literacy over the coming months. 

And I’m also on social media. You can find me on Twitter and Instagram, all at @kiraallmann, just my name so it’s very easy.

L Great, everyone go follow Kira. She publishes a lot of amazing research and great articles. 

Thank you so much and we’ll keep in touch!

K Great! I look forward to it.

Shaping the Future of Sexual and Reproductive Health Rights

I get to do a lot of cool things in my role at the Oxford Human Rights Hub – and this is one of them. We partnered with the World Health Organization to produce a five-part documentary series on sexual and reproductive health rights across the globe. Last year, I got to film in Oxford, Geneva, and Nairobi alongside Suzy Shepherd, a great videographer we hired for the project. We sweated it out in the Office of the High Commissioner for Human Rights on what must have been the hottest day ever in Switzerland, and we spent hours observing the rhythms of everyday life in the waiting rooms of reproductive health clinics in Nairobi. I love switching gears from interviewing for research to interviewing for film (or podcasts) because it stretches the translational muscles that enable us to communicate complex ideas to wider audiences. I hope we’ve done some of that translating effectively here.

I stepped back from this particular project a few months ago, and Suzy has brilliantly and beautifully taken it over the finish line. I’m delighted to share Shaping the Future – a mini documentary series exploring sexual and reproductive health rights in the contexts of school and the workplace, as well as looking in detail at how to realise the right to safe childbirth and access to abortion. Later on, there will also be an episode on Universal Health Care in Kenya. The Shaping the Future series takes a comparative perspective, closely examining how the strong moral and legal imperatives of human rights can be given detailed substance by grounding them in local context and making sexual and reproductive health a reality in each individual’s lived experience.

Check out the videos on the Oxford Human Rights Hub (or WHO) YouTube channels, and linked below.

Shaping the Future: Strategies for Change
Shaping the Future of Reproductive Health Rights at Work
Shaping the Future of Abortion
Shaping the Future of Safe Childbirth
Shaping the Future of Reproductive Health Rights in School
Health Rights in a Pandemic: A Case Study of Universal Health Care in Kenya

Book Review: Social Media and the Automatic Production of Memory

I’ve long had a fascination with digital archives, so much so that I spent an entire chapter of my PhD dissertation on the evolving, politically contentious digital archive of the 2011 Egyptian revolution…

So I was interested in reading Social Media and the Automatic Production of Memory: Classification, Ranking, and Sorting of the Past by Ben Jacobsen and David Beer and was lucky enough to be invited to review it for a special issue of Internet Histories Journal. The short book is a thought-provoking exploration of some of the technical characteristics of social media “memory” as well as the anxieties around what algorithmic labelling and surfacing of this emotional category of digital material might mean for memory more broadly.

Unfortunately, this review is behind a paywall. So, here are some excerpts, and if you’d like to read the full piece and can’t access the journal, please get in touch with me.

“The introduction sets out the core preoccupation of the authors – namely, the titular automatic production of memory. It is this automation, in which memories are identified, ascribed certain meanings and values, and then targeted at users through technological processes, that renders the social media archive worthy of specific scholarly attention. […] It is clear from the outset that the authors are not indifferently curious about the answer to this question; they are troubled by the implications of such a loss of agency in our human experiences of remembering.”

“These platforms perform diverse roles beyond archive – as messaging services, storefronts, news sources, and more. What does this definitional imprecision mean for our understanding of social media-as-archive? What social role do archives play, and how do social media archives adopt, emulate, or deviate from those expected roles? Does it matter if an archive is constructed as an archive from the outset or comes to occupy that role, de facto, later on?”

“But the understated acknowledgment of platforms’ commercial imperative keeps some important questions associated with algorithmically mediated memory-making and -keeping flickering on the horizon, just out of focus. It is a question prompted by the theme of this special issue, and it lingers between the lines of Social Media and the Automatic Production of Memory. How much does the memory work of platforms contribute to their longevity? To what extent are platforms-as-archives keeping platforms-as-everything-else alive by creating deeply personal, affective ties to the companies that own them? In the constellation of values perceived, created, and traded on platforms, how much is owed to the automatic processes of sense-making and how much to our own estimation of meaningfulness? How is the entanglement of social media in the preservation and interpretation of personal pasts part of their enduring power – their resistance to redundancy?”

“Jacobsen and Beer’s dense little book, though, does offer an impressively broad – if necessarily brief – insight into algorithmic processes that are subtly and pervasively intervening in the construction of our personal pasts, presents, and futures. It is host to a truly thought-provoking conversation within an abundant bibliography of essential readings on memory, archives, and datafication, and the book’s theoretical and empirical contributions to the discussion are undeniably apt.”

Libraries on the Front Lines of the Digital Divide

Today, libraries provide essential access to digital equipment, services, and skills training. They are vital bridges across the digital divide. In this report, we present findings from our research in the Oxfordshire County Libraries, focused on two themes: (1) exploring the day-to-day role libraries are playing in our digital world; (2) understanding the lived experience of digital exclusion, through observations and data on library computer users and digital help seekers.

Prologue

A young man approached the front desk hesitantly but with a smile. “Do you have phone chargers?” he asked.

Emilie, the staff member working on the front desk, couldn’t catch a break. In the 20 or 30 minutes I had been sitting at the front desk with her, there had been a steady stream of customers queuing with questions about books or printing. It was a weekday evening; people were coming in after work. I had recently switched my normal digital helping shift from afternoons to evenings because library staff had mentioned (on numerous occasions) that they desperately needed digital helpers after 5 PM.

Despite the rush, Emilie was consistently friendly and calm, working quickly and issuing direct instructions to keep the queue moving.

“Phone chargers? For customers? No, no, we don’t. There are outlets all over the library, though. You can use those,” she said.

“Oh, no,” the man said, “The thing is – I don’t have a charger. I need to charge my phone. I’m homeless, and I really just need to charge my phone for a bit.”

“Oh,” Emilie paused. “I see what you mean…”

“Do you have one?” he ventured. “For your phone? That I could borrow?”

This was the kind of front desk request that threw off the whole rhythm, stalling the queue. Over the time I had served as a digital helper, many library staff members had remarked on this: a lot of front desk requests need personalised attention that will take time, more than the minute or two that can usually be spared by staff, who are juggling multiple tasks.

I expected Emilie to shrug, maybe offer some sympathetic apology. But instead, she said, “Well, what kind of phone do you have?”

He showed her. “This kind.” He held up the bottom of the phone, exposing the connector.

“Ok, mine’s not like that,” she said. “But hang on. Can you just wait around here for a minute? I’m going to deal with these customers and then I’ll see if someone here has that phone.”

The man looked as surprised as I was. “Sure, yeah, no problem,” he said, and wandered off for a moment.

Emilie served the now fidgety cluster of customers that had massed around the front desk. When the queue receded, the man reappeared, hovering off to the side. Emilie caught sight of him, and said, “Can you give me your phone for a moment? I’ll ask around and come right back. Would that be OK?”

The man agreed without hesitation, and Emilie dashed off to the staff room, leaving the front desk to another staff member, who had just returned from shelving books.

Moments later, she returned. “I found one. Someone else has a phone like yours. I’m going to plug it in here, if that’s okay with you, and then when you want it back, you just come back here and ask for it,” she told him. “There’s always someone at the front desk,” she added.

The man was grateful; he thanked her and left the desk.

It was not your typical “digital help” session, I thought, but it was “digital help” nonetheless. How would I describe the service that Emilie just provided? Lending out personal phone chargers? It was not part of the library’s standard offering. But then again, it was – kind of.

After two years of volunteering as a digital helper in the Oxfordshire County Library, I had seen firsthand that “digital help” is hard to define, and it certainly is not confined to what we might consider to be “digital.” Widescale digitisation across all sectors and facets of everyday life has meant that digital needs are not isolated needs; and they are not merely about computers or internet connections – they are about being able to live an ordinary, well-rounded life.

Understanding digital exclusion in our digital age requires meeting digitally marginalised people where they are and glimpsing what everyday life looks like from their perspective. Libraries are a good (but certainly not the only) place to do this.

I started volunteering as a digital helper in my capacity as a private citizen, not as an academic researcher. I simply wanted to offer some hands-on support in an area that I worked on intellectually in my day job. But it quickly became apparent that digital exclusion didn’t look quite like what existing theory or policy on digital inequality or digital skills reflected. And surprisingly little research on digital literacy and skills had taken place in the real-life places where digital exclusion is most visible and critical.

In a world that is digitising fast, libraries have become crucial bridges across the digital divide, whether or not they are prepared and adequately supported to play that role. From this vantage point, it is clear that dealing with the challenges of a persistent and pernicious digital divide means dealing with people as much as dealing with technology.

So, was Emilie offering digital help? Or just reacting to a personal need, on a human level?

Although this report is about digital inclusion, we would encourage you to resist drawing any strict boundaries around the “digital” as you read.

In what follows, we will demonstrate that the digital world – and therefore digital exclusion – is more complex than we might realise. Rather spuriously, the concept of a divide makes us think in terms of digital versus non-digital, connected versus unconnected, literate versus illiterate, and other de-contextualised dichotomies that treat digital inclusion as the reconciliation of an either/or. But the reality, and the likely solutions, lie in the space between – where the social and technological meet.

Read the whole report >>