Tech for…[Insert Here]

What do we really mean by “tech for good”?

This post appears in my extremely sporadic Critical Tech newsletter.

A few weeks ago, I attended the Digital City Festival in Manchester, hosted by Prolific North, a news hub for the North of England focusing on media and creative sectors. I was invited to speak on a panel about “Tech for Good.” It often happens that panel discussions prompt me to spend more time than is strictly necessary for the purposes of the event thinking about the topic and the pre-prepared questions. I have a tendency to write copious notes and then never refer to them again because I only have a minute or two to speak (why didn’t I think of that before writing all those notes?!). This is both the thrill and the frustration of a panel discussion. But thankfully, I also have a newsletter. So, I’m determined those pages of notes will not have been in vain…

Therefore, this post is a deeper dive on one of the questions we were given to prepare in advance of the panel. 

There are actually three questions I wanted to cover, but then I got a bit carried away, and the first question generated enough material for one e-mail on its own, so I’ll address that one now. And you can expect a Part 2 e-mail in a week, or so. (As someone who typically leaves about a year between mail-outs, this two-part series feels like a big commitment I’m not entirely comfortable making, but hey ho… here we go!) 

Q: What does “tech for good” mean to you?

I have to confess that this expression doesn’t mean a whole lot to me, and I’d go further to suggest it doesn’t have much semantic content in general. I’m well aware that this isn’t perhaps the best starter-for-ten on a panel titled TECH FOR GOOD, but when we say that we want tech to stand for something, it’s worth considering what that means it’s standing against. So, let’s get right into it and contemplate what the alternatives to “tech for good” might be. 

Is there such a thing as “tech for evil”? 

The tech giant Google famously included “don’t be evil” in its code of conduct (and this remains an unofficial company motto to this day). But the evolution of Google has perfectly exposed the profound limitations of this conceptual framing — who gets to define what is “good” and what is “evil”? When thousands of Google employees signed a letter of protest over the company’s work supporting the U.S. Department of Defense, it became apparent that perhaps not enough time had been spent defining what the company — let alone users of Google services — meant by “evil.” Another historical example further illustrates the point. Some evidence has surfaced that during WWII U.S. tech behemoth IBM leased Hollerith punch-card machines to the Nazi regime in Germany, which were used to facilitate the Holocaust. The technology introduced data-driven efficiency into the identification, persecution, and murder of Jews. In 2018, this shameful example of tech industry complicity was cited by Amazon workers, who were protesting their company’s sales of facial recognition technology to law enforcement. Is (or was) IBM a “tech for evil” company? What about Microsoft? Amazon?

The reality is that these definitions are forged at the intersection of ideology, values, and outcomes. And as a result, they often need to be (re)negotiated in the public sphere. I’d argue that the vast majority of tech designers and developers don’t set out with nefarious intent (leaving aside black-hat hackers and maybe the founders of 4chan or 8chan). Innovators and inventors throughout industrial history have overwhelmingly set out to make our lives better. The problem with tech is usually a problem of unintended consequences or unconscious bias — or, perhaps, murky and ill-defined moral imperatives — and that makes doing good a much more nuanced endeavor, fraught with difficulty.

Instead of talking about “tech for good,” I’d like to suggest an alternative that might force us to engage with what we mean by “good” and where our definitions came from. I’d like to suggest we talk about values — not in the abstract, but in the specific. No technology exists in a vacuum, and talking about values helps to de-mythologise tech neutrality. If we start from the premise that your product, your platform, your devices have values, we can be more concrete about what tech stands for and what it is against. And in the process, we might become more aware of our own values and how they influence our assumptions.

This is perhaps best illustrated with a simple exercise.

Put two columns on a piece of paper and write “tech for good” above one column. Don’t label the other column. Now, list as many companies or organizations as you can think of that qualify for the “tech for good” column.

Next, start on the other column — in this column list all the companies or organizations working in the tech space that you would not describe as “tech for good.”

Once you’ve made your lists, look at the not-tech-for-good column. What unites these companies/organizations? Write down the qualities they share. Are those qualities similar? Do they generally fall into one or a couple of buckets? What is the dominant quality? 

Name the column for that quality: Tech for [Insert Here].

Is it: Tech for Profit? Tech for the Benefit of Shareholders Rather than Society? Tech for Terrorism? Tech for the Military? Tech for the Exploitation of People’s Data without Compensation or Adequate Consent? Tech for the Police? Tech for Invading Privacy? Tech for Spreading Misinformation? Tech for Extracting Scarce Resources from the Earth?

There are many, many possibilities. And reading this list, you might be thinking: mine looks nothing like any of those!

Now, consider for a moment whether the qualities that unite these organizations are clearly, unequivocally, not good — even bad. Think of some counterarguments. Think about whether your categories are likely to look the same as my categories, or anyone else’s.

What you have, at the end of this exercise, is not “tech for good” in one column and “tech for bad” in another — you have tech for certain values in one column and tech for different values in the other. You can label those values. You can name them. And you can (in fact, you will have to) engage with why you believe certain values embedded in technology will lead to beneficial outcomes and others to deleterious outcomes (and for whom?).

In other words, “good” is meaningless unless we know what we mean by “bad” — unless we’re actually willing to call out certain practices, technologies, platforms, policies as bad. In polite society, we’re largely unwilling to do this, and this ethical paralysis sabotages any attempt to change the tech industry in the fundamental ways that would refocus its efforts on human and environmental wellbeing. 

I had a moment of pause in organizing my thoughts on this topic when I turned a page in a book I recently finished reading, The Whale and the Reactor: A Search for Limits in an Age of High Technology by Langdon Winner. I landed on Chapter 9, titled “Brandy, Cigars, and Human Values.” Having dressed down the concepts of “nature” and “risk” as effective ways to hold technology to account in previous chapters, Winner now turned to “values,” writing critically about its “vacuity as a concept” and the proliferation of “values” in technocratic discourse. But (much to my relief) Winner winds up in a rather similar place in his argument to the one I’m making here. “One obvious cure for the hollowness of ‘values’ talk is to seek out terms that are more concrete, more specific,” he writes. 

Ultimately, the specificity Winner celebrates in the book defaults perhaps a bit too readily to “universal” or “general” moral and political principles, which generations of critical theorists and empiricists have rightly challenged on the grounds that universality is a socially constructed concept. And, in Anglo-European philosophy and political science, it reflects an understanding of the world that predominantly derives from white male experience. In Feminism Confronts Technology, Judy Wajcman examines the social origins of “values” from the outset, observing that “the very definition of technology […] has a male bias.” And more recently Caroline Criado-Perez, in her book Invisible Women, refers to the “myth of male universality.” Ruha Benjamin’s work on what she calls the “New Jim Code” reveals how technology reproduces racial bias and introduces new forms of social control, underpinned by the “default Whiteness of tech development.” “Does this mean that every form of technological prediction or personalization has racist effects?” she asks in her book, Race After Technology. “Not necessarily,” she writes. “It means that, whenever we hear the promises of tech being extolled, our antennae should pop up to question what all that hype of ‘better, faster, fairer’ might be hiding and making us ignore.”

The obvious shortcomings of “universal” values as a social corrective perfectly expose a need for a constant exegesis of our values and a commitment to revisit them out in the open. To give Winner his due, however, his chapter winds up pointing to a way forward that responds actively to the problem of values — “good” and “bad” among them — and their inescapable roots in lived experience: “The inquiry we need can only be a shared enterprise, a project of redemption that can and ought to include everyone.”

As I’ll elaborate in Part 2 of this newsletter, I think perhaps we need to do as much work on the how (the process of developing, interrogating, sharing, exposing, and implementing) of values in tech as on the what (the conceptual content or the technological products and outcomes) of those values. Examining the so-called “black box” of technology requires not only bringing to light the technical specifications that make it work, but also the values that inspired and inform it. 

This is by far the most under-developed part of the tech lifecycle: our mechanisms for oversight, accountability, consequence scanning, and regulation. And it is often the least glamorous. 

Journalist Steven Johnson captured this conundrum nicely in a recent article for the New York Times, examining the life and work of Thomas Midgley Jr., the inventor of two of the most socially and environmentally damaging inventions of the 20th century: leaded gasoline and chlorofluorocarbons. The article presents these two innovations as parallel cases: both have lessons to teach us about the unintended consequences of industrial invention, but each also reveals a complex confluence of variables, from profit margins to the pitfalls of our predictive capabilities, that results in a decision to put a new technology out into the world.

Reflecting on our limited toolkit for preventing technological harms, Johnson questions how we define “innovation” and whether this hinders our reparative repertoire: “Despite their limitations, all of these things — the regulatory institutions, the risk-management tools — should be understood as innovations in their own right, ones that are rarely celebrated the way consumer breakthroughs like Ethyl or Freon are. There are no ad campaigns promising ‘better living through deliberation and oversight,’ even though that is precisely what better laws and institutions can bring us.”

There is very little glory in innovating for social empowerment — giving people the rights they ought to have anyway results in pretty much zero commercially viable IP. But this is where some of the most important and exciting innovation in tech and society can happen and needs to happen. And it’s an area where some companies, localities, or nations could become leaders and set the bar. 

There’s an interesting case study in giving people affected by technology more of a voice in its development in Voices in the Code by David G. Robinson. The book chronicles the evolution of the Kidney Allocation System (KAS), the U.S. algorithm that allocates kidney transplants, and the different kinds of governance and oversight — including patient participation — that were applied in the process. Robinson writes, “the hardest ethical choices inside software – choices that belong in the democratic spotlight – are often buried under a mountain of technical detail, and are themselves treated as though they were technical rather than ethical.”

The problem with this, he points out, is that algorithms (and I think this can be extended to many technological products) have a moral logic as well as a technical logic. What makes a “good” algorithm involves moral trade-offs, and “the big problem here is the relationship between technical expertise and moral authority.” The book argues that when technologies are high-stakes and involve those kinds of trade-offs, they need to be subject to more democratic governance and oversight: “it’s better for political communities to face the hard moral choices together than to abdicate and ignore those choices, abandoning them to the technical experts.” (More on this in the next post.)

When we talk about moral trade-offs and the values embedded in technological systems and products, we’re talking about similar things. The difficult part is defining what a morally “good” technology is. This is a particularly important challenge to reflect on at this moment in time, when ChatGPT is dominating the news and a dazzling array of outrageous headlines floods our news feeds about the rise of super-intelligent machines and what it means for humanity. In one of his Reith Lectures on Artificial Intelligence, Stuart Russell, an eminent computer scientist and early AI pioneer, homes in on why our definitions matter — especially when they become code. “Machines are intelligent to the extent that their actions can be expected to achieve their objectives,” he says. But “if we put the wrong objective into a super-intelligent machine, we create a conflict that we are bound to lose. The machine stops at nothing to achieve the specified objective.”

By contrast, one of the inherent qualities of human consciousness is uncertainty — and this is a feature, not a bug. Knowing that we do not (always) know what is good, what is right, or what the future holds is a remarkably effective governance mechanism. It keeps us going back to the drawing board, deliberating, asking each other for input, and checking our expectations against reality. 

So, if there is a consistent lesson here, it’s to resist the temptation to base the logic of our technologies on generic terms like “good,” which make it far too easy to justify and uphold the status quo. We can take Winner’s advice that “a depleted language exacerbates many problems; a lively and concrete vocabulary offers the hope of renewal.” 

Five Community-Led Internet Projects That Are Closing the Digital Divide

This post appears in my extremely sporadic Critical Tech newsletter.

Why community networks?

For a long time, I’ve been interested in alternative ways of providing internet connectivity and platform services to people — beyond expensive, top-down, commercial options. There’s nothing inherently wrong with for-profit telecommunications, but this model of service ownership does present certain problems in practice: telecommunications provision often works more like a monopoly than a competitive market, especially in underserved areas, leaving people with little choice if they can’t afford the limited options available. 

(For example, I remember my parents complaining about the stranglehold certain cable companies had in our area when I was a kid. Around the time I finished high school, I completely lost access to the e-mail account I had during my childhood because my parents were finally able to switch cable providers. It felt weird. Like if the company that sold us our house had come back to take the boxes of letters we had stored there for years because we had kept those letters in the house, and now we were deciding to move. In a number of ways, it didn’t make sense to me. And it was a small lesson in digital ownership.)

As the digital age has progressed and the absolute divide between those with and without connectivity has narrowed somewhat, the internet has become the basis of many lucrative industries — but there’s less and less of a market incentive to connect everyone meaningfully. Some communities and places simply aren’t commercially viable to companies operating at scale.

And there are other issues, too. Telecoms and internet technology companies play a role in internet shutdowns and practices of digital censorship, which have increasingly become a tactic used by governments during periods of political turmoil. A confluence of political pressures on companies, legal regulations imposed on companies, and technical decisions and protocols implemented by companies themselves facilitate these shutdowns, contributing to crises of political expression and participation that threaten human rights. (Check out this helpful taxonomy of internet shutdown techniques from Access Now for more on these complicated dynamics.)

My interest in “community networks” began in 2011, when I was doing research for my master’s degree in Egypt. I was in Cairo about six months after the revolution that year, and people were still reeling from the impact of an internet shutdown that came into effect on January 27th, as protests engulfed Downtown Cairo, and lasted until February 2nd. (Some analysts of the protests have observed that the shutdown itself drove even more people to the streets.) Still, protesters did manage to communicate in limited ways during the shutdown — by tapping into the ISP connecting the stock exchange (the one channel out to the wider world that hadn’t been shut off) and sharing key information via Bluetooth. Some tech-savvy protesters also set up a media tent HQ in Tahrir Square, where people could charge their devices and download eye-witness photos and videos onto hard drives.

By the time I got to Cairo, the hot topic in techie circles was how to circumvent the mainstream internet. There was talk of deploying “internet in a box” — a limited-range internet solution that could be set up instantaneously, anywhere, and provide localized connectivity. It was posited as a way to bypass state-controlled and -influenced telecoms companies and provide connectivity in a crisis. It was also the first time I heard the term “mesh network” — a wireless network configuration that relies on many different nodes connecting directly and non-hierarchically to one another, reconfiguring and reorganizing automatically so that networking activity is distributed across all the nodes, and the loss of one node doesn’t catastrophically cripple the whole network. The concept of the mesh network sent me down more than a few research rabbit holes.
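The resilience the mesh concept promises — losing one node doesn’t cut off the rest — can be illustrated with a toy reachability check. This is a simplified sketch only: real mesh networks use dynamic routing protocols (OLSR and B.A.T.M.A.N. are two well-known examples), and the node names below are invented.

```python
from collections import deque

# Toy mesh: each node links directly to several neighbors, non-hierarchically.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

def reachable(graph, start, failed=frozenset()):
    """Breadth-first search for every node still reachable from `start`
    after the nodes in `failed` drop out of the mesh."""
    if start in failed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node] - failed - seen:
            seen.add(neighbor)
            queue.append(neighbor)
    return seen

# With all nodes up, A can reach the entire mesh.
print(sorted(reachable(mesh, "A")))                # ['A', 'B', 'C', 'D', 'E']
# Losing node B doesn't cripple the network: traffic reroutes via C.
print(sorted(reachable(mesh, "A", failed={"B"})))  # ['A', 'C', 'D', 'E']
```

Because every node has more than one neighbor, no single failure partitions the network — which is exactly the property that made mesh topologies attractive for crisis connectivity.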

I discovered that mesh technology was a popular option for communities on the margins of internet connectivity, neglected by state and private infrastructure investment, to connect themselves locally. And this piqued my interest because my (by this time) doctoral research had veered toward understanding the emerging dynamics of digital inequality in Cairo and the ways the internet was increasingly implicated in longstanding fault lines around class, religion, and politics in the aftermath of revolution. 

I tracked down some obscure projects and hit a number of dead ends in my research on community internet in the Middle East (like a briefly encouraging thread on the cairoscholars listserv that ran dry, and a failed attempt to contact people involved in a mesh network project in Upper Egypt called Nubialin). But although I made little progress pursuing the topic back then, the intersection of alternative network models and lessons learned in revolutionary times lingers on, as cases like (U.S. government-backed) MeshSayada in Tunisia illustrate. But it was also during this time that I first encountered an article about Broadband for the Rural North (B4RN) in the UK. 

I bookmarked it.

And I came back to that bookmark when I launched a postdoctoral project on community networks in 2018 (frustratingly disrupted in many ways by the COVID-19 pandemic that struck in early 2020). One fantastic outcome of this project, though, has been gaining familiarity with the diverse array of community-led and -embedded initiatives to close the digital divide, scattered throughout the world. I’m going to spotlight five of them in this post.

What are Community Networks?

Community networks can be broadly defined as “communication networks that are built, owned, operated, and used by citizens in a participatory and open manner” (according to the Association for Progressive Communications, which has supported local and community network initiatives for many years). They are “collaborative networks, developed in a bottom-up fashion by groups of individuals that conceive, deploy and manage the new network infrastructure as a common good” (as described in a published output by the UN Dynamic Coalition on Community Connectivity).

The dynamic coalition has represented an effort to coalesce what might be called an international movement around an otherwise dispersed, diverse, and disparate array of community networks serving communities with different needs and characteristics worldwide. It brought together researchers, policymakers, technologists, and community members to identify shared principles and more effectively lobby governments to foster regulatory regimes favorable to community initiatives and standards-setting bodies to implement protocols conducive to small operators. Between 2016 and 2017, the dynamic coalition facilitated the development of a Declaration on Community Connectivity through multi-stakeholder meetings at the Internet Governance Forum in Guadalajara, Mexico, and the GAIA Workshop in Cambridge, UK.

The Declaration sets out several shared characteristics of community networks:

  • Collective ownership: the network infrastructure is managed as a common resource by the community where it is deployed; 
  • Social management: the network infrastructure is technically operated by the community;
  • Open design: the network implementation and management details are public and accessible to everyone;
  • Open participation: anyone is allowed to extend the network, as long as they abide by the principles and design of the network;
  • Promotion of peering and transit: community networks should, whenever possible, be open to settlement-free peering agreements;
  • Promotion of the consideration of security and privacy concerns while designing and operating the network; 
  • Promotion of the development and circulation of local content in local languages, thus stimulating community interactions and community development. 

Ultimately, though, these aims aren’t realized perfectly, nor shared, by all community networks. The politics and priorities of community networks vary widely, depending on the context in which they started. However, in almost all cases community networks represent an alternative to traditional telecoms operators and respond to local digital exclusion, which might be the result of issues like affordability, geography, politics, or social inequality.

Five Examples of Community Networks

These are just a few examples of community networks, operating in very different places and contexts — and they have developed ways of serving the community in terms of technology (infrastructure), pricing, and community involvement that work for the local conditions. But there are many more examples across the world, and I’d recommend the 2018 Global Information Society Watch publication on community networks for a broad overview. The netCommons project also brings together lots of experience and research on community networks. 

One of the biggest hurdles facing community internet projects is funding the cost of building and maintaining a network. Another hurdle is technical expertise. Community networks have found creative ways of identifying, cultivating, or importing funding and expertise locally. The costs associated with a community internet project include at least the hardware required (cables, antennae, routers, devices), electricity supply, backhaul (the access to the global internet), and transit (when internet traffic needs to move from one network to another in order to access content). Without getting into the technical details — which are best left to the network engineers, anyway! — these costs can be brought down for community networks by using unlicensed spectrum for transmitting data, peering at internet exchange points (IXPs) to lower transit costs, and using open source firmware and recycled hardware, like routers. National regulations about the use of spectrum, sharing of infrastructure, and data protection can all impact the cost and difficulty of setting up a community network. 

I’ve had the privilege of meeting, and in some cases interviewing, people involved in all of these networks over the last several years, and through these conversations I’ve learned more than I could have imagined about how the internet actually works (if humans aren’t your cup of tea, though, you can also learn this from cats) and about the emotional and embodied relationship we all have with technological infrastructure, whether we have personal awareness and ownership of that infrastructure or not.

guifi.net – Spain

guifi.net began in 2004 as a local project in the Catalonia region of Spain to provide internet connectivity in under-resourced rural areas, and became an official foundation in 2008. Today, guifi.net is widely considered the largest community network, with more than 30,000 active nodes and even more users. Like many community networks, the idea for guifi.net came from conditions of exclusion: founder Ramon Roca was frustrated by the lack of internet connectivity in and around Gurb, a rural area of northeast Spain. guifi.net is a “bottom-up, citizenship-driven technological, social and economic project with the objective of creating a free, open and neutral telecommunications network based on a commons model.” The network is predominantly made up of wireless nodes using unlicensed wireless spectrum, but it also includes open optical fiber links. Network owners include individuals, companies, non-profits, and other entities, all contributing infrastructure and connectivity to the network as a common pool resource. This means that many unconnected communities can get online through a hyper-local supplier with a personal interest in the community, and users pay lower rates than they would for commercial internet.

Over time, guifi.net has collectively developed detailed governance tools, documentation, and rules for the network, which inform the use and continuing construction of the network, including guidance on technical specifications, the economic compensation system, and dispute resolution. The network operates under a wireless commons license, which means that contributors to the network infrastructure agree that it is open (everyone has the right to know how it’s built), free (access to infrastructure is non-discriminatory), and neutral (any technical solution available may be used to extend the network, and the network can be used to transmit data of any kind by anyone, commercially or non-commercially). This model allows internet service providers (ISPs) to compete to provide services to customers, but ensures that they have to cooperate to deploy and operate the network. 

Network participants enter into a compensation agreement with guifi.net that establishes how much they need to re-invest financially into the overall network, which is calculated based on their contribution to the network (in terms of capacity, etc.) and their consumption of services on the network. (The idea is that bigger consumers probably pay more, but bigger contributors also might pay less.) Services for end-users are priced to ensure the sustainability of the network and are reviewed by the collective (not only by individual ISPs that might be part of the network), so the cost to customers is directly linked to the cost of running the network itself. Overall, these costs are lower than they might be for traditional commercial ISPs (not held in common) because of resource sharing across the network: capacity can be expanded at the marginal cost of the required additional capacity. guifi.net has become an inspirational example to other community network projects in part because of its iterative development and willingness to share lessons learned, and the template documentation that the network has developed to facilitate collaboration among different network actors — volunteers, professionals, customers, and public administrations — with whom almost all community networks must contend, in one form or another.
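The compensation logic described above — a participant’s re-investment scales with what they consume and is offset by what they contribute — can be sketched in a few lines. This is a hypothetical illustration of the principle only; the function name, prices, and formula are all invented for the sketch, not taken from the network’s actual compensation tables.

```python
# Hypothetical illustration of a contribution-vs-consumption settlement.
# Real community-network compensation systems are negotiated collectively
# and far more detailed; all names and numbers here are invented.

def settle(consumed_gb: float, price_per_gb: float, contributed_credit: float) -> float:
    """Amount a participant re-invests into the shared network:
    the cost of their consumption, offset by credit for the capacity
    or infrastructure they contributed (floored at zero)."""
    return max(consumed_gb * price_per_gb - contributed_credit, 0.0)

# A heavy consumer with no contribution re-invests the full cost...
print(settle(consumed_gb=50, price_per_gb=2.0, contributed_credit=0.0))   # 100.0
# ...while a contributor's bill is offset by what they put in.
print(settle(consumed_gb=50, price_per_gb=2.0, contributed_credit=30.0))  # 70.0
```

The design intuition is that pricing tracks the real cost of running the commons rather than a commercial margin, so contribution and consumption are settled against each other collectively.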

NYC Mesh – United States

Founded in 2012, NYC Mesh is a non-profit community Wifi project run by volunteers in New York City. The network is spread mostly across Brooklyn and lower Manhattan, using fixed wireless connections — essentially, Wifi boxes affixed to the rooftops of buildings — to connect thousands of homes to free or low-cost internet (users are encouraged to make a monthly donation of an amount they can afford). Today, the mesh is supported by these donations. As of 2021, NYC Mesh had over 10,000 nodes connecting private residences but also contributing to public Wifi coverage in the neighborhoods that have connections. 

In 2015, NYC Mesh received a grant from the Internet Society (ISOC) to connect to an internet exchange point (IXP), which has increased its capacity to take on new customers and keep transit costs low through peering. New members can join the mesh by filling in an interest form and sending photos or videos of their rooftops, so that volunteers can assess whether the roof is within sight of another existing node. Volunteers and prospective new members can purchase the hardware needed and complete an installation by following the detailed instructions from the organization. So, mesh members own the infrastructure themselves. A 2020 policy change introduced by Mayor Bill de Blasio allowed free use of the rooftops of public buildings and streetlights in the city for large and small internet providers to install infrastructure, and this has also helped NYC Mesh expand (although it sounds like this plan is currently on hold in 2022).

New York City has a reputation as a global centre of finance, culture, and cosmopolitanism, but it is also plagued by the problems of deep social, economic, and infrastructural inequality. Digital exclusion has been a recent manifestation of the uneven opportunities different communities experience. And the COVID-19 pandemic abruptly exposed the scale of this exclusion. Millions of people are without broadband connections, and many can’t afford the limited options available in their area. In the U.S. around 50 million people only have one provider to choose from. The cost of connectivity drives many people to the mesh.

“A lot of folks have a different interpretation of what mesh is. Sometimes it’s technical and sometimes it’s political…”

Scott Rasmussen (NYC Mesh volunteer), interviewed on the Community Broadband Bits podcast

Many neighborhoods have been waiting for affordable, reliable internet connections for years, and it is often low-income, minoritized communities that are getting left behind by the incumbent telecom providers. Deals made by the city with major telecom companies have not resulted in universal connectivity, nor equitable distribution of infrastructure. The result is a geography of digital exclusion that maps onto existing patterns of social and economic exclusion. So, communities have taken matters into their own hands. 

NYC Mesh isn’t the only community internet project in New York City. 

  • People’s Choice is a worker-owned broadband co-operative in NYC founded by former employees of Spectrum who went on strike in 2017. The co-op launched during the pandemic, and once the network is built in a local community, ownership transfers to the user-members, so profits go back directly to the network members. Service costs between 10 and 20 USD per month. 
  • Silicon Harlem, founded in 2013, provides broadband through its Better B internet service (30 USD per month for 100 Mbps), provided by a collaboration with private companies, educational institutions, and non-profits. It couples broadband provision with tech education and skills development in the local community. 
  • RedHook Wifi is a free Wifi service that launched in 2011, spearheaded by the Red Hook Initiative in Brooklyn and the Open Technology Institute. It started as a local network to host an Internet radio station for young people to broadcast music and news, and to support community priorities, like sharing bus timetables and documenting instances of “stop and frisk” searches by police. But it became vital and more popular after Hurricane Sandy in 2012 (crucially, a mesh can stay locally connected even if the connection to the global internet goes down). The project involves training local youth to become “digital stewards” and build and maintain the network, fostering job-ready skills and also keeping the network alive. 
  • The Hunts Point Community Network provides free Wifi in the Bronx and has been operating since 2017, a collaboration between The Point CDC and the New America Foundation, funded through donations and grants.

Broadband for the Rural North (B4RN) – United Kingdom

Broadband for the Rural North (or B4RN, pronounced “barn”, as it’s known locally) is a volunteer-initiated and largely volunteer-built fiber-to-the-home (FTTH) internet service provider in rural Lancashire, Yorkshire, and Cumbria. It was established in 2011 by a group of volunteers, rallied by self-described local “farmer’s wife” Chris Conder and Barry Forde, a local telecommunications expert from Lancaster University who had previously been instrumental in building an internet network (CLEO) for schools in the county. As a registered Community Benefit Society, all of B4RN’s profits must be reinvested in the community in one way or another.

B4RN serves rural and semi-rural communities in Northwest England, where terrain can be hilly and rugged, and homes can be tens of kilometers apart. Many residents in these areas have almost no internet connectivity, and others have limited connectivity at high prices from incumbent mainstream telecom operators. To reach the most remote properties, these companies often quote installation fees in the tens of thousands of British pounds (per property). Some of these communities in one of the richest and most digitally connected countries in the world have been waiting for adequate connectivity for over a decade. And B4RN is not their first attempt to take matters into their own hands. Before B4RN, volunteers led by Chris set up a mesh network (Wennet and Wraynet), in collaboration with students from Lancaster University.

At its start, B4RN raised funding by selling shares with a guaranteed 5% return after 5 years if the company didn’t go under. (Now, interest is paid out after the first year.) Because B4RN is a full-fiber network, there are substantial hardware and labor costs associated with setting it up; fiber-optic cable is laid underground in plastic ducting, which requires digging trenches. To connect the cables to one another and to private homes, the fiber has to be fused, requiring specialist equipment. B4RN has been able to keep costs low by using volunteers to dig trenches, lay and fuse fiber, distribute information, and raise local funding. Volunteers also negotiate with neighbors for wayleaves — the permission to cross private land — which landowners must agree to give for free. Over the years, B4RN has also benefitted from government schemes to subsidize rural connectivity. First, the Enterprise Investment Scheme and then the Gigabit Voucher Scheme, which allows community members to claim back the costs of building new connections.

“The Computer Club… it’s just a wonderful thing, and it’s unique to B4RN. No other ISP provides this sort of service. And I feel it’s just as important to build this network of people as it is to build the physical internet network. So, yeah, I hope it never ever stops. Funnily enough, all the volunteers we’ve had right from the beginning are still volunteers. There’s one who’d rather watch cricket if cricket’s on, but the majority of the volunteers are still with us and they’re still learning things, and they’re still helping people.”

Chris Conder (B4RN co-founder and volunteer), interviewed (by me) for this podcast

Today, B4RN connects more than 9,000 properties, and subscribers pay 33 GBP per month for a 1 Gbps connection (yes, that’s a gigabit!). B4RN has also “professionalized” in many ways in recent years. It has a head office and full-time staff, including network engineers who do most of the maintenance on the network when something goes wrong (this used to be done largely by volunteers). Local contractors are often hired to do home installations or even to dig in the ducting. But community volunteers still need to coordinate fundraising and expressions of interest, and local “dig days” remain a highlight and hallmark of B4RN installations — where community members gather to dig the route for the fiber to reach their village, taking the occasional break for a natter (chat) accompanied by tea, cake, or a bacon butty (sandwich). B4RN volunteers also run a weekly Computer Club, where network users can ask their peers questions about their connections or the digital world in general.

In many rural places where B4RN now exists, people find themselves coming together again in ways that used to be more common in these small, close-knit communities, which have witnessed a gradual closure of rural services and spaces, from post offices to village halls, and the internal migration of young people to metropolitan financial centers. Even as the network has grown and professionalized, these social aspects of B4RN remain important.

Zenzeleni – South Africa

“Zenzeleni” means “do it yourself” in isiXhosa, a language spoken in the Eastern Cape of South Africa, where the internet co-operative Zenzeleni Networks has grown since 2013. The Eastern Cape is home to some of the poorest and most excluded communities in the country as a result of systemic marginalization of black Africans under racist colonial and apartheid governing regimes. This structural exclusion is felt everywhere, but it is especially pronounced in rural areas, like Mankosi, Mcwasa, Nomadolo, and Zithulele, where Zenzeleni operates. Jobs and educational opportunities are limited, as is essential infrastructure for everyday life.

Today, these essentials encompass digital services. Even when the internet is available through mainstream commercial telecom operators, sufficient services are financially out of reach for most people in the area. These conditions set the scene for Zenzeleni, which began as a wireless intranet project (providing local communication but not connections to the global internet), launched by a doctoral student at the University of the Western Cape and a community activist; it later added an external connection to the internet via a 3G modem. The project evolved slowly, due to prioritizing community involvement and allowing communities to set the network’s priorities. In 2014, Zenzeleni registered a co-operative ISP, which is run by elders of the communities that build and use the network. Through local partnerships with educational institutions and private network clients, Zenzeleni has increased its network capacity and added new access points to the internet.

“If the network grows, and the community remains the same in terms of its social and economic wellbeing, then you’re just turning into a big network operator. In Zenzeleni, the emphasis is that people own it, people care for it, and you need skills and understanding to be able to do that so that it keeps giving value to yourself and your community.”

Sol Luca De Tena (Zenzeleni CEO), interviewed (by me)

Local co-operatives in different villages make decisions about how and where to build the network, where hotspots are located, and who can sell vouchers, and the income generated through the co-operatives pays for the bandwidth and hardware. Alongside the co-operatives, the Zenzeleni non-profit company provides support in the form of technical and legal advice, help with navigating license rules and applications, research, partnerships, and applying for grant funding (largely to sustain these efforts). The network has a license exemption as a social enterprise, so it pays no license fees, and it buys unused backhaul from other providers. Over time, Zenzeleni is striving to achieve sustainability through a common-pool resource model.

Zenzeleni has also been confronted with a challenge that faces millions of digitally excluded communities worldwide: a lack of reliable electricity supply. From the start, Zenzeleni has charged devices and routers with solar power, and local communities have also turned solar charging stations into local businesses, charging affordable rates for local residents to power up their devices. The practices and patterns of charging devices are contingent on and enmeshed in local routines, including the responsibilities of charging station operators (housework, for instance) and the routes and distances people travel during the day. The sustainability of the network depends on the convergence of multiple contextual considerations, including how telecom services fit into existing community structures, what economic models serve the community best, how to ensure a reliable energy supply, and how to seed local knowledge and skills for running the network.

In 2020, users could connect to the network for 25 ZAR per month for unlimited data, and by 2021, the network had over 13,000 users and was providing crucial information translated into local languages during the COVID-19 pandemic.

AlterMundi – Argentina

AlterMundi is an umbrella NGO that supports several community networks spanning 200 square kilometers of rural Córdoba province in Argentina: QuintanaLibre, AnisacateLibre, LaSerranita Libre, LaBolsaLibre, NonoLibre, LaGranja Libre, MonteNet, and more. These areas have thousands of residents, but they are fairly isolated – neglected by the central government and often left to organize local services and maintain infrastructure themselves. Many people work in cities or towns 15 to 60 km away. Until QuintanaLibre started in 2011, this area was served by two wireless internet providers that offered intermittent, low-speed connectivity at high prices. QuintanaLibre was born when several local people in José de la Quintana decided to share one internet link between them.

But the idea captured the interest of other residents, and the group needed more bandwidth. Negotiations with incumbent ISPs proved futile, and the tiny network gradually evolved into a project for self-sustaining internet, built and owned by the community. They found people with the necessary technical expertise to share knowledge, learned the basics, and set up a mesh that had a link in a nearby city for more capacity and access to the global internet. Meanwhile, other villages and towns nearby were experimenting in similar ways. As Jésica Gíudice writes, “The collective work of these networks resolves moral debts that the state has with rural communities and other vulnerable and excluded areas.”

AlterMundi facilitates collaboration and sharing knowledge across these various networks. Organizers developed firmware for the mesh, and ultimately co-designed their own hardware, the LibreRouter, to reduce reliance on proprietary software and hardware that needed to be reconfigured to work for local needs. New network members attend training sessions and install their own connections, and the networks are sustained by a learn-one-do-one-teach-one model of knowledge diffusion. An app helps members coordinate maintenance of the network and facilitates awareness of the network infrastructure and communication about how to tackle technical problems.

“So in many places that we have been, it happens that they don’t only lack connectivity, but they end up lacking a lot of other things — like proper healthcare, infrastructure, like roads, and in general, these places have been forgotten by the society. And because they are not there every day, they basically don’t see the problem.”

Nicolás Pace (AlterMundi volunteer), interviewed (by me) at the Internet Governance Forum in Paris, 2018

Each community manages its own network, so there are different pricing models and sources of backhaul (the connections to the wider internet). In some cases, connectivity is free or nearly free; in others, members collectively pay for connectivity that they share. Backhaul, transit, and other overheads are often negotiated as donations from universities, non-profits, or private companies. In Paravachasca Valley, for example, AlterMundi set up a backbone link with the National University of Córdoba, and from here, the community networks can connect with carriers who donate or sell transit to the rest of the Internet. The result is low-cost, community-owned internet that has also fostered local social networks in the area and strengthened community resilience, deepening existing community bonds and creating new connections with nearby villages and towns.

Social Networks

All of these initiatives share some similar attributes, even though they represent vastly different contexts and are underpinned by different technologies for connectivity. Most importantly, they are all strongly embedded in and driven by excluded communities themselves. Digital exclusion is a multi-dimensional problem that implicates individuals, neighborhoods, communities, villages, the state, and broader systemic dynamics and issues. It’s rooted in intersectional experiences of marginalization. So, it makes sense that in some of the most digitally excluded communities, solutions to the digital divide can be most successful when they are initiated and led by the communities themselves — and when they tackle more than one form of exclusion.

Themes that cut across all these examples include:

  • The importance of context in determining the appropriate technology to use to achieve connectivity and the right level of personal commitment and pricing structure for the community
  • The need for technical expertise to plan the network, sometimes brought in from outside the community
  • The role of non-technical support, to embed the network in the community – including knowledge sharing, skills development, and digital education
  • A commitment to keep the benefits of the network in the community, from financial profits to technical skills

And if this has sparked your interest in community networks and financing models, check out this forthcoming report launch event from APC (Sept 22)! 

More on Community Networks: The Playlist

Digital footprints as barriers to accessing e-government services

A new, open-access academic article co-authored with Dr Roxana Radu in Global Policy journal.

This article builds on existing literature on digital inequality and the digitised welfare state to elucidate one underexplored way in which the rise of e-government platforms further disadvantages already-marginalised people: by requiring that they possess a verifiable digital footprint distributed across multiple public and commercial platforms. We illustrate the pertinence and nuances of this particular risk through lived experience research in a UK public library where users with limited digital skills receive help. Although there is a growing recognition of both the inevitability of digital welfare and the risks to marginalised communities, little work has been done to connect these abstract policy discussions to lived experience—to pinpoint how digitisation creates these exclusions, beyond simply having internet access or not. This article argues that the prerequisite of a digital footprint engenders a double disadvantage: (1) lacking a digital footprint is the result of barriers that are largely invisible to data-driven, digital-by-default systems, and (2) when marginalised users establish a sufficient footprint, this entails a disproportionately onerous responsibility for managing a distributed personal data trail in the long term. This combination of mundane barriers and the burden of responsibility for a digital identity points to policy implications for governments aiming to advance inclusive digital transformation agendas.

We make several policy recommendations:

  • Government service providers (such as welfare, disability, and housing) and other essential service providers (such as banking) should reduce the complexity of the digital identification process for using their services because creating a digital footprint for the first time presents myriad challenges for those with limited digital skills.
  • In the short term, national and local governments need to finance and support adequate stop-gap assistance for navigating essential services that have been digitized, through investment in public libraries, digital help centres, and data literacy and algorithm awareness in the core school curriculum.
  • Public and private digital service designers need to build adequate privacy-protecting safeguards into their services, recognizing that people often do not have a choice about using these services or creating a digital footprint to gain access to them.
  • Service designers should practice inclusive and participatory design and conduct comprehensive impact assessments for all products to ensure that the requisite “footprint” required is fair and accessible to all users.
  • Digital-by-default gateways need to offer more transparency in the public-private ecosystems that underpin them and a clear recognition that this constellation of different service providers constitutes a barrier to digital inclusion. Users must be notified clearly when they are being asked to create accounts or register with third parties (including e-mail providers) and that these are different services with different policies regarding data management and advertising, for example. There should be alternatives to verifying accounts and identities using third-party services, such as e-mail.

UK Digital Poverty Evidence Review 2022

Over the last year, I’ve researched and written the 2022 UK Digital Poverty Evidence Review for the Digital Poverty Alliance, which launched yesterday in the House of Lords.

The report synthesises a great deal of important work on digital exclusion and poverty, and it was impossible to cite everything or give each topic as much space as it probably deserved (you surely wouldn’t read a 2000-page report – who would?!). But I’m a fan of “showing your work,” so I’m making the list of references I consulted available for anyone who wants to dig even more deeply into the research behind the report (as a Zotero library).

The report spotlights three big-picture myths and three game-changing shifts that we need to address to tackle digital poverty in the pervasively digitised world of 2022. These are:

Big picture myths

The kids are alright

There are important demographic divides between those who are online with high levels of skills, and those who are offline with low levels of skills. On the whole, people over the age of 65 are more likely to be offline. This rather coarse statistic has given rise to the myth that young people are naturally “digital natives”: having grown up with technology, they will acquire the necessary digital capabilities simply through high exposure. The evidence increasingly refutes this assumption, with factors such as employment status, education, disability, income, and self-confidence cutting across age and impacting people’s level of exclusion. Often, unequal access to technology is a feature of schooling, with a growing inequity between affluent schools with more access to and choice about technology, and less well-resourced schools with more limited access and choices. As a result, technology provision in education is deepening existing differences in life chances.

Access is access

In the early days of digital divide research and policy, digital inequality was mainly thought of as the gap between those who have internet access and those who do not. This was called the “first-level digital divide,” and it has been thoroughly challenged by decades of further evidence showing that there are second- and third-level divides in skills, usage, and outcomes. Still today, digital inclusion is often treated like a switch that can be flipped on once and stays on for life. However, evidence shows that digital inclusion is a process rather than an event. Differences in quality, reliability, location, and experiences of access all influence whether an individual will be able to make the most of the digital world.

Digital exclusion will diminish or disappear over time without intervention

There is a common misconception that time will solve three of the biggest factors in digital exclusion in the UK – exposure, motivation, and confidence. The logic goes that the more people have to do online, the more people will spend time online, and the better acquainted with the digital world they will become. However, the digital divide has remained a problem for digitising societies since the beginning of the digital revolution – lower prices for hardware, more devices, and widespread connectivity have not solved digital exclusion. This is because digital inclusion is relative, the benchmarks are always changing as technology changes, and the solutions depend on social, political and technical responses to inequality. Ultimately, only concerted top-down and bottom-up efforts to address deep-rooted societal inequalities will help make progress on digital poverty. This dynamic approach demands thinking big and small at the same time, and putting the needs of people first.

Game-changing shifts

Digital is not a separate domain, sector, or agenda

In our increasingly digitised world, the division between online and offline has become completely blurred. One of the tensions in dealing with digital poverty is keeping the spotlight on digital and its contribution to disadvantage, while also stressing that digital is pervasive and cannot be treated as a separate issue or programme. A focus on digital poverty, like the one taken in this report, could be misconstrued to suggest that “digital” constitutes its own domain, separate or on top of other domains of social life, such as education or work. The reality is that digital is embedded in all domains. In the words of Ofcom Chief Executive Dame Melanie Dawes, digital is not a separate sector.

The digitally excluded are still digital citizens

Everyone is part of a digital society — whether they are online or not. “Datafication” is the process by which information about people is turned into data that can be processed by computers, and this occurs behind the scenes, whether the datafied person is digitally literate or not. It is important to recognise how the digital world affects everyone – even people who are not actively online or have long periods of digital absence – especially as more of our everyday lives are digitised through the Internet of Things and Smart Cities, for example.

The digital world can be unfair by design

A growing body of literature has emerged on the issue of algorithmic bias and automated discrimination. Tackling the determinants of digital poverty will entail an awareness of the assumptions that go into the design and deployment of technology and how these can replicate and deepen certain inequalities and exclusions. Digital poverty is not just about access to connection and devices; it is also about ensuring the digitised, algorithmic systems do not perpetuate, deepen, or create new disadvantages for people. The automation of many processes and services and the invisibility of algorithmic “decisions” can create a false impression that these decisions are objective and neutral. When frontline staff in essential services rely on these outputs, it can deepen inequalities faced by already disadvantaged groups. In addition, the design of platforms and technologies can actively exclude, mislead, or disadvantage certain users. For example, websites that have not been designed to Web Content Accessibility Guidelines (WCAG) exclude assistive technology users and other disabled users.

The evidence also pointed to several key recommendations:

Digital poverty does not respect sector siloes, and neither should the recommendations for tackling it. These recommendations have implications for all sectors – Government, local authorities, industry, the private sector, the third sector, and academia or the research sector. They have also gone on to inform five specific Policy Principles, developed in consultation with the Digital Poverty Alliance community to take the agenda forward. These recommendations and principles will contribute to the Digital Poverty Alliance’s forthcoming National Delivery Plan.

  • Affordable and sustainable inclusion: Digital inclusion must be made more affordable and sustainable through both stop-gap digital inclusion initiatives, such as device distribution, and long-term community investment that recognises digital inclusion as dependent on broader (non-digital) community resilience and resources.
  • Inclusive and accessible design: Technologies, platforms, and digital services must be designed to be safe, inclusive, accessible and privacy-protecting from the outset, through participatory design – involving affected communities in the design of technologies that affect their lives – and through effective and enforceable regulation.
  • People-centred and community-embedded interventions: Digital inclusion policy, interventions, and research need to meet people where they already are by fostering and utilising existing community-based, formal, and informal spaces for inclusion, and focusing on helping people meet their own goals and objectives.
  • Skills to engage and empower: The skills needed to tackle today’s pervasive and complex digital world are more than technical competencies, like typing and internet searching. Digital literacy must treat digital as part of civic life, encompassing critical thinking and awareness of data rights, privacy, and consent.
  • Support for the whole journey: Digital inclusion needs to accommodate a shifting and increasingly complex digital landscape by supporting people throughout their entire lives and meeting them where they are in that journey – in school, on the job, through the health and care system, and more. Life circumstances and social context are important contributors to digital poverty, so this requires a focus on the offline, social dynamics of disadvantage.
  • Building the evidence base: Although a lot of research on digital exclusion and poverty exists, there are some significant gaps. Research needs to consider digital poverty in relation to social, economic, political, and health inequality, and vice versa – these issues cannot remain siloed. Data on digital poverty needs to be both quantitative (statistical) and qualitative (interview, observation, and lived experience-based), and it needs to be representative, comparable, longitudinal, and freely available to the public and research community.

And these recommendations went on to inform the Digital Poverty Alliance’s Five Policy Principles:

Policy Principle 1: Digital is a basic right. Digital is now an essential utility – and access to it should be treated as such.

Policy Principle 2: Accessing key public services online, like social security and healthcare, must be simple, safe, and meet everyone’s needs.

Policy Principle 3: Digital should fit into people’s lives, not be an additional burden — particularly for the most disadvantaged.

Policy Principle 4: Digital skills should be fundamental to education and training throughout life. Support must be provided to trusted intermediaries who have a key role in providing access to digital.

Policy Principle 5: There must be cross-sector efforts to provide free and open evidence on digital exclusion.

Opening Statement at the 2022 National Digital Conference

Presented at the Digital Leaders 2022 National Digital Conference, Understanding the Evidence Panel

Thank you very much for having me here. Today I’m mostly going to speak from my experience writing the 2022 UK Digital Poverty Evidence Review for the Digital Poverty Alliance, which is launching next week.

For the review, I consulted more than 200 sources of evidence on the five determinants of digital poverty from academia, the third sector, industry, and Government. The five determinants, as outlined by the Digital Poverty Alliance, are Devices and Connectivity, Access, Capabilities, Motivation, and Support and Participation. As you can probably imagine, these headings encompass quite a wide range of issues and supporting data, and the reason comes down to this: digital poverty is at least as much a social issue as it is a technological issue. So, to tackle it, we need to know more about people – their day-to-day lives, their hardships, the inequalities they face – and we need to build technologies that take this diversity of experiences and forms of exclusion into account.

By way of an opening statement, I’m going to highlight three things about where we need to be looking for the evidence to end digital poverty, based on the evidence review that’s coming out on Monday in full. These are top-level observations, and if you want to dig into them, obviously check out the report on Monday, and I’ll be making my entire list of source material – including things that aren’t cited in the report – available then, too.

First, we need to look beyond the longstanding absolute divide between digital haves and have-nots – the classic online/offline distinction – to focus instead on relative differences and divides. On the face of it, the UK is a highly connected country: Ofcom reports that 94% of households have internet access. But these aggregate statistics can obscure the ways that the digital divide is deepening for some people – especially people who are already disadvantaged. There are regional divides, with rural areas especially in Scotland, Wales, and Northern Ireland the least able to access decent broadband. And divides based on income, with households in the lowest socio-economic grades being more than 15 points more likely to use only a smartphone to get online compared to the highest socio-economic grades. And there are divides based on education, with those lacking formal qualifications being 2.8 times more likely to say the internet is “not for them,” according to research Simeon, who is here on the panel with us, conducted for Good Things Foundation. There are divides based on disability, with disabled adults 18 points less likely to be recent internet users according to ONS. Factors like the reliability of your connection, the speed of your connection, and the privacy of the spaces you have at your disposal to connect also all affect your experience of the digital world. 

Second, digital inequality – and therefore digital poverty – is becoming a very complex issue in the digital world today because of what scholars call ‘datafication,’ meaning the collection of information about people and the processing of that data, which now underpins most digital services. 

It’s not just about whether someone has an internet connection or an internet-enabled device anymore. It’s also about whether enough or too little data about them is being collected and whether data-driven decisions are putting them at a greater disadvantage, for instance in risk-scoring for housing or insurance. So we need to look at the evidence around issues like algorithmic bias, digital tracking and surveillance, and the commercial sale of data to understand how people are benefitting or suffering from digitisation. And all these issues are also contributing to what people think about the digital world – their motivation. People care more and more about privacy, and this affects their trust in digital technologies. Lloyds Bank reports that over half of people offline say they’re worried about their privacy. And the Centre for Data Ethics and Innovation has found that people with low digital familiarity are the most likely to be worried about data security and risks. At the same time, people generally don’t understand how their data is collected and used, or how to identify risks to their data or their access to information. Again, CDEI reports that infrequent digital users mostly say they know little or nothing about how data about them is used or collected, but in the general population less than half of people say they know these things. These issues are all factors contributing to digital poverty.

Third, and this is related to the second point, we need to explore and address the double-edged sword of inclusion. What do I mean by that? Well, digital poverty doesn’t end when people finally get online or have access to a reasonable device. It’s not a switch that gets flipped from “off” to “on,” and now people will be able to experience the positive outcomes of digitisation. People may actually be exposed to more harms due to their digital disadvantage, so we need to include evidence about what those harms are, who is most likely to be affected, and how to mitigate them. This means building digital technologies and systems that are safe, accessible, and privacy-enhancing.

In summary, the evidence we need to take into account in order to tackle digital poverty goes beyond what we’ve traditionally relied on – statistics on digital connections and skills – and now needs to encompass all the complexities of a data-driven world and how these are embedded in people’s social contexts.

Queer Rural Connections

When my friend and project partner, Tim Allsop, approached me with a concept for a research, film, and theatre exploration of queer rural life, I was thrilled. Tim, himself, comes from a rural upbringing and had begun reflecting creatively on the impact of that context on his identity and understanding of queerness – which he explores in a beautiful series of essays on Medium.

We decided to combine ethnographic and oral history interview techniques with multi-media storytelling. Tim adapted our first set of interviews into a play (The Stars are Brighter Here), and we collaborated with videographer Suzy Shepherd and musician Conor Molloy to edit some of those interviews into a documentary film, which was selected this year for the BFI Flare Festival.

A transgender woman, Lauren, applies lipstick in a mirror in this still from the film Queer Rural Connections, which features the official BFI Flare Festival logo.

In this film, we meet interviewees who live in and around rural Suffolk and represent several different generations of LGBTQIA+ experiences and activism. They reflect on how being queer and rural has changed over time, a push and pull of connection and disconnection, as social progress has meant that queerness exists more openly in the countryside.

More on this project, and the film (including opportunities to view it) coming soon… watch this space. 🙂

Your computer problem solved – in exchange for cake.

Original illustration by Gustavo Nascimento. Creative Commons BY-NC-SA.

I have been studying community networks – internet networks built, owned, and operated by local communities – since 2018. There is a global community network movement of sorts, but I started this research with one close to “home”, in the rural Northwest of England. Broadband for the Rural North (B4RN) started with a handful of tenacious rural residents, who were fed up with their lack of internet connectivity and the unfulfilled promises of England’s leading telecommunications providers to reach their rural homes. They formed a community benefit society, raised funds themselves, and built the fastest and most affordable fibre-optic network in the country, with volunteers in every village mapping the routes and digging the ditches for the cable.

During the pandemic, communities in Lancashire banded together, using human networks they had developed while building the internet network for B4RN to get supplies to people who needed them. They also ran an online “Computer Club” via Zoom to stay connected and offer technical support to B4RN members.

A paper sign at the B4RN Computer Club, reading: “B4RN Computer Club. Your problem solved – in return for cake (or biscuits, chocolates)!”

I’ve done hours and hours of interviews and observations with B4RN, and I finally put together a podcast with some of the audio I’ve collected over the years. GenderIT gave me the excuse and the opportunity, as part of a great collection on community resilience during the pandemic. In this recording, I talk mainly about the volunteer-led B4RN Computer Club – how it has evolved from the in-person Computer Club hosted every Friday at their modest headquarters in Melling, Lancashire, into an online format during the pandemic, and how the club helps bridge the digital divide by sharing knowledge with local people about how to make the most of their internet connections.

I wanted to introduce listeners to these people I’ve gotten to know over the last few years – not just their dogged commitment to helping people get online and feel confident about it, but also their humour and camaraderie. The dynamic in these Computer Club meetings shows how B4RN is no ordinary telco.

Five Essays over Five Days at a Digital Poverty Summit

I’m currently writing an evidence review on digital poverty for the Digital Poverty Alliance, a new charitable organisation dedicated to connecting and coordinating the digital poverty agenda in the UK. During this time, the Digital Poverty Alliance also asked me to attend, observe, and write a summary for each day of a digital poverty summit it had supported alongside several All-Party Parliamentary Groups related to digital issues. I’m reposting those essays here. They’re all available on the Digital Poverty Alliance blog.

Day 1: Digital Capability and Understanding – Digital Skills in the Workplace and the Future of Work

The future of work is digital, and the UK has some catching up to do if it aspires to a digitally capable workforce fit to meet that future. 

This was the predominant message from the first installment of the Digital Poverty and Inequalities Summit, hosted yesterday by the APPG for Digital Skills. Invited contributors included representatives from TechUK, FutureDotNow, Google, Harvey Nash Group, BT, City & Guilds, Community Trade Union, and Prospect. 

Despite encouraging figures indicating that there are 5.6 million more people with foundational digital skills as a result of upskilling during the pandemic, Lloyds Bank reports that 11.8 million people (36% of the workforce) still lack Essential Digital Skills for Work. Thinking ahead, the digital workplace is changing more rapidly than ever before, rendering digital skills a constantly moving target. By some estimates (published by the Confederation of British Industry and McKinsey), 90 percent of the UK workforce will need to reskill by 2030.

Several recommendations surfaced at the roundtable to address important gaps:

  • Evidence

We need to understand more fully what working life looks like for adults in the UK today, as well as the link between digital skills and all aspects of life (e.g. health, recidivism, and of course productivity), at both a personal and societal level. Questions were raised around the role of the Government’s existing significant investment in the What Works Network in generating evidence-based insights about digital across sectors, to enable more holistic policy and social impact.

  • Education

The pathways between education and work are not adequately preparing young people for a digital workplace. Formal education needs a stronger emphasis on digital skills across the whole curriculum, not just IT, informed by the needs of the employment market; and skills training needs to be available for the many people who do not pursue university education, including on-the-job training for both younger and older employees.

  • Lifelong inclusion

People constantly need new skills to be able to engage with a changing digital world. One of the places where people have the highest exposure to digital skills is in the workplace, on the job. When people fall out of employment or retire, their skills can deteriorate, so there needs to be provision for free, lifelong learning at different life stages and circumstances.

  • Prioritisation from the top

Digital skills delivery and digital skills policy is often fragmented across different sectors and at different levels (from the community to the national level). Digital capability needs to be a clear strategic national priority, communicated across government from the highest levels. As recommended by the House of Lords Covid-19 Select Committee, this should be led by the Cabinet Office and supported by respective departments, such as the Department for Education and HM Treasury to realise the benefits to UK PLC as well as for social and economic inclusion.

  • Signposting

Several speakers stated that the problem in delivering digital skills is not supply but demand. A range of digital skills training programmes exist — Learn My Way, the Lloyds Academy, Google Garage, iDEA, and the new skills boot camps were all mentioned — and one-to-one help exists in Online Centres across the country. But people often do not know where to go for help. There needs to be more cross-sector signposting of available skills resources and training for people at the first point of contact, when they need it, and follow-through to make sure they can access them. The Government has a key role to play here, as it manages many of the most important channels to the most vulnerable people, across health, education and housing, for example. (Learn more about how the Digital Poverty Alliance Community Board aims to support this.)

  • Motivation and skills go hand-in-hand

Both capability and motivation are determinants of digital poverty, and they are very closely linked. As Liz Williams from FutureDotNow put it, “If the pandemic hasn’t motivated people, what’s it going to take?” Several speakers highlighted how a lack of exposure, confusion regarding the language we use to talk about digital skills and the digital world, and/or a lack of confidence can be de-motivating for people in acquiring digital skills. We need to tackle motivation alongside skills from education to employment and beyond.

Although it is impossible to cover the full range of issues relevant to digital skills in the workplace in just one roundtable discussion, there were some important themes missing from the conversation.

  • Locating responsibility for digital skills

Discussions of digital skills in the workplace tend to take the expectations of employers and industry as the default perspective. The question therefore often starts from the same premise. What do employers need? What does the economy need? 

Of course, this is an important perspective because people do need skills that are required in the job market. However, some roundtable participants acknowledged the risk of this default point-of-view: it ignores users’ (people’s) experiences. And in doing so, it individualises the ‘problem’ of digital skills — situating the responsibility for digital skills on the individual rather than placing an equal burden on the system. What is the responsibility of the job market, or even of the designers and developers of technologies and digital systems themselves? When digital platforms and technologies are not built to be user-friendly for marginalised users (such as disabled people, people who speak English as a second language, people who have left education, or people who lack textual literacy), the experience of being online can be disheartening and de-motivating, if not discriminatory.

In research that colleagues and I conducted in public libraries, we found that people face many simple digital barriers in accessing jobs that otherwise require minimal digital skills. For example, the proliferation of online-only job applications for low-paid, hourly work blocks many digitally excluded people from even applying, and it may also de-motivate them from acquiring any further digital skills.

Therefore, additional important questions should include: whose responsibility are digital skills and literacy, and how can the job market be made less alienating for people experiencing digital exclusion? This is a shared responsibility across Government, business, and the tech sector.

  • Critical and abstract thinking skills

In our increasingly complex digital world, many of the digital skills needed to thrive not only in the workplace but in everyday life are not technical skills; they are critical thinking and abstract problem solving skills. And they diverge in important ways from the problem solving skills outlined in the Essential Digital Skills framework. 

Ofcom has identified some of these issues, reporting that people are increasingly unlikely to validate online information sources, have limited understanding of the ways companies collect and use personal data, and fail to accurately identify paid-for online advertising. The Me and My Big Data project found that many people in the UK lack data literacy and feel disempowered in the way their data is extracted and used. And in my own research, I have found that digitally excluded users often struggle most with constructing an abstract set of steps in their mind to get to a digital end-goal. Although they may have basic competencies, like logging into Wifi, this abstract thinking is a key digital barrier.

Therefore, other important questions should be: how can we cultivate both technical and critical thinking skills among even the most basic digital technology users? Can/should the digital world be designed to require less abstract thinking in the interest of becoming more inclusive?

  • Public participation

Both of these themes point to the need for greater public participation in the design of the digital workplace, digital technologies and systems, and digital skills learning programmes. There is a notable lack of lived experience perspectives — the views of ordinary people experiencing compound forms of inequality — in high level conversations about digital skills. Tackling the motivation side of the capability equation will involve not only identifying what skills people need, but crucially what skills they want. We need diverse voices in the room from, for instance, the disabled community, in order to meet people’s needs first.

The recommendations from the roundtables will inform a forthcoming Digital Poverty Evidence Review in 2022 for the Digital Poverty Alliance, in which I will explore these further themes in greater depth, drawing on evidence from academia, industry, Government and the third sector. Read the interim report here.

If you have a single suggestion about what Government could do that would make a difference in the area of digital capability, e-mail:

This roundtable was hosted by the APPG for Digital Skills, in collaboration with the APPG Data Poverty, APPG PICTFOR and supported by the Digital Poverty Alliance.

Day 2: Data Poverty

If there is one digital exclusion issue that the COVID-19 pandemic has spotlighted like never before, it is data poverty. And now that the light has been shed, there will be no looking away.

Data poverty was the topic of the second day of the Digital Poverty and Inequalities Summit hosted by a cross-party coalition of All-Party Parliamentary Groups and MPs and supported by the Digital Poverty Alliance. The relatively new APPG on Data Poverty, which hosted yesterday’s roundtable, is a direct response to the urgent realisation, as one speaker put it, that “the digital divide comes with exclusion from society more generally.” 

Last year’s national lockdowns saw schools, workplaces, and public spaces close to prevent the spread of the coronavirus in a sharp disruption to everyday rhythms that suddenly revealed how many people were without the basic connectivity needed to continue life, let alone level up — online. According to Citizens Advice, 2.5 million people have fallen behind on broadband bills during the pandemic. Ofcom reports that approximately 9 percent of households with children lacked access to a laptop, desktop, or tablet. Around 17 percent did not have consistent access to a suitable device for their online home-learning, which increased to 27 percent of children from households classed as most financially vulnerable. The recent Nominet Digital Youth Index finds that a third of young people do not have broadband at home. Even among those with home broadband, 13 percent say their connection is not good enough for everyday tasks and 52 percent say there are things they can’t do online due to poor connectivity. A deluge of media coverage and personal stories powerfully illustrated how many British families have faced impossible choices between necessities during the pandemic: “pay the wifi or feed the children”. As the UN Special Rapporteur on Extreme Poverty articulated (in 2019), in a pervasively digital world, the digital divide is a question of basic human rights.

But the roundtable speakers, who represented organisations including The Good Things Foundation, Jisc, BT, Glide, Vodafone, and Nominet, all said that this was a problem well known to them before the pandemic. The cost and accessibility of connectivity and devices is a determinant of digital poverty. According to Lloyds Bank, nearly a third of those offline said that cheaper costs would encourage them to use the internet. Ofcom finds that 10 percent of internet users go online with a smartphone only, rising to 18 percent among those in socio-economic group DE. These issues are closely linked; when people do not have or cannot afford a home broadband connection and rely on mobile internet instead, they are paying for more expensive data.

The entangled nature of data poverty (how much is about access? affordability? devices?) makes it difficult to define. And the definition often hinges on what a minimally acceptable standard would look like. The Good Things Foundation says that means data that is cheap, handy (easy to access), enough (in terms of speed and quantity), safe (to ensure privacy and protect users from harms), and suitable (appropriate for an individual’s life circumstances). Nesta identifies data poverty as an inability to engage fully in the online world due to barriers including low income, not being able to get a data contract, lack of privacy, and local infrastructure.

But the roundtable discussion demonstrated that precise definitions are less important than understanding the vectors of the problem. Data poverty — like poverty more broadly — is a product and producer of both resource and social exclusion. It is contextual, embedded in individual circumstances. And it is relative, meaning that the benchmark of exclusion changes as the nature of digital technology changes. 

Uniting around the urgency of the issue is the imperative, as captured in the key takeaways from the session:

  • Government must take a leadership role

Eradicating digital poverty cannot be achieved in isolation, and it cannot be accomplished in siloes. Government needs to lead national efforts to tackle data poverty. Despite the rapid rollout of many innovative schemes to fill an emergency gap during the pandemic (see the next point), many speakers said that people often do not know about the schemes that are available. In part, this is due to the piecemeal and fragmented array of partnerships and programmes, which are necessarily led by industry and the third sector. When there is market failure, as there is in this case, the Government must step in. The other part is the user journey, with attendees noting that where there are low cost offers, these are often too complex or hard to find for the people they aim to support. This is reflected in low take-up numbers. 

One speaker remarked, “Sometimes it feels like the Government is just standing back and saying, ‘oh, thank you very much.’” Data poverty impacts society and citizenship, yet it is non-governmental sectors that are having to step in and bridge the gaps — out of sheer public need. Government can do more, and there are many people and organisations who want to help.

Some recommendations included zero-rating essential services and implementing a universal service levy on the companies that reap the greatest reward from digital engagement – many of which have saved billions through digital transformation, savings that have not been passed on to their customers. The Government has saved, too, and these windfalls should be re-invested in digital equity and inclusion. Another recommendation is to impose a social tariff on all operators — an initiative BT has already undertaken. As community members of the Digital Poverty Alliance pointed out, at the very least the Government and big business can signpost to available affordability schemes, subsidise social broadband tariffs, impose regulation requiring minimum standards of connectivity, offer help with paying bills, and help to identify the people most in need through their existing channels.

  • We need long-term solutions that are sustainable beyond the pandemic

Industry and the third sector stepped up to meet public need during the pandemic with stop-gap measures that helped hundreds of thousands of people. To name just a few: BT, Openreach, Virgin Media, Sky, TalkTalk, O2, Vodafone, Three, Hyperoptic, Gigaclear, and KCOM took measures to lift data allowance caps on their broadband services; DevicesDotNow and others distributed donated and refurbished devices to families in need; and the Department for Education partnered with telecom companies to provide free data to disadvantaged families through schools.

But there is a clear need to develop long-term solutions to data poverty that are sustainable beyond the crisis moment. For example, what happens to the group of children next year who enter school without home access, or to the family whose limited-time free offer of connectivity runs out so they must again choose between food and connectivity? According to the Association of Colleges, 36 percent of colleges in England do not have sufficient access available, even in school. If industry and the third sector are meant to continue support for disadvantaged families and individuals, there must be a long-term plan in place to fund these initiatives and to address the multiple factors that contribute to digital poverty, including access to adequate devices and consumer choice (the ability to choose among fairly priced competitive internet service providers).

  • Data poverty is poverty

A clear theme that emerged in the roundtable was the intersection between data poverty and socio-economic deprivation. Although data poverty is a relatively new concept, it is not distinct from poverty writ large. Rather, the digital divide is a determinant of poverty, just like the inability to afford heating or inadequate nutrition. People who lack digital skills also often pay more for utilities and earn less per year. In short, data poverty contributes to the poverty premium. And in the midst of our most profound modern health crisis, research increasingly shows that digital exclusion is a determinant of health outcomes.

For these reasons, it is important to consider data poverty in the same terms in which we consider other forms of deprivation. And we should ask: what is the minimum standard needed to survive in our digital world? Projects like the newly minted Minimum Digital Living Standard research network will aim to address this issue, recognising that poverty is often defined by context as much as by simple thresholds like the speed of a connection or the availability of a single device. When families need to share devices, for instance, a limited resource winds up spread thinly across individuals’ needs.

  • There is a need to more accurately identify need

Data poverty is two-fold: it is about getting people access to the data (internet service) they need, but on the delivery side, it is also about gathering better data to locate the need. 

While there is a clear willingness to deliver more affordable access and devices to people who need them, there is a distinct gap in evidence about who those people are and what mechanisms lead to digital poverty. Here, again, is a clear role for the Government, which has the ability to signpost to those with a registered disability, jobseekers, those on free school meals, those in poor health, carers, those on low income, and those in receipt of Universal Credit, for example. These have been key vulnerable groups identified during the pandemic; we need to ensure that the pipeline of information from government to service delivery stays open and that existing channels to these people are shared between government departments so that people’s entire needs are met.

  • Where people have access is as important as other factors

It is easy to overlook the important qualitative differences in access to data that contribute to “data poverty.” For example, public internet access points have long been part of strategies for digital inclusion. The Government’s 2017 Digital Strategy called libraries the “go-to providers” of digital inclusion, and public libraries are, in fact, vitally important access points for people living in data poverty. (My own research with colleagues at the University of Oxford showed that 29% of library computer users in Oxfordshire had neither computers nor internet access at home.)

But public access is not qualitatively the same as access at home, and public wifi cannot be considered an adequate solution for people to be digitally included. Not only do people who rely on public wifi have fewer opportunities to acquire and practise digital skills, but they can also be subjected to more surveillance and tracking on public networks. Certain tasks, like attending court hearings and online banking, are more difficult and risky in public internet spaces — and it is often marginalised people who are forced to conduct their private (online) lives in public. Therefore, priority must be placed on at-home or mobile internet suitable to individuals’ needs.

I think at least one further point deserves attention in a discussion of digital poverty. This is the related, downstream impact of data poverty on further digital exclusion. In particular, this is the problem of people living in data poverty becoming “missing data.” One attendee mentioned in the Zoom chat that many people are unable to prove their identity to digital ID systems. (This was a criticism levelled by the National Audit Office at the Verify system for Universal Credit.) The issue of datafied invisibility is a nuanced aspect of data poverty: people become increasingly invisible to digital systems when they do not leave data trails, and they cannot leave data trails when they cannot access or afford the internet.

Avoiding these feedback loops in which the poor have inadequate access to the internet and are further penalised for their inadequate access — by high utility bills, targeted scams, and failed credit checks, etc. — should be of paramount concern to society, the business sector, and certainly to Government.

These and other issues related to digital poverty along with policy recommendations that have emerged from the #DPIS21 meetings will inform a forthcoming Digital Poverty Evidence Review 2022 for the Digital Poverty Alliance. Read the interim report here.

This roundtable was hosted by the APPG for Data Poverty, in collaboration with the APPG Digital Skills, APPG PICTFOR and supported by the Digital Poverty Alliance.

Day 3: Research and Development – How Can the Tech Sector Drive Innovation in the UK Economy and Help Close the Digital Divide?

Both the title and discussion of yesterday’s installment of the Digital Poverty and Inequalities Summit left open the question of the relationship between tech innovation and the digital divide: is the question whether it is possible for the tech sector to both drive innovation and close the digital divide (i.e. are these ambitions at odds with one another)? Or, is it whether tech sector-driven innovations in the UK economy could possibly close the digital divide (i.e. is innovation the answer to inequality)? 

Depending on how one interprets the question, there are two potential debates and two sets of policy recommendations that might emerge from the provocation. The November 17th roundtable was hosted by the APPG PICTFOR and supported by a cross-party group of MPs and the Digital Poverty Alliance. Speakers included MPs from both parties and a representative from the Telecoms Supply Chain Diversification Advisory Council, and there were also many contributions from attendees. One of the invited speakers framed the discussion by asking, “what can the tech sector do?” The speaker pointed out that this marked a departure from asking — as is often the case in parliamentary circles — “what can Government do?”

And it is certainly a critical question. What can the tech sector do? To put it succinctly: arguably, the tech sector has done a lot. And, arguably, it could do a great deal more.

During the pandemic, collaboration between the tech sector, local charities, and Government helped mitigate some of the severe disparities in digital access and skills that were damaging people’s lives. I mentioned a number of these programmes in the blog about #DPIS21 Day 2 on Data Poverty — from device donation schemes to free data packages. Roundtable speakers also brought up the many digital skills bootcamps and apprenticeship programmes spearheaded by companies — Barclays Digital Eagles, the Lloyds Bank Academy, Google Garage, and the Amazon apprenticeship scheme. The tech sector is also a major sponsor of digital inclusion initiatives more broadly — from research conducted by charities to afterschool code clubs to APPGs themselves. However, this smattering of fragmented interventions can result in incomplete user journeys, riddled with too many opportunities for vulnerable people to slip through the cracks. Still, it is clear that the tech sector is doing a lot.

It can also do more. One speaker described the “interdependence of innovation and closing the digital divide.” Transformative innovation is contingent on digital and social equity. This means access and accessibility — not just to connections and devices but to the tech sector itself. According to the Wise Campaign, just 16.7 percent of ICT professionals are women. Tech Nation reports that women hold only 22 percent of tech directorships. And a 2017 report by PwC finds that just 3 percent of women say that technology would be their first choice for a career. There is also a 20-point gap between men and women who study STEM in school. These figures point to a societal responsibility across all sectors — and especially those that benefit and create profit from the digital world — to address the systemic inequalities that make the digital world unfair and uncomfortable for many marginalised people and also make it hard for marginalised people to participate in building that world.

Ultimately, there were two questions to address at the roundtable and two resulting categories of themes that emerged:

Driving Innovation

The discussion on innovation centred on education and skills. Industry needs a more digitally capable workforce, with stronger tech skills coming out of formal education, in order to staff the tech sector. In fact, digital skills are needed across all sectors, with at least 82% of online advertised openings across the UK requiring digital skills and paying around 29% more than those that do not. Beyond technical competencies, one speaker pointed out that a future workforce also needs to be adaptable, as the tech landscape changes constantly.

There were strong resonances in this part of the discussion with themes from the roundtable on capabilities, and the issue of adaptability points to the need for creativity and abstract thinking skills alongside technical competences.

In addition, speakers mentioned the need for diversity in the tech sector, articulating a desire to encourage young people from underrepresented backgrounds to consider tech careers. Not only is the participation of women, non-binary people, and BAME individuals critical to achieving social equality, but their leadership in the sector can also help ensure products and services meet the needs of the whole population.

However, the conversation stopped short of fully engaging with the question of digital exclusion and the negative feedback loop between digital poverty and employment prospects. The Nominet Digital Youth Index reports that “Tech jobs are least appealing to those most impacted by inadequate tech,” with men and those on higher incomes more likely to consider tech a viable career. Motivation was not mentioned, but it is also key here. A lack of interest in technology or the tech sector can be rooted in many intersectional factors contributing to digital and social exclusion — including negative experiences online like harassment and bullying. According to the same 2017 PwC survey cited above, 83 percent of young women said that they actively look for employers that prioritise diversity, equality, and inclusion.

The discussion highlighted the importance of focusing on the small — local and regional success stories, and the role of small startup companies in the tech ecosystem. Supporting Combined Authorities that drive innovation in their regions as well as small businesses can not only open up opportunities for innovation but also encourage workers to consider working locally and in smaller companies.

Finally, the hunger and need for collaboration across sectors (including Government) and internationally emerged as a prominent theme. The digital economy is a global one, so it will be vital to learn lessons from other countries and build bridges beyond borders at a time when Britain is having to renegotiate its relationship with even its closest economic partners.

Closing the Digital Divide

On closing the digital divide, the roundtable discussion focussed mainly on infrastructure to deliver connectivity. In 2021 it is unacceptable that parts of the UK are entirely without internet connections, particularly in rural areas. Recommendations on this topic included the need for the telecom sector to be completely transparent about where there is market failure (that is, an area that is not commercially viable to connect) so that Government can step in or assist. 

And, as one speaker put it, the policy cannot be “connect and forget.” Connectivity must come with long-term, community-embedded digital and social inclusion in the form of robust digital education in schools and local resources on digital skills.

The rural-urban digital divide is still an important consideration in the UK, where of the roughly 2% of properties in England unable to get even 10 Mbit/s connections, over 50% are rural. Although it did not get a mention at the roundtable, Government initiatives like the Rural Gigabit Voucher programme have helped telecom operators extend coverage to harder-to-reach areas, including small and community-owned internet service providers (ISPs). For the last several years, I have done research in rural communities that are working to get internet connections, and they often face bureaucratic barriers (the process of applying for vouchers requires whole departments for many ISPs) or severe delays (when local councils give a tender to a provider that will not build within the year). Despite infrastructure-sharing regulations that allow multiple operators to use existing passive networks, another issue in infrastructure rollout is overbuild, where telecom companies install more infrastructure where it already exists rather than extending infrastructure to new areas. These are important issues at the intersection of the tech sector and Government, which deserve discussion in a forum on the role of industry in closing the digital divide.

There is a tendency for conversations about the tech industry to veer toward what academics call “technological solutionism,” meaning that technology is seen as the answer to social problems. Forums like these throw up an important question, as the tech sector steps up to fill some gaps in digital inclusion: is tech solutionism inevitable when we leave the solutions to the tech sector? Almost in response to this unspoken question, a final big theme from the roundtable was the role of Government. Echoing the first two days of the Summit, discussions pointed to the need for Government to set a clear agenda and to help the tech sector with the kind of social transformation — of education, for instance — needed to address both inclusion and innovation. 

In my view, the conversation skirted some of the most pressing issues concerning the tech sector’s role and responsibility in relation to the digital divide (which encompasses many more issues of exclusion beyond connectivity alone). For example, there is the issue of technology design — and the need to centre the experiences of disabled users, second-language speakers, the elderly, people with cognitive differences, and more. There is also the issue of how the tech sector contributes to deepening disadvantage for some people — through surveillance and risk profiling, for instance. And there is the role of the tech sector in mitigating online harms — including both the content people access online and how their data is extracted and repurposed.

Of course, the tech sector is a broad category that could conceivably include everything from online platforms or telecom companies to hardware manufacturers or infrastructure suppliers. It is a challenge to unpack the role of such a diverse sector, let alone in a single roundtable. By the end of the discussion, though, everyone seemed to agree on one thing: technology is likely part of the solution to the digital divide, but it is certainly not all of it. 

“We all want to help,” said the final speaker, an attendee representing a tech SME. There is an unmistakable drive within the tech sector to close the digital divide and end digital poverty; we need a collaborative and critical cross-sector community to accomplish it. This is a space that the Digital Poverty Alliance hopes to occupy, as a convenor of dialogue and collaborations. As a member of the Digital Poverty Alliance community, I see these roundtables as crucial starting points for updating the agenda around digital poverty, and the recommendations and gaps that emerge will inform the UK Digital Poverty Evidence Review 2022. 

Read the interim evidence review here.

This roundtable was hosted by the APPG PICTFOR, in collaboration with the APPG Digital Skills, APPG Data Poverty and supported by the Digital Poverty Alliance.

Day 4: Education and the Digital Divide

“This is about the new normal,” declared a teachers’ union member at yesterday’s Digital Poverty and Inequalities Summit, which tackled the issue of education and the digital divide. The comment succinctly captured a chorus of personal experience and insight that reverberated with real feeling through the discussion. As the title of the roundtable itself suggested, this “new normal” arguably encompasses both the reality of blended online and offline learning that will endure beyond the COVID-19 pandemic and the realisation of the profound digital inequalities that are exacerbating an education gap for already-disadvantaged students. 

The discussion on education rather fittingly focused on what we could learn from the pandemic moment to inform a more digitally and educationally equitable future. Speakers universally shared a concern and commitment to apply lessons about what worked and what failed to future strategic planning about technology in education. As one speaker put it, the worry is that because this period has been so challenging, educators will now “walk away and just say ‘thank goodness’.” 

But none of the roundtable contributors seemed inclined to walk away. Speakers included three former Secretaries of State for Education or Children, MPs chairing other APPGs for Social Mobility and Education Technology, the Shadow Minister for Schools, the General Secretaries of the NASUWT and NEU, senior representatives of the National Association of Head Teachers, Ofsted, Teach First, UNICEF, BESA, the Learning Foundation, Times Higher Education, and Digital Unite. Several speakers recounted first-hand experiences of families asking for help accessing devices and connectivity during lockdowns — and many receiving it through schemes like the Department for Education’s Get Help With Technology programme. And there was much praise for teachers and schools, as well as community initiatives, like local football clubs, that stepped up to provide digital resources to children in need. 

It was clear that the pandemic exposed the scale of a longstanding problem: today, digital exclusion is a key contributor to social disadvantage. According to a report by the Sutton Trust, in the first week of the January 2021 lockdown, just 10 percent of teachers said their students had adequate access to a device for remote learning. And Ofcom estimates that more than 1.7 million children do not have access to a laptop, desktop, or tablet at home. 

And the disparities were greatest for the most disadvantaged; a UCL survey found that one in five children receiving free school meals had no computer access at home. A survey by Teach First reported that 84 percent of schools with the poorest students did not have enough devices and internet access to ensure they could keep learning.

In considering how we learn from the crisis and adapt to a new normal, several forward-looking themes emerged over the course of the discussion:

Teachers need support and training to make the most of digital technologies for learning.

“Technology is a tool, not an end in itself” was a repeated refrain in the roundtable. Strategic thinking around a digital education needs to focus on how teachers and technology can work together to deliver a better education — which also means a fairer and more equitable educational experience. There were many anecdotal lessons learned during the pandemic about best practice in online and hybrid learning. For example, one speaker pointed out that “there was a quiet accrual of more mundane uses of technology,” citing online vocabulary quizzes for foreign languages as an example. Although the “digital classroom” often conjures images of smart whiteboards and virtual reality headsets, there are fairly simple digital tools available to teachers that are under-utilised for engaging students in traditional classroom settings.

But teachers need training to make the most of digital technologies. Several speakers were part of the education system when information technology (IT) was a new frontier, and one recalled how “tech was used by some and feared by others,” which led to different learning experiences for students in the classroom. Many nodding heads in my Zoom grid seemed to indicate that this is still a relevant issue. Another speaker pointed out that young aspiring teachers are often assumed to have digital skills, and as a result, digital skills are not included in teacher training. But it will be crucial to develop pedagogy around online and hybrid learning, with a distinct focus on how to integrate digital literacies and technologies into teaching. Speakers raised open questions, such as “what is tech good at, and what are people good at, and how can they work together?” Or, “when is face-to-face teaching essential and when could online learning be more effective?” 

I would venture to suggest that behind these important questions about best practice and pedagogy is a need for immediate research on learning experiences during the pandemic with the people who delivered them: teachers. This research must include deep, thoughtful qualitative insights in order to develop better teacher training and equip teachers with strategies that work, and it needs to be done now — while the learning is fresh.

Education extends into the home.

The digital divide in education reflects a societal divide, and we cannot fix one without addressing the other. Schools are often expected to compensate for lack of support at home for children — they are meant to be great levelers. But speaker after speaker pointed out how schools cannot do this leveling alone. There is an educational continuum between school and the home and community, so thinking about education means thinking about all of these domains at once. 

The pandemic blurred the lines between school and home, drawing attention to the ways in which different private environments impact learning. For example, some children have quiet, private spaces to study, while others have to share devices and space, contending with constant distractions and demands on their time and attention. Roundtable speakers pointed out that this has always been the case; online learning during the pandemic just made these differences more obvious. 

As Alicia Blum-Ross and Sonia Livingstone write in their book based on survey data and qualitative interviews, Parenting for a Digital Future, “although both better-off and poorer parents try to use technology to confer advantage, they are very differently positioned to do so.” Socio-economic differences are especially pronounced in the home, where children are influenced by the dynamics of family and space. One speaker recounted how some parents on low incomes needed to borrow their children’s devices during the pandemic in order to work or search for jobs. 

And digital skills are also an issue among family members. “We didn’t train the parents,” one former Secretary of State for Education said, and this was a major oversight in the rollout of IT in schools. Motivation to engage with the digital world has a lot to do with context, others pointed out. After all, we know from national surveys, including Ofcom and Lloyds Bank, that people are most comfortable learning and asking for help with digital skills from people they trust, like friends and family. And with a reported increase of nearly 34 percent in homeschooling since last year, addressing the digital divide in education cannot just stop at the school gates; it has to extend to parents, who need access to free, lifelong digital skills training.

We tend to focus on the digital divide, but technology offers opportunities, too.

The expansion of digitisation and digital technologies in schools has worsened inequality for many disadvantaged students, but speakers also painted a more optimistic picture about how technology offers opportunities to make education fairer and more inclusive. Digital technologies can help to engage students with different learning styles and needs, and they can also enable students to learn in more individualised ways than would be possible in a traditional, analogue classroom. The potential to adapt course material to different ability levels offers exciting possibilities for education that meets students where they are and accommodates cognitive diversity.

In addition, digital technologies can help improve teacher productivity and enable teachers to more effectively share knowledge. Despite an acknowledgement that teachers worked harder during the pandemic in a hybrid format than perhaps ever before, several speakers mentioned the role of technology in potentially reducing teacher workload by streamlining administrative tasks, including assessments. One learning from the pandemic was that online options for some educational engagements can be equalising; online parents’ evenings allowed some working parents to engage with teachers for the first time because they could do so from home, rather than traveling to the school. 

There was also enthusiasm for innovations that could lead to what we might call the “datafied classroom” — the use of data collection and analytics to influence student outcomes. One speaker mentioned the potential of machine learning to track students’ performance in class to help identify individual learning challenges that would otherwise go unseen. Teachers could be notified by digital systems if students are struggling or bored. “This is the direction we should be moving in,” the speaker said, adding that down the line there is the potential that a young person’s progress could be constantly monitored, ultimately replacing the need for exams. “That’s not a threat; it’s an opportunity.”

Listening to this roundtable discussion, I was surprised to hear such unmitigated optimism about using datafied predictions in education, especially following the highly controversial Ofqual algorithm that predicted students’ A-level results in 2020 and demonstrated biases that devastated many students’ university prospects and prompted public protests. Any discussion of student data and algorithmic processes in education should include at least a nod toward the equality and privacy implications of such an extensive proposed regime of surveillance and assessment. The Ada Lovelace Institute last year published a blog outlining what safeguards should be in place following the Ofqual debacle, and has also published resources on algorithmic accountability that can inform public policy. Although, as this theme in the discussion highlights, there are opportunities for technology to improve classroom experiences, at this stage no technological solution should be posited without critical reflection on potential harms and downstream impacts on inequality.

We need to involve children in decisions about digital education and tools.

The final and perhaps most important theme of the roundtable was on “learning from the experts,” as one speaker put it. The experts, in this case, are children and teachers themselves. Taking a children’s rights approach to education and the digital divide means not only addressing the whole spectrum of children’s wellbeing in education (from access to devices to critical thinking skills for dealing with the digital world), but it also requires that children are consulted in the design and deployment of technologies for learning. Designing technologies with and not just for children can result in better digital consent policies and more inclusive, accessible tools that meet the needs of people with physical or cognitive disabilities, language barriers, and more.

Academic research — by danah boyd and Sonia Livingstone in particular — has long argued for including children as decision-makers in digital policy. And the ICO has issued some guidance on how to engage with children in the design of technology, recognising the importance of user-driven design. Still, the narrative around children often focuses on protection rather than empowerment. But the equitable, fair, and just digital future we want must be built with children’s rights at the core.

Even in an hour and a half-long roundtable, with many distinguished and informed speakers, there were topics left untouched that deserve a mention here. For example, the discussion did not address digital inequality in higher education (a Jisc survey reports that 63% of higher education students had problems with wifi connectivity, mobile data costs, or access to suitable devices and spaces to study during the pandemic). Nor did it engage with the role of algorithms and big data in education — which, as scholars Elinor Carmi and Simeon Yates argue, must include education about algorithms and big data. 

To me, the most notable omission was the topic of “EdTech” — technology and platforms marketed specifically for educational settings, which has seen accelerated uptake during the pandemic. The language quizzes mentioned by a speaker (and referenced above in this blog) are an example. In many ways, EdTech is revolutionising learning in positive ways, helping teachers mark work faster and collaborate with colleagues and helping to engage students with multimedia and interactive content. But the adoption of EdTech deserves more circumspection. 

Technologies for learning are often integrated into the classroom without due consideration of children’s data or privacy and the long-term implications for who has power and influence in an educational system (increasingly, power concentrates in the hands of EdTech companies, which build the technologies and capitalise on collecting and analysing student data). EdTech makes a lot of things more convenient, but the tyranny of convenience (as legal scholar and author Tim Wu put it) is that it masks the choices that tech companies are making about how we live, work, learn, and play. The much-debated and -anticipated Online Safety Bill, which holds tech companies accountable for how their products are designed and marketed for young users, does not specifically apply to EdTech. As Sonia Livingstone has written, “Schools have few mechanisms, and insufficient resources, to hold EdTech companies accountable for the processing of children’s data. EdTech providers, on the other hand, have considerable latitude to interpret the law, and to access children in real time learning to test and develop their products.” 

And this is an even bigger issue, now that the digital divide is front-and-centre in our debates about the future of education. Some children — particularly the most disadvantaged — will rely on school-issued digital devices and free digital services and platforms in school and at home. If those devices and platforms are designed to track students’ activities, those students can be perpetually surveilled, entrenching inequalities in surveillance and policing of behaviour for the most marginalised. The issues of the school-home continuum and children’s rights are clearly implicated in the rollout of EdTech in schools, so it needs to be on the agenda for tackling the digital divide.

Acknowledging the interconnectedness of the various issues that arose at the roundtable, speakers championed the goal of working together. The topic of education is a particularly personal one. Speakers regularly remarked on how they were coming to the issue not only as a professional, but also as a parent. With the will to learn the lessons of the pandemic, all that remains is to ensure that we engage with the full complexity of those lessons — the triumphs and failures, the visionary innovations and the blind spots. “All the puzzle pieces are there,” said a speaker representing the Digital Poverty Alliance, “they just need to be put together.” 

This roundtable was hosted by the APPG Digital Skills, in collaboration with the APPG Data Poverty and APPG PICTFOR and supported by the Digital Poverty Alliance.

Day 5: Beating the Barriers – Online Safety, Security, and Accessibility

In September 2020 the Government announced a new National Data Strategy, which aspired to “make the UK the safest place in the world to go online.” Safety was at the heart of this strategy for tech innovation and growth, and its legislative manifestation is the draft Online Safety Bill, which sets out a new regulatory regime to tackle harmful content online by placing a duty of care on certain internet service providers that allow users to upload content and search the internet. Online safety, security, and accessibility were the focus of the Digital Poverty and Inequalities Summit on Wednesday, and the bill was centre stage.

Roundtable speakers and contributors included members of the Commons and Lords involved in drafting or evaluating the bill, representatives of Barnardo’s children’s charity, the Children’s Media Centre, TikTok, the Centre for Countering Digital Hate, and the NSPCC to name a few. Unlike the other summit roundtables, this one was distinctly more focused — with a piece of draft legislation in the pipeline, there is a clear goal with potential for impact on how people experience the internet. I was struck by how this fact rendered the discussion more consequential but perhaps less capacious. With the country on the cusp of legislation that would protect people from a panoply of online harms, harmful but elusive issues like inequality, bias, and discrimination received hardly a mention. 

That said, the Online Safety Bill has been heralded as groundbreaking, even revolutionary, with a great deal of potential to set a benchmark that more of the world will follow. Undoubtedly the anticipation around this bill is in part because it is arriving “late” in the evolution of the internet and online platforms. One speaker called it “a good late step.” It is also in part because its present arrival opens up the potential for it to be a repository of our regulatory hopes and dreams about how to make the internet better — to fix what has seemingly gone wrong. But if it is to be effective, the bill must rise above the specific grievances that make it urgent and necessary — to tackle the systemic and system-level issues that underpin the worst abuses online. “If too much is loaded onto this legislation,” one speaker warned, “it will fall under its own weight.”

Although perhaps contributing to that burden, the discussion centred on several issues that speakers hoped the bill would ultimately address:

  • The Online Safety Bill must do more to address the most egregious harms to children, especially exposure to pornography and grooming.

“Childhood lasts a lifetime,” one roundtable speaker remarked. And it was clear that most of the contributors to the discussion viewed the protection of children as a primary concern for the bill. Speakers see the legislation as a chance to achieve what the 2017 Digital Economy Act has failed to do: implement robust age verification for pornographic content and reduce child exposure to sexual content and sexual exploitation, such as grooming. Behind these concerns is a broader anxiety about the long-term social impact that these experiences can have on behaviour and wellbeing. And negative online experiences are arguably a bigger issue, encompassing a whole range of social and socialising experiences. According to The Wireless Report, four out of every ten young people have been subject to online abuse, and 25 percent of young people have received an unwanted sexual message online. Ofcom reports that more than half of 12 to 15 year-olds have had a “negative” experience online, such as bullying, and 95 percent of 12 to 15s who use social media and messaging apps said they felt people were mean or unkind to one another online.

Roundtable contributors also raised the issue of encryption and the potential of end-to-end encryption on social media platforms in particular to hide the activities of child abusers. There are no simple answers to these thorny issues. Encryption can hide illegal or harmful activities, but it can also protect privacy, activism, and free speech. So-called “back doors” that would allow law enforcement to access certain encrypted content also open up the potential for others to exploit those security weaknesses. Although some speakers returned to the “duty of care” outlined in the draft bill to argue that platforms will have to prove that encryption, in combination with other design choices on platforms, is consistent with a duty of care to users, few of the issues that sit at the uncomfortable nexus between safety (or its foil, harm) and security are black-and-white. Flexibility in approach will likely be the bill’s ultimate strength, but it inherently leaves open many questions that people want answers to. Really, what people want is for tech companies to have to answer to them.

  • Ofcom must be adequately supported to take on its new power and responsibility under the bill.

Another theme from the discussion was the need for Ofcom to be resourced effectively to exercise its new powers under the draft legislation and to shoulder its new regulatory responsibility. Indeed, this is a whole new frontier for the regulator presently tasked with overseeing the telecoms market. The Ofcom chief executive has expressed some trepidation about the sheer volume of user complaints the regulator may face and the legal battles likely to be fought with tech companies that fail to comply with the new regulations. Secretary of State for Digital, Culture, Media and Sport Nadine Dorries wants criminal liability for tech company directors, setting Ofcom up for a confrontation with the likes of Mark Zuckerberg.

Bill supporters at the roundtable were quick to offer reassurance that Ofcom would be equipped to handle its new duties, but it is understandable that questions remain. The multi-billion dollar platforms in the eye of the storm have struggled (and often failed) to handle reported abuses on their own sites, which host billions of users speaking different languages and with different cultural reference points. Critics of big tech will argue (probably rightly) that those failures are largely down to lack of will; harmful content still makes money. But there are other factors, too. They are also due to an egregious lack of local, contextual knowledge — essential for tackling harms, which are socially constructed and embedded. And due to scale — companies have employed both human moderators and algorithms in an effort to manage the volume of content and complaints, and it is still not enough. Ofcom has reason to be concerned. And therefore, the bill’s drafters do, too. 

I was left reflecting on the important questions we still need to ask about the aspirational outcomes the bill is meant to achieve. Goals like transparency and accountability will be most impactful at the system level in taking companies to task, but what about user empowerment and agency? Big tech might think about users as a stream of data points, but this bill has the potential to treat them like individuals — human beings with a context as well as a complaint — and that would be truly revolutionary. So, to return to this theme from the roundtable, is Ofcom prepared to perform that role? 

  • A legislated approach to online harms must be adaptive and focused on the systems level in order to be future-facing.

The last theme worth drawing out from the roundtable discussion was the issue of future-proofing the bill. “Future-proof” is a common expression in technology development and deployment, but I think it is not quite the right way to frame the concept. It would be better (albeit less catchy) to conceptualise it as “uncertainty-aware.” Coupled with the almost universally shared feeling that this bill might be too little, too late in a digital ecosystem that has developed largely without the kind of toothy government regulation that can bite, there was also a palpable feeling in this Zoom call of wanting to get it right this time: getting ahead of the game, rather than playing catch-up later on. 

One roundtable contributor said, “When rules are too prescriptive, they’re easy to get around.” The solution, according to multiple contributors at the roundtable, will be to ensure the bill can be adapted to yet-unanticipated future scenarios. It must comprehensively address and define (to some extent) the dangers of the internet as we know it today, but it must also leave open the possibility that new powers and responsibilities may need to be bestowed on the regulatory process. It is important to recognise that this uncertainty-aware approach is not the child of necessity, born of the digital age. It is how laws are often made (and changed). In fact, one speaker explained that the idea behind the bill is not to do something radically new but to “level the field between online and other environments.” As media scholars have long argued, while the digital age has ushered in unprecedented technological and societal changes, it is overly sensational to treat it as entirely new and unfamiliar.    

What is difficult, I would argue, in the drafting of this bill is that there are such clear “perpetrators” of harm exacerbation and perpetuation: digital platform companies (Facebook and Google, for instance). This is what happens when we outsource our democracy to undemocratic companies in Silicon Valley, one speaker said. They are in our mind’s eye when we think about how to make this law work. And that is helpful on the one hand because it can concretise certain concepts and terminology in an effort to close loopholes for the companies that we know need to get their houses in order. But on the other hand, we also somehow need to keep a focus on the bigger picture: tackling online harms requires challenging the underlying logic of the digital economy, which trades on people’s personal data and analyses it without adequate consent in order to manipulate behaviour and generate more profit. At least one speaker made this point: it is not as much about the harmful material online as it is about how that material is surfaced and promoted by algorithmic processes. And this is an important point. As an investigation by The Markup found recently, algorithms on Facebook show some users extreme content not just once but hundreds of times. It is about the content, and it is about what makes the content valuable: user attention.

A joint committee held hearings on the Online Safety Bill that ended earlier in November; the committee is set to conclude its report by December 10th and publish it shortly after that. It will be interesting to see which aspects of this conversation — and contributions to the hearings — make it into the revised document.

One theme that has consistently emerged in all of the previous roundtables during the Summit was absent in this one: the social and societal dimensions of online safety. One speaker did mention that there is a continuum between the online and the offline when it comes to harms. But there is a risk that in focusing on defining what constitutes a harm worthy of regulation, we never get to the crucial conversation about the uneven distribution of harms in society — how and why certain harms disproportionately accumulate for certain people. We know, for instance, that there is a gendered dimension to pornographic content and exposure; women, girls, and LGBTQIA+ individuals have faced increased online harassment during the pandemic; and children with an impacting or limiting condition are more likely to experience bullying and other negative interactions online. But issues like accessibility did not feature in the discussion. Many of the harms exacerbated by digital content are socially embedded and conditioned. Therefore, platform regulation must be accompanied by comprehensive sex and relationship education that addresses not only interpersonal communication and interactions online but also media literacy. Our digitally mediated lives are a mirror to norms, behaviours, and inequalities in society more broadly; the capitalisation of data and the algorithmic manipulation of data for commercial ends can turn the mirror into an anamorphic funhouse. A truly systems-level approach to online safety needs to take on systems of oppression and marginalisation both in cyberspace and in society as a whole.

This can only be done with the participation of people in the processes of accountability outlined in the bill. People need to be empowered not only to report harms but to define what harms are (right now, the draft bill leaves the category open to interpretation by the Culture Secretary, Ofcom, and Parliament in consultation with one another). And in addition to algorithmic transparency and accountability to a regulator, there must be transparency to the citizen-user in the form of meaningful consent regimes that give people more actual control over their data and reporting regimes that make people feel like the harms they have experienced are real, legitimate, and actionable. Legislation wields the semantic power to define certain terms and relationships, like user and harm. Tech companies have built digital spaces that define us (users) as consumers first and foremost. The law has an obligation to reassert our citizenship, instead.

This roundtable was hosted by the APPG Digital Skills, in collaboration with the APPG Data Poverty and APPG PICTFOR and supported by the Digital Poverty Alliance.

Rethinking Digital Skills in the Era of Compulsory Computing: Methods, Measurement, Policy, and Theory

Around the world, digital platforms have become the first – or only – option for many everyday activities. The United Kingdom, for instance, is implementing a ‘digital-by-default’ e-government agenda, which has steadily digitized vital services such as taxes, pensions, and welfare. This pervasive digitization marks an important shift in the relationship between society and computing; people are compelled to use computers and the internet in order to accomplish basic tasks. We suggest that this era of compulsory computing demands new ways of measuring and theorizing about digital skills, which remain a crucial dimension of the digital divide. In this article, we re-examine the theory and measurement of digital skills, making three contributions to our understanding of how digital skills are encountered, acquired, and conceptualized. First, we introduce a new methodology for researching skills: participant-observation of novices in the process of learning new skills, along with interviews with the people who help them. Our ethnographically informed method leads us to a second contribution: a different theory of skills, which identifies three primary characteristics: (1) sequence, (2) simultaneity, and, most importantly, (3) path abstraction. Third, we argue that these characteristics suggest the need to change current ways skills are measured, and we also discuss the policy implications of this empirically informed theory.

The whole article is available open access:

My interview on the Critical Future Tech podcast

I really enjoyed this conversation with podcast host Lawrence Almeida on the Critical Future Tech podcast (although I don’t enjoy the sound of my own voice enough to listen all the way back through it!). Click on the button below to listen, or follow this link:


Lawrence – Welcome! Today we have the pleasure of talking with Dr. Kira Allmann. 

Kira is a post-doctoral research fellow in Media, Law and Policy at the Oxford Centre for Socio-Legal Studies. Her research focuses on digital inequality: how the digitalization of our everyday lives is leaving people behind, and what communities are doing to resist and reimagine our digital futures at a local, grassroots level. 

Kira, welcome to Critical Future Tech.

Kira – Thank you so much for having me. It’s a pleasure to be here.

L I’m really happy to have you for this tenth edition of Critical Future Tech, which is a project that aims to ignite critical thought towards technology’s impact in our lives. 

I am passionate about the positive impact of technology, but I’m equally obsessed with the potential negative side effects that it can bring, right? And you are someone who clearly has a lot of interest in understanding and reducing digital marginalization. I realized that when I read the Digital Exclusion Report that you did for the Oxfordshire county libraries, right? 

Before we get into all the topics that I want to go through with you, I want to just talk a little bit about what a digital divide may be, by going through the story that you have in that report. 

For the listeners, the report starts with a small story. 

“A man approaches a staff member of a public library. The staff member is kind of swamped in customer help requests here and there. The man asks for a phone charger. Not a power outlet, right? A phone charger. And the staff member says they don’t provide those for customers, at which point the man says that he’s actually homeless and he has no way of charging his phone. He’s asking for that help because he wants to charge his phone for a bit. So the staff member realizes that this isn’t your regular digital help request, and ultimately they’re able to find a charger for that man, which allows him to charge his phone.” 

So you volunteered as a digital helper for that library, right? And what I want to ask you is: was that the moment that made you become interested or that made you sensitive towards this sort of digital divide? Was that the first time or were you subject to that before that?

K That’s such an interesting question, thank you for that. It actually was not the precise moment that got me interested in the role that libraries were playing in bridging the digital divide. It was actually, remarkably, one of many such moments that I had experienced. 

I started volunteering at the library in part, because I did have a broad awareness of the digital divide in the UK. It was the focus of the research that I was just starting actually at that time in my postdoctoral research fellowship on digital inequality. And really, I just kind of wanted to give back. 

When I set out to volunteer in the library, I didn’t actually have any intention for it to turn into a research project or a collaboration with the county council library at all. It was really just something I wanted to do for the community. But it became really apparent from day one – and I unfortunately can’t remember the specific scenes I saw on day one – that this was actually a really important site for observing the lived experience of digital exclusion on the ground. 

In talking with fellow digital helper volunteers – other people who were doing the same kind of volunteering that I was doing – and also the library staff, I also learned that it was just really difficult for the library to keep track of, document, or collect data on the really vital work they were doing to help people like the man that I described in the opening scene, given how thinly spread they were on the ground. 

So I thought: I had access to the amazing resources of a great university institution, and if I could somehow put those resources toward helping the library get a bit better data on the work they were doing, and kind of spotlight what was happening on the ground, then that seemed like a really good use of those university resources. 

So that’s actually how the project came about, through constant conversation with the library staff members that I was working with every day. 

But to return to your original question: that was really just one of many scenes that I observed as a digital helper in the library, and certainly not necessarily the first or only one that made me think differently about where we should be studying the digital divide.

“The digital divide is actually a very complex concept that is very important because it has become a key contributor to inequality.”

L Awesome. That intro showed that digital divide can be manifested in many ways. So I’m going to ask you, can you tell us what is the digital divide?

K Well actually it is a little bit difficult to pinpoint a single definition of the digital divide. 

I think that when most people use the term in a kind of colloquial everyday conversation, what people have in their minds is the gap between people who have access to the internet and maybe internet connected devices like computers and smartphones, and those who don’t have that access. That’s kind of the simplistic “haves and have nots” kind of dichotomy. That’s the basic idea that a lot of people have in their minds. 

But the digital divide as you’ve rightly pointed out is a lot more complex and nuanced than that and to call it “the” digital divide is probably a little bit misleading, but we all do it, I do it as well. 

There are actually quite a lot of intersecting overlapping compounding divides that have a digital component to them. 

Let me start by just quite simply explaining how scholars think about the digital divide. 
Scholars, basically, have stated that there are three levels of the digital divide. 

The first level being the one I just articulated, which is a divide between those who have and don’t have access to the internet. 

The second level is more of a divide in skills and literacy. This is basically saying you may have access to the internet, but you may not actually be able to use those resources to their fullest capacity because you just don’t have the knowledge of how to use them. And obviously there are many layers of skills and literacy that might come into play on that level, the second level. 

The third level is really about outcomes: how do you take your access and your skills and literacy and turn them into meaningful, positive outcomes in your life? Meaning, maybe, attaining greater educational opportunities or greater economic gain. 

Those three levels are kind of broadly what scholars talk about when they talk about the divide, but even that is a little flattening at times, because drawing those clear dividing lines between the levels is often very difficult. They all intersect with one another and affect one another in various ways. And of course, within each of those levels, there are a lot of nuances and differentiations. 

Also the experience of being digitally excluded is often compounded by other forms of inequality. Things like linguistic inequality, racial inequality, gender inequality, socioeconomic inequality. All of these kinds of what we might call quite simplistically, offline inequalities, compound and affect people’s access to digital resources like the internet and digital devices, but also how they use them and what kinds of experiences they may have online, let’s say when they do get online. 

So basically, the digital divide is actually a very complex concept that is very important because it has become a key contributor to inequality. If you’re interested in inequality, digital is a space that we all need to be looking at all the time. And to relegate it to just the issue of internet access, for instance, is really kind of an oversimplification.

L Yeah, but that’s the most visible one you can point to. Especially since the pandemic, where everyone is remote, there were a lot of cases in the U.S. and in Europe – places where you would think everyone has access to stable, reliable internet – where that’s not really the case. 

And that is also one of the things that I read when researching some of your work on rural areas: how they can be impacted and even how they can overcome that, with the example of the community-led internet network that has fiber optics, which is really an incredible story. 

One thing that you mentioned when I first heard your talk was: I can have a reliable internet connection, but because I don’t have a high income I don’t have a Mac or I don’t even have a computer. The only thing that I have is my mom’s smartphone. 

That was very interesting, because you’d believe that any youngster is digitally literate – that they can all work with Excel and do spreadsheets and so on. And that’s not really the case, because of that example that you gave. 

That was, for me, very interesting, because that is also a form of divide, right? Again, you lack the hardware in this case to learn, and when you arrive at the marketplace, you’re actually at a disadvantage compared to other people who have had the experience of using, say, you know, spreadsheet software or something like that.

K Absolutely. And actually that was something that I observed and that was told to me in various interviews during the library project as well. 

This issue of making assumptions, for instance, about what kind of people will have access to what kinds of devices and you spotlighted two key assumptions that often permeate expectations about the digital divide. 

One is that, basically, wealthier countries like European countries and the United States don’t have a digital divide problem because the internet is ubiquitous. This is an assumption that is definitely false as the pandemic has actually quite starkly revealed. And another assumption is that young people are “digital natives” which is a term that I think has been thoroughly critiqued and debunked by other fantastic scholars and policymakers. But it’s this idea that basically young people kind of grow up around technology, so they won’t have any deficiencies in terms of digital literacy or access. They’ll be absolutely fluent in things like Excel like you mentioned. They’ll be fluent in smartphones, laptops, iPads, everything. 

The reality is that that just isn’t true. In a place like public libraries, you see a lot of kids coming in, for instance, who only have access to a smartphone. And when it comes to, say, printing off a document that they need for some reason – maybe it’s a payslip or something like that – they really don’t know how to use even a keyboard and a mouse. And this was something I heard from a lot of staff members: that many of the students they were dealing with were pretty flummoxed by the setup of a desktop computer. 

Even things like entering passwords, for instance, into a desktop version of a platform like Gmail. Because a lot of us actually rely on saved passwords and fingerprint ID and things like this on smartphones, we don’t retain a memory of what our passwords are and when we suddenly have to enter it on a different platform, we get locked out. 

This is something you see a lot, especially among young people who really only have single device literacy. That’s something that I tried to highlight a little bit in the library report, and I’ve certainly brought it up in other forms as well around education and digital inequality, because it tends to be kind of an invisible form of digital inequality, largely because of those assumptions that people make about certain demographic groups.

L The single device literacy is an interesting term that also takes me to an idea which is: the ecosystem of platforms and systems that you may interact with — even just on a smartphone, if that’s the only thing that you got — is becoming more and more reduced. 
For instance, in some countries, basically, Facebook is the internet, you know? That’s where you search, that’s where you read about things that others share. And the same ecosystem also exists when you have packages where for X euros or pounds you will get free access to Facebook, Instagram, Spotify, and a couple of other things which have unlimited data so you are going to navigate that universe almost exclusively, but not necessarily Wikipedia articles which will use your data plan and then you will pay for that.

K Yeah, you’re absolutely right and the term that I usually apply to this phenomenon you’re describing, this kind of echo chamber phenomenon, is proprietary literacy. 

I basically mean that a lot of users who have limited access, and whose access is through, say, a platform like Facebook, become very fluent in that platform and that company’s toolkit, basically, but nothing really beyond that company’s toolkit. 

So another great example of this (well, not great in the sense of positive, it’s just a good example to further illustrate the point) is the prevalence of, for instance, Google Classroom in schools that are under-connected to the internet. Google has stepped in in a lot of cases where schools can’t afford devices and internet access, or have limited connectivity for various reasons. 

Google has stepped in to help provide tools for students to be able to get online and develop skills, but usually these students then only have access to the Google suite of software and even Google hardware, like Google Chromebooks. And what happens is those students wind up growing up really familiar with Google and not that comfortable, not that fluent, in other platforms, other proprietary software, and other kinds of hardware. 

I’ve spoken to teachers in rural schools that are members of the Google Classroom program who say that their students basically only want to use Chromebooks and that when they have the opportunity to get a device for the first time, what they want is that Google device and it’s not surprising because the devices that they have access to in the school are exclusively Google products. 

And so that is also, I would argue, a very limited form of digital literacy. It’s quite narrow this platform or proprietary literacy.

“If we want that imaginative space to be open, it’s best to cultivate literacy in a wide range of platforms and devices and also to think about digital use less as an issue of consumption than it is an issue of participation.”

L That is very interesting. And I don’t want to get into monopoly or antitrust thoughts right now, but my question is: if you have a device, say the Google Chromebook, and you use all of Google’s apps and Chrome and so on, and all of that allows you to interact with society, right? So you’re able to pay your taxes, to consult anything that you may want, and to work and communicate, and you’re able to do all of that in that ecosystem from Google – what’s the problem with that? 

What is the problem of being locked into that ecosystem? Or do you see any problem with that, that person can live a digitally included life?

K Arguably this phenomenon is not new. Throughout the history of technology there tend to be kind of dominant technologies that lots of people buy into; they become more fluent and literate in the one that they know. I remember, for instance, I went to a school that bought a lot of Apple products when I was a kid, and so I was a lot more comfortable with Apple products because that was what I had. 

It’s not necessarily a new phenomenon but I think there is a reason to be sort of just critical about it to kind of stick with that theme. That’s because we do live in a much more diverse digital space than a monopolistic one. In fact, there are lots of different products out there, there are lots of different companies competing and arguably we want to live in an innovative dynamic future in which new ideas are generated and there will be new companies and new products and maybe even alternative ownership models for platforms and things like that. 

If we want that imaginative space to be open it’s best, I think, to cultivate literacy in a wide range of platforms and devices and also to think about digital use less as an issue of consumption than it is an issue of participation. 

The thing about having sort of proprietary literacy as the predominant form of literacy, especially for digitally excluded communities – the communities that have limited access – what tends to happen is that these users are really being cultivated as future consumers of products. They’re being motivated, they’re nudged to buy products that are produced by a particular company. 

You may have various views on the usefulness or the value of that socially, but arguably it could potentially reduce competition in the long run, and it also views children – the student users of these platforms – as consumers first and citizens second. 

I would suggest that that isn’t really encouraging the kind of diversity and dynamic thinking that we need in terms of building a more inclusive digital future in the long run.

L Thank you. That’s a great answer and touches on something that I want to talk about a little bit later, which is Critical Tech Literacy. We’re hinting a lot about people being critical of things, even though they are great to be used like Apple and Google products. And by the way, Apple is also another company that’s very keen on having a foothold on education. 

So talking about digital divide: we understand that it’s a complex issue and it is manifested in different ways. 

I am a technologist, I’m a software engineer. I build products online for users around the world and I already know about some things that can contribute to digital exclusion such as: it’s English only or it requires fast connections for you to connect so if you can’t go for that, then my product doesn’t work for you and I’m excluding you. 

Those sorts of things are kind of known for the more attentive technologists and so my question is: what are some things that can hint at digital exclusion? Putting aside those obvious hurdles that I just mentioned, what are things that I could be on the lookout for or that maybe I’m not aware of as I’m building new digital products that I can look for and anticipate and incorporate into my solutions?

K Of course it’s very difficult to anticipate what a better kind of more inclusive build will be without talking to users. 

I’m an anthropologist so I always believe that the best way to get a sense of what’s actually happening on the ground in people’s real lives is to observe them in their everyday lives, doing ordinary things. It tends to be very revealing. And this is slightly different than arguing for something like user driven design which I also think is a very important aspect of design development. 

But what you’re asking is: how do you undercut your own assumptions? And that’s very difficult because it’s very hard for all of us to be so self-aware that we can be conscious of our own assumptions that we build into our technologies. 

Usually the best way to do that is to step out of our own perspective and occupy somebody else’s perspective for a while. 

I can give an example of this from a conversation I had with a library staff member, actually in Oxfordshire libraries, who runs tablet and smartphone sessions mostly for pensioners — for elderly folks — in the community. He was saying there are all these symbols that especially tablets and smartphones use to navigate around menus that a lot of older folks just don’t really understand. I mean they can functionally touch things and they know that an application will open if you touch this thing and things like that but there are things that are just not intuitive to a certain generation. 

For instance how on earth would you know that a little circle with a line coming out of it is a magnifying glass, and that means “search”? I tend to refer to this as the visual vernacular of platforms or apps. 

There are a lot of sorts of things that we have intuitively come to understand as users of digital technology that aren’t necessarily universal. The sort of three lines that indicate a menu – you can expand into a menu – a lot of people find that confusing. A lot of older folks don’t see a camera app icon as being a camera. It doesn’t look like a camera to them, it’s like a circle inside a square and they say things like “how is this a camera”?

“The issue is that digital inclusion isn’t a switch that just gets turned on at some point and then it’s always on. It’s actually more of a process where people can fall in and out of being included over the course of their lifetimes.”

L To be honest I threw that question out there not expecting a bullet list of things. 

The first thing is, of course, be aware that your users may have special needs that your product doesn’t account for. Of course, understand your users; understand for whom you’re building the product or the service. Talking with them is essential. 

Right now you were talking about the icons and it’s funny because sometimes I’ll be prototyping some interface and I’m like: “all right I need a search icon here”. So I go on this website that gives me a lot of free and paid icons and I just type “search” and I have a lot of magnifying glass icons, you know? 

So there is this notion that like “that is a search icon”, you know? At least for web developers and designers and so on. If I say to my designer colleague “put a search icon here”, he’s not going to put anything else besides that. And it’s interesting that some groups may not realize that. 

Do you think that that will come to an end at some point? We’re going to have a generation that has interacted so much with those interfaces that at some point do you think this gap is going to narrow itself because everyone is a bit more digital native to some extent, or is new technology going to come up like VR or AR glasses and then our generation, we’re going to be like “whoa, I cannot reason with this” [laughs]. Do you think that’s going to be the case?

K It’s probably unlikely to be totally eradicated. This problem is very unlikely to totally go away and that’s for a few reasons. 

You highlighted one of them, which is that technology changes all the time, very rapidly. And for a lot of us – especially those of us who have been kind of consistently connected since, let’s say, the beginning of the digital age – it’s even hard for us to remember when those transitions occurred: when certain icons morphed into other icons, when something became the standard symbol for search, or when something became the standard symbol for save. And that’s because that change happens gradually and happens frequently. 

As long as you’re constantly connected you might experience the change and take it on board, but not necessarily note it. I think that the issue is that digital inclusion isn’t a switch that just gets turned on at some point and then it’s always on. It’s actually kind of more of a process and people can fall in and out of being included over the course of their lifetimes as well. 

This is something that is very important for understanding why the digital divide is unlikely to just kind of naturally close as a function of demographic shifts – the assumption that as young people get older, they’ll just remain digitally connected and included, and we’re just not going to have a digital divide anymore. 

That’s unlikely to be the case for the reasons that we were discussing earlier: the digital divide is actually a function of a lot of compounding inequalities. For instance, people may be highly digitally connected when they’re employed, but then when they become pensioners they’re on lower incomes. They may actually be living only off of their state pension, for instance, and due to that, they may decide: “I actually don’t need internet connectivity for the next few months or the next year, because it’s a bit expensive, and I’ll just roll that back”. 

And then if you’re offline for a year or two years the digital world does move on in that time and when you come back online a lot of things can be really confusing. 

This is something we can see already. For instance people who leave school at 16 (you can leave school at 16 in the U.K.) and then maybe are in and out of employment for a few years and then get a job that requires digital skills, let’s say in their twenties, will often be very behind in terms of digital literacy, because they just had that gap of a few years when they weren’t regularly connected or maybe they only had a smartphone and they kind of really didn’t do that much on a laptop and all kinds of applications have changed. 

For instance, for regular Microsoft Word users: sometimes you get an update to Word and you’re like, “where did everything go? I don’t know where anything is anymore”. Just think of that on a much larger scale: if you’re a little disconnected for a few years due to unemployment or lack of income or something like that – life-stage changes, basically – that will continue to affect people as long as inequality continues to affect society. 

That’s why the digital divide is unlikely to be really just purely a demographic or a time problem, mainly because people fall in and out of various levels of inclusion over the course of their lifetimes. That’s something that digital designers could certainly be aware of. 

To return to your earlier question about what else designers can be aware of: we talked about the visual digital world, but one other thing I wanted to mention was the importance of simplicity, and how many assumptions go into deciding what is simple for a user. 

I know that a big thing in app design and development is intuitive design: this idea that things should be as easy as possible for users. But a lot of times what digitally fluent people like you or I would assume is easy is actually very difficult for users who are digitally excluded or digital novices — they’re coming to devices for the first time. 

Even something like having to create a user account can be a barrier to using a particular platform or application. Requiring somebody to create an email address before they can use your platform or account adds an additional layer of complication for a user who may desperately need access to the platform that you’ve built, if it’s for something like, say, banking or welfare. 

It’s very important to think about what simplicity is to a user and not to you as a designer.

“Critical tech thinking is about applying a critical lens to technology. This is increasingly important because of the fact that the digital world that we encounter today is not a fair one.”

L I could go on about the discussions that I sometimes have with designers or fellow front-end developers, like: “No, just put a tooltip that shows up when you hover on it,” and I’m like, “yeah, I like that you’re saving space, but if they don’t know they can hover over that thing and that it has some info there, and they are not used to your interface, your product, then that doesn’t exist and you’re not helping them.” There are so many stories like that, and I’m going to use this to move to Critical Tech Literacy. 

Thinking critically about technology as a whole, regardless of whether you’re a technologist like a programmer or a researcher. We all use technology nowadays; it’s virtually everywhere, it’s eating everything, so it’s important that we think about it critically. I’m going to read a quote from one of your slides that I screenshotted, and then we can dive into it a little bit. 

“Critical Tech Literacy means cultivating skills to think critically about how we engage with the life critical technologies that have become essential to everyday life. It includes sometimes taking a critical stance towards technologies that perpetuate or create inequality and unfairness in society.” 

So, first I was like, “Wow, Critical Tech! That’s the same name! [laughs]” Then I went and researched it to understand what was out there on this theme, and I mainly found literature on how critical it is for people to be literate in technology. In the sense of: you need it to work, to be competitive, to be productive. 

But that’s not really what you’re saying in this sentence, right? The floor is yours to expand on what you mean by Critical Tech Literacy in this case.

K Critical Tech Literacy is actually a term that I alighted on and started using only very recently, in that webinar you attended. And yeah, I am using it differently from the literature that you described. 

What I’m talking about is really kind of blending critical thinking with digital literacy. 

Digital literacy really deals with competencies: can you use technology, and can you use it effectively to achieve your goals – those outcomes that are part of the third level of the divide. That’s digital literacy. It’s a nuanced concept, but it’s been very widely adopted in policy circles. 

Critical thinking is about applying a critical lens to technology. I would argue this is increasingly important because the digital world we encounter today is not a fair one. Especially in recent years, there’s been a lot of excellent scholarship and reporting on the ways bias is built into technology, which shouldn’t be surprising, because technology is a social product. 

Bias is built into so many things we use in our everyday lives; there’s no reason to assume digital technology is any different. 

But still today, digital literacy is approached – especially in school curricula – as a set of competencies: “How do you deal with digital technology? Are you able to perform certain tasks with it?” And in its most critical form: “Can you keep yourself safe in the digital world?” These are basically the focuses of digital literacy, especially at the school level. 

I think that we really need to move more in the direction of teaching kids to think critically about the technologies they use, how the technologies are built, what biases have been built into them and how to live balanced lives with technology. 

Technology is pervasive and also largely built and marketed by private companies that have an interest in cultivating consumers who will continue to engage with those products in order to create value for the company. What that means in the long run is that sometimes that constant engagement isn’t necessarily in the best interest of the user. 

How do we start thinking critically about the pervasiveness of technology in our everyday lives? 

That’s really what I mean by Critical Tech Literacy. It’s about thinking critically about technology: how do we ensure that the next generation of tech users and designers are thinking about the assumptions built into technology, about their own positionality in relation to it, and about how technology is a social product? 

These are all concepts that are very widespread in academia, and we use all kinds of complicated language to talk about them, but they can be translated into a digital literacy program for all ages. They’re not really that complicated in practice, and so my argument for Critical Tech Literacy is that we should take some of these very important conversations happening in the academy and make them much more widespread.

“If we want the technology marketplace to be dynamic and increasingly fair then we need to prepare students of technology today to be thinking like that.”

L And I’m a hundred percent behind that as you may imagine by having invited you to talk about it. 

I feel that technologists are more and more aware. It may not be as mainstream as we would like it to be, but things are coming out in the mainstream: books like “Weapons of Math Destruction” and documentaries such as “The Social Dilemma”, which explain in very simple terms how technology can be biased and can be used against you. So we should be aware and critical about what we’re building. 

One thing that is funny – maybe it’s just my perception – is that when you use the word “critical”, people instantly go: “Wow, you’re going to do destructive criticism. What, you don’t like technology?” And that’s not it. Actually, I love technology. I work in this field, and what I just don’t want is to contribute to things that then have negative side effects for groups without me even being aware that it’s happening, right? 

As technology becomes more and more pervasive, it’s important that we ask what’s going on and not just take it in passively. 

My worry is that governments or schools or even employers are going to say: “What’s the concrete outcome of that?” How to use a tool, how to navigate the web – that’s understandable: you’re productive, you can get a better job. 

But what is the advantage of being critical about technology? How would you get buy-in from a company or a government and explain that we actually need Critical Tech Literacy on a more abstract, more existential level, and not just a practical one? How could you convince companies’ management teams or a government to say, “We need more of this”?

K I think there is really a groundswell right now of increasing awareness, as you said, of the issues around how digital technologies can deepen certain social inequalities, and there’s been a bit of a backlash against that. 

The debates we’ve seen in Europe and the U.S. around data management and privacy are just the tip of the iceberg, and I doubt these issues are going to go away anytime soon. The debates around things like Clearview AI, the scraping of personal content without consent, what terms and conditions actually mean for users – these are debates that are not going to go away. Companies won’t be able to dodge them, governments won’t be able to dodge them, and the more awareness people generally have, the more these issues will stay on the agenda. 

Whoever builds future technologies – whether companies, governments, NGOs, or individuals – is going to have to design their platforms in fairer ways. That’s the direction of travel right now. 

So it is actually very much in the interest of companies, governments, and schools to think about who the next designers of technology are likely to be. Kids in schools today are undoubtedly growing up with ambitious plans for what technology should look like in the future, because a lot of them are heavy technology users – that’s the reality. 

If we want the technology marketplace to be dynamic and increasingly fair – I would argue that that’s a good social goal in and of itself – then we need to prepare students of technology today to be thinking like that. We need to prepare them to be questioning their own assumptions, to be thinking about living in balance with technology so that they can build better products that enable users to have more control over their data. 

And actually, I would also argue that while it is a kind of abstract esoteric concept, this idea of critical thinking about technology, there are some really concrete aspects to this. 

So for instance, in that webinar you attended (hosted by the University of the Arts London), in the workshop component we asked participants about their level of confidence with different digital skills. And a lot of people – because this was a very digitally literate crowd – ranked really highly on things like “I can produce a Word document”, “I can search the internet”, and even “I can discern quality information from questionable information online”. 

But when it came to things like “I feel I have control over my digital footprint (the data trail that I leave)” – these trickier areas where people feel insecure – the confidence level went way down. 

And this was just a small group of workshop participants, but they were very digitally fluent people. When it came to things like “I feel like I have control over my data” or “I feel like I can switch off when I want to”, people ranked their confidence pretty low. 

Going forward, those are things people are going to want more control over. That’s what Critical Tech Literacy is all about, and it’s going to affect the entire economy around technology. So it’s got to be of interest to companies, governments, and schools, unquestionably.

“Critical in and of itself does not mean you’re always criticizing technology. It really just means developing an awareness and a kind of constant practice of reflection about the role of technology in our personal lives and in society and how technology is shaped by social forces.”

L And I would just add something: on a purely competitive level, technology is first functional, right? I can write a document, I can communicate with someone, I can find something I’m looking for. That’s the functionality part of it. And we all love Google because it’s so great at delivering that functionality. 

And as those needs are fulfilled by the services and products we use and we become acquainted with them, we start looking for a sort of higher-order need: “I still want to retain some control over more abstract, higher-level things such as my privacy and how my data is shared.” 

So it’s like a sort of Maslow pyramid where you have your functional needs fulfilled and now you’re moving towards those more abstract needs that need to be fulfilled.

K Yeah, I think that’s a great addendum for sure and to echo something else you said as well, I am not anti-technology either. 

I love technology, I use Google, and I have Apple products, and I’m not against these companies just because they’re companies. You made the point earlier that it’s quite common for people to hear the word “critical” and think you mean criticism. And to be fair, sometimes I do – sometimes I do mean criticism. 

But critical in and of itself does not mean you’re always criticizing technology. It really just means developing an awareness and a kind of constant practice of reflection about the role of technology in our personal lives and in society and how technology is shaped by social forces. 

That is not value neutral – it has value. But it also isn’t inherently negative or anti-tech. So I do think it’s important to constantly stress that while a critical stance may lead to criticism – when things go badly, or when biases lead to exclusions that harm people, then technology deserves criticism – that isn’t necessarily what “critical” means.

L What we’re going for is building the futures that we were promised in science fiction. The good science fiction, the utopian one, not the dystopian one, right?

K Yeah, exactly! It really is about building better futures for society! 

My ethical orientation sees those futures as being more equal and fair and inclusive and just and so those are the values that I would argue need to be built into our social products like technology. It’s an optimistic view actually. It’s not a negative destructive view.

L And on that note, thank you so much for being here with us. It was a super interesting conversation. Tell everyone where they can keep in touch with you – where they can follow you, your work, and your research.

K Great! Thank you so much again Lawrence for having me on the program, it’s been an absolute delight. I’ve really enjoyed the conversation myself. 

If people would like to follow up, stay in touch, and follow this work, you can go to my website, which is where we’re doing a lot of the collaborative work and collaborative development around Critical Tech Literacy resources. There, over the coming months, we will be putting up some free, open resources on how you could run workshops and sessions on Critical Tech Literacy. 

And I’m also on social media. You can find me on Twitter and Instagram, all at @kiraallmann, just my name so it’s very easy.

L Great, everyone go follow Kira. She publishes a lot of amazing research and great articles. 

Thank you so much and we’ll keep in touch!

K Great! I look forward to it.