Digital Social Policy: Past, Present, Future

We undoubtedly live in a digitally infused world. From government administrative processes to financial transactions and social media posts, digital technologies automatically collect, collate, combine and circulate digital traces of our actions and thoughts, which are in turn used to construct digital personas of us. More significantly, government decisions are increasingly automated with real-world effect; companies subordinate human workers to automated processes; and social media algorithms prioritise outrage and 'fake news', with destabilising and devastating effects for public trust in social institutions. Accordingly, what it means to be a person, a citizen and a consumer, and what constitutes society and the economy, is profoundly different in the 21st century from the 20th.

Digital technology does not relate only to the present. The implications of digital technologies over past decades have been profound, yet largely absent from social policy research. One explanation for this absence may be that technology is viewed as part of government administrative practice, or public administration, which is often considered empirically and conceptually separate from the substance of (social) policy. Yet public administration scholars have also lamented the absence of digital technology in their discipline (Meijer, ; Pollitt, ). This generalised absence points to another possible explanation: that the social sciences more broadly have a blind spot when considering the role of nonhumans in the social world. Indeed, Law () argued that technology exists as 'monsters' in the social sciences: seen as unusual and exotic, but not significant in understanding society and social dynamics. Such ontological explanations may be coupled with an expertise limitation: social scientists, including social policy academics, do not typically have much training or expertise in technical matters, and may feel ill-equipped to examine them.
Regardless of the reasons, digital technologies are now constantly in our hands, touching our fingers, and operating 24/7 around us, and the operation of social policy is not exempt. It is therefore essential to bring the 'digital' into 'social policy'. The key purpose of this paper is to provide a primer, map or navigational tool for apprehending the ways in which digital technologies have come into, shaped and transformed, and are redefining social policy and its administration. Such an intellectual capacity is necessary to consider more fully the ways in which digital technology, through social policy, (re-)structures society, treats citizens and service users, and (re-)distributes resources and dis/advantages. To this end, section two provides an empirical overview of the evolution of digital technologies in social policy processes. It highlights how these past, present and emerging technological developments have been, and continue to be, key to major social policy and administration transformations. Section three then explores the policy, legal, ethical and power dimensions of these changes, and outlines responses to them. Section four suggests conceptual and methodological skills that can enable social policy researchers to engage better with and through digital technologies. The conclusion explores the key implications of the paper for future social policy research.

Digital technologies in the evolution of social policy and administration
Shortly after World War II, governments around the world started adopting new electronic computers to support the administration of social policy. In its  report, the British Ministry for Pensions and National Insurance included a photograph of its newly installed "Automatic Data Processing equipment" to "reduce staff costs" by handling staff payroll and production of statistics (). Across the hemisphere, in  the Australian Department of Social Security began electronic payments to pensioners (Department of Social Security, ). In the UK, local government authorities began computerising their social service records in the early s (Lingham and Law, ), a similar timing to that in the USA where Boyd et al. () found computerisation in local governments was primarily introduced for administrative purposes, rather than education or direct service provision.
Over the next  years computers grew to become the backbone of social policy administration and practice. With back-office computers connected to online websites and smart phone apps, people can apply for benefits and services 24/7 and report to government to ensure ongoing compliance. Concurrently, computers constantly assess incoming data and match these with other government and non-government (e.g. banks, social media) datasets to ensure compliance, which in turn automates identity and eligibility assessments, suspension and cancellation of benefits or services, and issuance of penalties and debts (Eubanks, , ch. ; Braithwaite, ). Clients can increasingly engage with governments through chatbots and online services (Henman, ), sometimes with automated voice, fingerprint or facial recognition systems. Data analytics and machine learning (a.k.a. artificial intelligence (AI)) increasingly enable governments to profile citizens, giving differentiated, personalised or 'special' treatment to beneficiaries, children and adults 'at risk', or (potential) offenders (Gillingham, ; Kehl and Kessler, ; Desiere and Struyven, ).
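The data-matching logic described above can be sketched in a few lines. This is a minimal illustration only: the `match_income` function, the tolerance threshold, and the figures are all hypothetical assumptions, not the rules of any actual agency's system.

```python
# Hedged sketch of a rule-based data-matching check of the kind used to
# automate compliance in benefit systems. All names, thresholds and
# figures are illustrative assumptions, not an actual agency's rules.

TOLERANCE = 100  # hypothetical allowable discrepancy in currency units

def match_income(declared: int, employer_reported: int) -> str:
    """Compare a client's declared income against matched third-party data."""
    discrepancy = employer_reported - declared
    if discrepancy > TOLERANCE:
        return "flag_for_review"  # possible overpayment -> debt process
    return "no_action"

# Example: client declared 12,000 but the matched dataset shows 15,500,
# so the record is automatically flagged.
print(match_income(12_000, 15_500))
```

The point of the sketch is that once such a rule is codified, flagging, suspension, and debt-raising can proceed with no human judgement in the loop, which is precisely what later controversies such as robodebt turned on.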
Web-linked digital technologies are used to remotely monitor the sick, and robots provide companions for those alone (Majumder, Mondal, and Deen, ; Vandemeulebroucke, de Casterlé, and Gastmans, ). Many justifications for digitisation have not changed: efficiency, cost-cutting, staff savings, consistency of decisions, and reducing errors. Over time, new justifications have included policy responsiveness and agility, customer service and service innovation, personalisation, overpayment and fraud detection, enhanced governance, and improved accountability and democracy (e.g. OECD, ). While productivity has increased overall, this does not necessarily equate to cost-savings (Henman, ), and legacy and complex computer systems have at times also reduced policy responsiveness. At the same time there have been a string of major ICT disasters (Goldfinch, ), with the UK's Universal Credit system a prominent recent example (Evenstad, ).
Over these decades some broad trajectories can be discerned in digitally enabled social policy administration and delivery, the substance of social policy, and the governance of social policy.
First, in the administration and delivery of social policy, computerisation was initially used to automate routine, well-defined activities, such as keeping databases of individuals' national insurance instalments, payment of benefits, production of statistics, and calculation of benefits. Increasingly, automation extended to new areas of activity and enabled new forms of social policy administration. What were previously regarded as non-routine decisions requiring professional or administrative judgement have been supplemented and supplanted by Decision Support Systems, Expert Systems and, more recently, AI. Bovens and Zouridis () have characterised an ongoing shift from street-level, to screen-level, to system-level bureaucracy as computers become more central to front-line operations and then to systemically automated operations. These developments have led to charges of deskilling (Karger, ) and reductions of administrative and professional discretion (Høybye-Mortensen, ; Zouridis, Van Eck, and Bovens, ). Technology cuts different ways: while codification of rules and laws limits discretion, it has also importantly helped to clarify citizens' eligibility and social rights. Alternative models of computerisation, designed to support government officials rather than to automate them, have been observed, particularly in the Scandinavian welfare states, though the distance between administrators and citizens seems to have increased regardless (Snellen and Wyatt, ; Adler and Henman, ). Digital technologies have also seen a spatial and geographical decoupling of social administration. With networked computers, telephone centres became possible, and with the internet and online computers, websites and smart phone apps were introduced, thereby shifting from 9-to-5 bricks-and-mortar administration to 24/7 operations. Digital data networks have also facilitated the outsourcing of social services to non-government and commercial agencies.
Overall, Dunleavy et al. () have articulated a 'digital era governance' replacing new public management rationalities in order to better enact personalised, whole-of-government approaches.
Second, digital technologies have provided the mechanisms for changes in the substance of social policy. In addition to the shift to more codified social policy, policy and its delivery have become much more differentiated, individualised and personalised: for example, creating different payment rates for different sub-populations, geographical areas, or risk/need profiles (Henman, ). Instead of universal 'one-size-fits-all' approaches, policies have been able to be more nuanced, to respond better to human diversity. What was enabled by administrative or professional discretion becomes codified into complex algorithms. Networked computer systems have also increasingly supported a growing conditionality of social policy, by making eligibility for certain services and benefits conditional on circumstances or behaviours evidenced in digital databases. Consequently, conditionality of social policy has increasingly joined up separate policy areas and cross-cut different policy objectives, such as removing child benefit from parents whose children are truanting or not immunised (Henman, ). Both differentiation and conditionality in social policy increase its complexity, with implications for citizen access and accountability (Henman, ). Computer modelling and simulation tools have also supported the development of such complex policies and enhanced policy makers' capacity to create more nuanced, far-seeing, and far-reaching policies (Harding, ). Over time, social service agencies have amassed enormous digital administrative datasets, or 'big data' (Mayer-Schönberger and Cukier, ). Using data analytic techniques, these datasets are being used to shape social policies, such as New Zealand's social investment approach, which uses actuarial approaches to target decisions and inform directions in social security, care and protection of children, and delivery of social services (O'Brien, ).
Finally, social policy itself has considered how digital technologies (re-)produce and reinforce social disadvantage. Under the nomenclature of the 'digital divide', people without access to, or the ability to use, digital technologies (e.g. computers, the internet, smart phones), who are typically already disadvantaged, are further excluded from full participation in society (Notley and Foth, ; Kim, Lee, and Menon, ).
Third, the governance of social policy has transformed as a result of digital technologies. As Bellamy and Taylor () observe, computerisation is as much about automation as it is about informatisation; namely, the production of data, information, and in turn knowledge. Such knowledge is increasingly central to the governance of social policy for: operational management; understanding citizens' needs and trends; and reflecting on and revising social policy. Digital data and algorithmic decision making also transfigure accountability processes. In administrative review and appeals, digital data can ostensibly provide objective traces of administrative transactions, computer algorithms can provide explanations for decisions, and computer code can be externally audited (Adler and Henman, ). The use of blockchains to enhance these processes is being explored (Berryhill, Bourgery, and Hanson, ). Countering these, cultural attitudes that 'the computer is correct', a lack of administrative openness, and the complexity of algorithms (made worse with recent moves into machine learning) can undermine administrative justice processes and outcomes (Henman, ).

Implications of digital social policy
Recent high-profile controversies in social policy have highlighted considerable policy, legal, ethical, political, and power issues of digital social policy. Such controversies include: England's Ofqual grading algorithm of  (Kelly, ); Australia's Online Compliance Intervention (OCI) system, colloquially called 'robodebt' (Carney, ; Mann, ); the illegal Dutch SyRI social benefit fraud system (Bekker, ); the use of COMPAS in USA criminal justice systems for parole and sentencing decisions (Kehl and Kessler, ; Hannah-Moffat, ; Hartmann and Wenzelburger, ); Allegheny County's Family Screening Tool (Vaithianathan et al., ; Eubanks, , ch. ); China's social credit system (Dai, ); and the USA Medicaid's Electronic Visit Verification (EVV) system for carers of people with disability (Mateescu, ). These examples illustrate some key issues arising from digital social policy.
Digital technology often enhances state surveillance and control. Digital social policy may be designed and deployed to reinforce political agendas and rationalities (Graham and Wood, ; Benjamin, ). This is particularly pertinent in much social policy, where the focus is on disadvantaged or marginalised peoples within a historical system of negative social valorisation and state control. Indeed, the above examples of Australia's robodebt, the Dutch SyRI system, and the USA's EVV system are premised on suspicion of welfare recipients as fraudsters, resulting in unethical and often illegal curtailing and cessation of social benefits and rights.
Computer algorithms and big data are culturally constructed as accurate, objective and true (Holbrook, ), which can undermine critical appraisal of digital social policy and administrative decisions, and therefore reduce government accountability. The growing amounts of data collected, and their submission by citizens, have also placed greater administrative burdens (Herd and Moynihan, ) on social policy recipients, which can reproduce social divisions in welfare (Henman and Marston, ). Taken together, these two dynamics can implicitly and explicitly lead to what Lipsky () called 'bureaucratic disentitlement', or what might today be renamed 'algorithmic disentitlement': when, in the words of the BBC TV series Little Britain, 'the computer says "No"'. Again, all the cases above demonstrate the difficulty in understanding how and why algorithmic decisions are made, and the significant challenges in contesting and overturning them, often requiring major, concerted legal and political interventions.
The rise of big data, data analytics and machine learning has accelerated several concerns. Predictive or risk assessments for profiling generate significant policy, legal and ethical concerns about treating people differently on the basis of possibilities or calculated futures, not actualities (Henman, ). In the USA, such approaches have been argued to breach the fourth amendment prohibiting unreasonable searches and seizures (Slobogin, ). Concerns also arise about bias in data and algorithms, treating people differently based on protected characteristics such as race/ethnicity, gender, sexuality, and religion. Racially biased algorithms are particularly evident in the COMPAS profiling system (Washington, ), a bias which arises from the use of historical crime data generated within racially biased policing to build such systems.
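One common way such bias is audited is by comparing error rates across groups. The sketch below illustrates the idea with a false-positive-rate check; the `false_positive_rate` function and the six records are invented for illustration and do not reproduce any real system's data.

```python
# Hedged sketch: auditing a risk-profiling tool by comparing
# false-positive rates across groups. All records are invented.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly flagged high-risk."""
    negatives = [r for r in records
                 if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives)

records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]

# Unequal error rates across groups signal a disparate-impact problem,
# even if overall accuracy looks acceptable.
print(false_positive_rate(records, "A"))  # 2 of 3 wrongly flagged
print(false_positive_rate(records, "B"))  # 1 of 3 wrongly flagged
```

Disparities of this kind, rather than headline accuracy, were at the core of the public critiques of COMPAS noted above.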
The drift to more automated social policy heightens concerns about its 'black box' or opaque nature (Pasquale, ), thereby reducing accountability and fairness. In response, there is significant technical work on building algorithms that purportedly achieve fairness and transparency, including by making algorithmic decisions explainable (https://www.fatml.org/). These developments have also prompted calls for continued human oversight. Professional discretion was a way to ensure people were considered according to their individualities, recognising that 'one-size-does-not-fit-all'. With complex differentiated algorithms (machine learning being the ultimate way to pursue this), we are now learning that 'one-algorithm-does-not-fit-all'. We potentially need new ways to augment digitally-enacted social policy with human-enacted social policy.
Many of these challenges of digital social policy have been flagged over the last four decades, but they have largely remained at the fringe of social policy, public administration, and legal considerations. Fortuitously, the development of machine learning (under the marketing banner of 'AI') has stimulated much interest in the ethical, legal, and human rights dimensions of the use of AI in government. Multiple reports by governments, think-tanks, research institutes and corporations have charted the broad issues (see Alston,  for a focus on welfare states). Fjeld et al. () have helpfully summarised these reports and identified eight major areas for consideration: privacy; accountability; safety and security; transparency and explainability; fairness and non-discrimination; human control of technology; professional responsibility; and promotion of human values. The current agenda is to provide policy, legal and regulatory responses to address them. Emerging policy and legal responses to these challenges are discussed in this paper's conclusion.

Conceptual and methodological innovations for digital social policy
In addition to greater empirical knowledge of digital technologies in social policy, critical digital social policy research requires both conceptual and methodological innovations. Conceptually, four key areas are canvassed.
First, a digital social policy sub-discipline necessitates an ontology that incorporates digital technology into its remit. Social policy cannot be solely a study of people and institutions, but must recognise the real ways digital (and other) technologies shape and enact social policy and its effects. Latour () makes this point in referring to technologies (and nonhumans) as the 'missing masses' in understanding society. Such an ontology appreciates the ways in which both the social shapes the technological, and the technological shapes the social, thereby avoiding simplistic technological or social deterministic accounts. As with all socio-technical innovations, there are new opportunities and forms of knowledge and action, alongside closures of others. A range of social theoretical approaches can grapple with these ontological considerations, particularly in Science and Technology Studies (Fuchs, ; Matthewman, ). I have found Actor Network Theory (ANT) to be most helpful (Callon, ; Law, ; Callon, ; Latour, ) as it takes seriously the materiality of our world. Arising within, and partly in response to, a period of hyper social constructivism, the re-discovery of materiality is crucial for digital social policy, even if the operations of digital technology can seem quite immaterial and ephemeral. New philosophical approaches to materialism make up a key plank in this thinking (Verbeek, ; Ihde, ).
Second, having taken seriously the materiality of technology (and social policy), the conceptual challenge is to understand how this materiality is shaped and how it shapes us, in a way that is not deterministic. Here, the concept of affordance is key to the analytical work (Davis and Chouinard, ). Davis (), for example, examines the ways in which artifacts request, demand, allow, encourage, discourage, and refuse. Think of how algorithms determine eligibility for services or cut off benefits. Computer databases also structure the type of data that is collected and thus enable and constrain the nature of knowledge that can be known with them (Henman, ). Computers also make instantaneous calculations and circulate data across networks at close to the speed of light (Castells, ), and support the easy circulation and reproduction of digital data.
Third, a basic working knowledge and critical understanding of both digital data and algorithms is required. There are now emerging areas of critical data studies (Kitchin and Lauriault, ; Iliadis and Russo, ) and critical algorithm studies (https://socialmediacollective.org/reading-lists/critical-algorithm-studies/). These bodies of work highlight the way in which both digital data and algorithms are not ontologically, ethically, politically or socially neutral. They are created by humans (typically white, male, educated ICT professionals) who consciously or unconsciously embed their own ways of thinking and/or visions about how the world works and what forms of knowledge are constructed and important (Winner, ; Sandvig et al., ; Ruppert et al., ). Most acutely, given the current frenzy in global techno-political debates, a detailed and critical understanding of 'Artificial Intelligence' and machine learning is also required (Taulli and Oni, ).
Fourth, a critical digital social policy must take account of the nature and practice of power in a digital world. Traditionally, most critical approaches to digital technology have focused on operations of surveillance (Lyon, ). Theoretically grounded approaches to power include those drawing on Marxist and Weberian traditions (Castells, ; Schroeder, ). With his broader conceptualisation of power, including its disciplinary, productive and capillary-like manifestations, Foucault's work has stimulated a significant body of scholarship. Drawing on Foucault's concept of governmentality, several authors have sought to clarify how digital technology governs (Henman, , ) and even enacts an 'algorithmic governmentality' (Rouvroy, ; Morison, ; Rouvroy and Stiegler, ; Henman, ). Such a mode of rule involves governing segmented peoples and populations differentially via profiling and anticipatory assessments of risk, danger, and prosperity. Political economy approaches to digital technology recognise the increasing role of global tech firms operating in highly intertwined contractual relationships with states, enacting surveillance capitalism (Zuboff, ; Prainsack, ).
Social policy research can also benefit from innovative digital data, research tools and methodologies. Digitisation of data and new data platforms and collection tools have expanded the range, variety, and volume of data with which to examine social policy. In addition to enormous government administrative digital data sets, digital data can be obtained from social media platforms, websites and digital collection tools. The widening diversity of digital data also provides the basis for new ways of interpreting social problems and creating policy solutions. For example, geo-coded data have enhanced our capacity to understand the geographical distribution of social issues and develop responses (Wise and Craglia, ; Ballas et al., ).
Under a broad umbrella of 'computational social science', digital research methodologies include social media analysis, text analytics, social network analysis and computer modelling (Cioffi-Revilla, ; Alvarez, ).
Social media platforms provide spaces for digital ethnographies and analysis of posts. Scholars have studied people's attitudes to social issues (Bright et al., ) or social policy (Brooker et al., ), and fed these into policy decision making (Simonofski, Fink, and Burnay, ). Studying Twitter hashtags (#) has been particularly popular for understanding the politics of social issues (Carr and Cowan, ; Ince et al., ).
Computational text analysis provides the means by which large textual datasets can be analysed to apprehend the diversity of topics covered, including over time and space, such as media framing of refugees in Europe (Heidenreich et al., ), comparing policy responses to COVID-19 (Goyal and Howlett, ), understanding the environment-poverty nexus (Cheng et al., ), or examining imagined versus real care relationships (Ludlow et al., ).
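At its simplest, such analysis counts how often framing terms appear across a corpus over time. The sketch below illustrates this; the three one-line "documents" and the choice of tracking the term "conditionality" are invented for illustration only.

```python
# Hedged sketch: the simplest form of computational text analysis,
# tracking term frequencies across a corpus of documents over time.
# The documents below are invented for illustration.
import re
from collections import Counter

corpus = {
    2019: "welfare reform and benefit conditionality debated in parliament",
    2020: "pandemic welfare payments expanded and conditionality suspended",
    2021: "welfare conditionality reinstated amid fraud concerns",
}

def term_counts(text: str) -> Counter:
    """Tokenise on word characters and count term frequencies."""
    return Counter(re.findall(r"\w+", text.lower()))

# Track how often a framing term appears in each year's documents.
for year, text in corpus.items():
    print(year, term_counts(text)["conditionality"])
```

Real studies of media framing scale this same logic up with larger corpora and statistical topic models, but the underlying move, from text to countable terms and topics, is the one shown here.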
Networks, constituted as connections between people, things, or ideas, provide alternative ways of understanding the social policy world. Supported with visualisation tools and network metrics, networks of policy and social service institutions, communities and collaboration (Devereaux et al., ; McNutt and Pal, ; Henman et al., ; Henman and Foster, forthcoming) and social movements (Ackland and O'Neil, ) can be charted by scraping websites and social media platforms.
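A scraped hyperlink network reduces to a list of edges, from which simple metrics identify central actors. The sketch below shows a degree count over an invented network; the organisation names and links are hypothetical assumptions, not real data.

```python
# Hedged sketch: representing a hyperlink network between policy
# organisations and finding the most-connected node. The organisation
# names and links are invented for illustration.

links = [  # (source site, target site) pairs, e.g. scraped hyperlinks
    ("charity_a", "gov_dept"),
    ("charity_b", "gov_dept"),
    ("charity_a", "charity_b"),
    ("think_tank", "gov_dept"),
]

def degree_centrality(edges):
    """Count connections per node, treating the network as undirected."""
    degrees = {}
    for a, b in edges:
        degrees[a] = degrees.get(a, 0) + 1
        degrees[b] = degrees.get(b, 0) + 1
    return degrees

# The most-linked node is the most central actor in the network.
degrees = degree_centrality(links)
print(max(degrees, key=degrees.get))  # prints "gov_dept"
```

Libraries such as networkx offer richer metrics (betweenness, clustering, community detection), but the basic representation, nodes and edges derived from scraped links, is as above.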
Online tracking tools have also been used for (quasi-)experiments to better understand how people respond to political and policy changes (Margetts, ), and to assess the usability of government websites for citizens finding information about public services (Henman and Graham, ).

Concluding discussion
Given the growing entanglement of digital technologies in every aspect of our lives, social policy scholars must pay greater attention to them and their positive and negative contributions to social policy processes, including policy formation and enactment. Digital technologies should not only cause dread, but also spark opportunities to advance shared social policy objectives and values. As the range of dates of references included in this paper suggests, many of the realities and issues of digital technology in social policy are not new, with emerging technologies accompanying similar dynamics and challenges. To this end, in critically learning from our past I suggest the following four areas for particular focus in future social policy research.
First, co-design must be a central objective of digital social policy. Digital technologies are typically designed around the agendas of government agencies (and global technology and consultancy firms). Even when done with good intent, they are designed with 'imagined users' who are often white, highly educated, middle class, and mostly men, which reinforces social disadvantages (Benjamin, ). Not only do multiple perspectives need to be involved in designing digital technology for social policy and social policy for digital technology, but social policy researchers and advocates need to engage with people in identifying digital technologies that address the needs of social policy recipients, just as persons with disability have long created alternative technologies centred on their experiences.
Building policy and legal innovations that steer digital technologies in human-centred ways is a second important task. The EU's  General Data Protection Regulation (GDPR) and  proposed Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) provide important legal frameworks that need wider consideration, implementation, and critique. Similarly, in  the UK published A guide to using artificial intelligence in the public sector. These provide important protections and frameworks that require continuing vigilance to ensure that governments' digital actions comply with the intent of the legal frameworks. A further necessary protection is to extend administrative review and appeal rights and practices beyond individual decisions to challenging the specifics of algorithms' code that structurally generate problematic decisions. This might occur by extending independent review institutions' remit to include the machinery of automated decisions.
Third, just as social policy and administration scholars gave considerable critical attention to New Public Management's reconfiguration of social policy thinking and delivery, finding it often resulted in damaging consequences for the most disadvantaged, scholars need to be continually alert to the way in which commercial technology and consultancy interests and proprietary software are being inserted into social policy through opaque relationships with social policy agencies (Brown, ). This is particularly pertinent as the history of private sector involvement in publicly funded services (e.g. public-private partnerships and outsourcing) is rife with reduced transparency and accountability through commercial-in-confidence provisions.
Fourth, digital technologies increasingly enable personalisation of policy and service delivery based on individual characteristics and circumstances. Such personalisation creates increasingly varied and divergent experiences of social policy. Consider how your Google search results, purchase recommendations, or social media feeds are shaped by your own histories and profiles. When social policy and administration is like this, it becomes harder to have a collective, shared experience of social policy and its institutions, and to appreciate how others experience these. Accordingly, we need to be mindful of the value of universalism alongside personalisation, lest we splinter ourselves further into a fragmented, individualised society, one in which we live in a Matrix, a system that risks becoming humanity's enemy.
Digital technology is only going to grow in its centrality to social policy and service delivery. It is well past the time that social policy researchers and advocates critically embrace it.