4 - Digital Technology during Times of Crisis: Risks to Society and Fundamental Rights

By Yves Poullet
Edited by Anne Brunon-Ernst, Université Paris-Panthéon-Assas; Jelena Gligorijevic, Australian National University, Canberra; Desmond Manderson, Australian National University, Canberra; Claire Wrobel, Université Paris-Panthéon-Assas

Book: Law, Surveillance and the Humanities
Published by: Edinburgh University Press
Published online: 18 November 2023
Print publication: 30 April 2023, pp. 61-85

Chapter
Summary
I. Introduction
In the introduction to his 2012 work entitled Crisis(es) and Law, Jacques Larrieu defines crisis as a ‘disorder’ causing ‘the disintegration of the norms that usually regulate society’. This disorder can in turn give rise to a ‘law of circumstance’ which is to be considered dangerous but which can also be a source of progress as it invites specialists to conceive of a ‘new legal system that draws lessons from the crisis’.
Digital technology is an important part of the response to crises, yet its use in the struggle against them plays a major role in disrupting the functioning of our legal system, in a manner that is undoubtedly more insidious than deliberate. Of course, the upheaval that digital technology causes in the legal system is not specific to times of crisis. What is unmistakably specific to such times, however, is how this tool is used during periods of unrest, and how its legitimisation makes the law forget its own foundations. If digital technology is sometimes the object of specific legislation, it is above all because it constitutes the very condition of the effectiveness of the regulatory measures taken by our governments, an effectiveness that risks negating certain freedoms.
Should the use of such effective tools therefore be abandoned? The answer is ‘no’, but it is undoubtedly necessary to limit their implementation with a legal framework adapted to these exceptional times. The goal of this reflection, therefore, is to guide the development of such a framework, in the same way as Gligorijević attempts to do in Chapter 5 of this collection.
The present reflection covers two areas of application. The first concerns the lessons learned during the struggle against the pandemic. The measures implemented today against COVID-19, or those envisaged for use against future pandemics, all have a restrictive impact on freedoms that goes beyond their simple limitation and sometimes represents a challenge to their very essence.
The role of artificial intelligence in disinformation

By Noémi Bontridder, Yves Poullet

Journal: Data & Policy / Volume 3 / 2021
Published online by Cambridge University Press: 25 November 2021, e32

Article (Open access)
Artificial intelligence (AI) systems play an overarching role in the disinformation phenomenon our world is currently facing. Such systems aggravate the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems raise ethical and human rights concerns of their own, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audit of very large online platforms' recommender systems and content moderation. While, with this proposal, the Commission focuses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the advertising-based business model of the web, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem.
4 - The internet and private life in Europe: Risks and aspirations

By Yves Poullet, Dean, Faculty of Law and Director of the Centre de Recherche Informatique et Droit, University of Namur, Belgium; J. Marc Dinant, Senior Lecturer at the University of Namur, Belgium
Edited by Andrew T. Kenyon, University of Melbourne; Megan Richardson, University of Melbourne

Book: New Dimensions in Privacy Law
Published online: 24 July 2009
Print publication: 2 November 2006, pp. 60-90

Chapter
Summary
Introduction
The reach of the internet grows day by day. There are currently over 2 billion users, and the number continues to rise. The services offered on the internet follow the same exponential trend. Electronic commerce promises ever more varied and ingenious applications, putting the world at one's fingertips with a simple click. Nevertheless, concerns have been raised that this virtual universe may bring about the end of our freedoms, especially with respect to privacy. The purpose of this chapter is to bring clarity to the debate and to offer some suggestions. The topic is a timely one in Europe. There are now two European Directives on privacy protection: the general Data Protection Directive 95/46/EC of 24 October 1995 and the more specific Privacy and Electronic Communications Directive 2002/58/EC of 12 July 2002. The latter replaces Directive 97/66/EC of 15 December 1997 on the processing of personal data and the protection of privacy in the telecommunications sector. A large number of documents generated within the European Community are also topical, including the European Parliament's 1999 report on Echelon; the European Commission's consultation paper on the surveillance by companies of employee internet use; the European Commission's communication on spam; and finally the draft Council Framework Decision on Data Retention. In addition, one has to mention the important work done by the Article 29 Data Protection Working Party on various privacy issues in order to harmonise the different national approaches.