ECNL comments on the EU Digital Services Act

21-09-2020
Online platforms, as part of our online civic space, should be subject to high standards of transparency and accountability since their function is crucial to the development of opinions and therefore to the exercise of democracy.

ECNL submitted comments to the European Commission’s consultation on the forthcoming Digital Services Act package.

The European Commission’s survey sought to identify the main issues related to the provision of digital services and the governance of online platforms, in order to inform a legislative proposal to be published by the end of 2020 or early 2021.

The development of digital information and communication technologies has changed the ways in which we interact with one another and facilitated interpersonal interactions across time and place. As a result, the definition of “civic space” has evolved: today it includes the internet as a space for the exercise not only of the right to freedom of expression, but also of the right to peacefully assemble and to participate in public affairs. Social movements and democratic protests are often held online or organised with the support of online platforms. The right to peaceful assembly online was recently officially acknowledged as worthy of protection, regardless of the public or private ownership of the online space, by the UN Human Rights Committee in General Comment No. 37 on Article 21 of the International Covenant on Civil and Political Rights (ICCPR).

For these reasons, in our submission, ECNL focused on the parts of the survey dealing with the following topics:

1. Dissemination of illegal content and disinformation online, particularly since the outbreak of COVID-19

Since the outbreak of COVID-19, we have witnessed an increase in illegal online content inciting hatred and discrimination towards groups accused of spreading the pandemic. We have also seen a widespread increase in unverified pseudo-scientific information about the pandemic, rightly labelled by the WHO itself as “an infodemic”.

We are aware that major social media platforms have undertaken self-regulatory initiatives to swiftly identify and automatically remove content inciting hatred and discrimination towards groups accused of spreading the pandemic. However, we are concerned that when such measures are carried out through automated decision-making systems (e.g., algorithm-based curation or artificial intelligence-based tools), the way they function is not fully transparent to researchers and/or relevant civil society actors.

Cooperation mechanisms between digital services and authorities are welcome to tackle the spread of disinformation – including illegal disinformation, that is, information that is false and deliberately aimed at instigating harm or hatred against a person, social group, organisation or country. However, these types of mechanisms alone are not sufficient to address one of the major problems underlying the viral dissemination and impact of “fake news”: the public’s lack of trust in traditional sources of authoritative information, especially when it originates from governments or intergovernmental organisations perceived as highly politicised.

2. Appropriate measures to tackle dissemination of illegal content and disinformation online

We advocate that online content moderation should ultimately always require human review and intervention. The removal of illegal content online should take place only after a review process conducted by an independent, impartial and authoritative oversight body, on the basis of co-regulatory measures involving institutions, platforms and civil society stakeholders.

Furthermore, government law enforcement agencies requesting the removal of online content should also be subject to the same procedural safeguards, in order to avoid the risk of potential abuse of powers and politically motivated censorship.

Interested third parties such as civil society organisations or equality bodies that contribute to tackling illegal activities online should be regularly consulted by online content providers to help them assess the human rights impact of their content curation and moderation and to devise effective policies/community guidelines compliant with such rights.

We also endorse the recommendations outlined in the Joint Declaration on Freedom of Expression (FOE) and “Fake News”, Disinformation and Propaganda by the UN Special Rapporteur on FOE, the OSCE Representative on Freedom of the Media, the OAS Special Rapporteur on FOE and the ACHPR Special Rapporteur on FOE, namely that:

  1. States (the EU, in this case) should adopt measures to promote media diversity (including online);
  2. Regulatory measures should impose obligations on platforms to adopt minimum due process guarantees when they take action to restrict third-party content;
  3. Criteria for restriction/removal should be included in clear detailed policies or guidelines that should be adopted and regularly reviewed through a multi-stakeholder consultative process including civil society organisations;
  4. Platforms should also be mandated to provide detailed information on when they use automated processes (whether algorithmic or otherwise) to moderate third-party content and how such mechanisms operate;
  5. States should include digital literacy as part of the regular school curriculum and engage with civil society and other stakeholders to raise awareness about best practices in tackling disinformation.

Finally, we recommend that EU regulation should also promote appropriate fact-checking cooperation mechanisms between digital services and trusted independent (i.e., non-government) stakeholders, such as civil society organisations operating in the relevant field (e.g., humanitarian organisations in cases of natural hazards/war consequences, human rights organisations in cases of civil disorder/health crises, etc.), to provide checks and balances on mainstream media and third-party information. These cooperation mechanisms should also be publicly accessible, adequately explained to online users and subject to periodic review in consultation with the relevant stakeholders. Ultimately, we believe that the transparency of such mechanisms is pivotal to regaining citizens’ trust in the information they receive when crises emerge and threaten the fundamental rights of society.

3. Responsibilities (i.e., legal obligations) for online platforms and other digital services

We recommend that additional obligations be imposed on online content providers to offer clear notice-and-takedown mechanisms enabling end-users to flag potentially illegal/harmful content, as well as redress mechanisms.

The new EU regulatory framework should clarify that the obligation to promptly remove illegal content applies only when the request comes from an independent judicial authority or oversight body. Government law enforcement agencies requesting the removal of online content should also be subject to the same procedural safeguards, in order to avoid the risk of potential abuse of powers and politically motivated censorship.

Online content providers should not rely exclusively on automated processes (algorithm-driven or AI-based mechanisms) to detect potentially illegal content but should always support such processes with human review.

Online content providers providing access to third parties should be mandated to compile transparency reports that clearly indicate and explain the criteria used for content curation, moderation, and data collection and analysis, as well as the human rights impact assessment conducted before adopting them. Platforms should also be mandated to provide detailed information on when they use automated processes (whether algorithmic or otherwise) to moderate third-party content and how such mechanisms operate.

As part of their mandatory transparency and accountability reporting requirements, online content providers should disclose the identity of whoever pays for political advertising, whether through direct payments or indirectly via intermediaries. This obligation should also apply to online content providers based in non-EU countries but reaching EU-based users.

Practices such as data collection and analysis for behavioural microtargeting purposes should be restricted and subject to auditing procedures – and in any case, should never be carried out without the explicit opt-in consent of online users.

Sanctions for non-compliance should be pecuniary and proportionate to the global revenue of the company, along the lines of those imposed by the GDPR. Enforcement actions in case of non-compliance should be conducted by an independent authority at national level (e.g., data protection authorities, media regulators etc.).

Last but not least, we recommend that EU regulation should mandate online platforms to grant data access to researchers and civil society organisations conducting public-interest studies of how the platforms function. The criteria defining such public interest may be further detailed in EU regulation, and a redress mechanism in case of denial of access by the platform should be provided at national and, ultimately, EU level.

4. Liability regime of digital services acting as intermediaries

The distinction between ‘active’ and ‘passive’ intermediaries outlined in the current E-Commerce Directive has become obsolete. New EU regulation should take stock of the fact that online content providers are nowadays almost all, to different extents, “active” processors of the data and information stored on their platforms. A new distinction between a “passive” provider – as such, exempt from liability – and an “active” provider – as such, subject to liability in certain circumstances – should be adopted, as also suggested by the Advocate General of the Court of Justice of the European Union: namely, internet service providers should only be considered “active” – and therefore subject to liability in certain circumstances – when they have “actual knowledge” of specific illegal activity or information on their platforms.

Future EU regulation should also refrain from requiring “proactive” monitoring or filtering to detect potentially harmful content, which risks amounting to “pre-publication censorship”. General monitoring obligations automatically entail the use of content filtering systems that are not end-user controlled and, as such, constitute a disproportionate restriction incompatible with global freedom of expression standards.

Finally, we do not recommend requiring online content providers to take action on content that may be harmful or disturbing but is nonetheless protected by international human rights standards on freedom of speech. On the contrary, platforms’ policies/community guidelines on content curation and moderation should also be bound by such standards and should detail how compliance with them is assessed when deciding on content removal or restriction of access.

The need to commit internet platforms to higher requirements of transparency and accountability and to conduct multi-stakeholder-inclusive impact assessments of their mechanisms reflects their ever-expanding role in our present and future society. All online platforms providing a space for third parties to assemble virtually and to exchange ideas and information should be subject to high standards of transparency and accountability – regardless of their market size or dominance – since their function is crucial to the development of opinions and therefore to the exercise of democracy.