
Protecting victims of online hate speech: the evolving role of the ECtHR



Ms Francesca Cassano

Francesca is a third-year PhD candidate in International Law at the Department of International, Legal, Historical and Political Studies of the University of Milan. Her research focuses on the regulation of hate speech within the international legal framework, with particular attention to the challenges posed by digital communication, online platforms, and content moderation. Francesca is also a qualified lawyer admitted to the Milan Bar.



Introduction


The advent of the internet and social media platforms has afforded unparalleled access to information, participation, and democratic debate. However, these developments have, in turn, given rise to a communicative ecosystem in which hateful expression can spread with extraordinary rapidity, reach, and persistence. Online hate speech is distinguished by its reliance on anonymity, virality, and transnational circulation, features that significantly differentiate it from its offline counterpart (UNESCO, Countering online hate speech, 2015, at 13). It is now widely recognised that the proliferation of online hate speech does not remain confined to virtual spaces; rather, it increasingly contributes to offline harm, violence, and social fragmentation.


Despite increasing awareness of the detrimental impact of hate speech, particularly in its online dimension, the international legal response remains fragmented and largely inadequate in addressing the scale, velocity, and intricacy of the phenomenon. In addition to the absence of a universally accepted and legally binding definition of “hate speech”, its regulation has been characterised by a persistent tension between competing values: on the one hand, the need to protect victims of hate speech effectively and, on the other, the need to safeguard freedom of expression as the cornerstone of democratic societies.


Amid this contentious context, this post begins with an overview of the key challenges related to digital hate speech and its regulation under human rights law, with particular reference to the UN’s core treaties. It then examines the unique regional approach to interpreting the European Convention on Human Rights (ECHR) to safeguard victims of online hate speech, focusing on the positive obligations derived from its case law.



The digitalisation of hate speech and the challenge of its regulation in the human rights law framework


At the universal level, the regulation of hate speech is primarily grounded in two UN core human rights treaties: the International Covenant on Civil and Political Rights (ICCPR) and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD). The ICCPR strikes a nuanced balance between protecting freedom of expression and prohibiting certain forms of incitement to hatred. Article 19 ICCPR guarantees the right to freedom of expression, while allowing restrictions under paragraph 3, provided that such restrictions are prescribed by law and are necessary and proportionate, inter alia, to protect the rights and reputations of others. Article 20(2) ICCPR goes a step further by obliging States to prohibit, by law, any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. As clarified by the Human Rights Committee (HRC), Articles 19 and 20 ICCPR must be read as complementary, with Article 20 operating as lex specialis (HRC, General Comment No. 34, 2011, §§ 50-51). The ICERD adopts an even more robust approach: under Article 4, States must criminalise the dissemination of ideas based on racial superiority or hatred, and must outlaw organisations and activities that promote racial discrimination. At the same time, Article 4 ICERD mandates that its implementation be carried out with “due regard” to the principles of the Universal Declaration of Human Rights and the rights enumerated in Article 5 ICERD, among which freedom of expression is expressly included (see CERD, General Recommendation No. 35, 2013).


While these provisions constitute the normative foundation for regulating hate speech at the international level, they were drafted in a pre-digital era and have not been consistently interpreted or applied to address the specific features and risks of online hate speech. This regulatory uncertainty has also fuelled broader debates about whether online hate speech should be conceptualised differently from its offline counterpart. In 2019, the then UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, questioned whether the speed, scale, and impact of digital communication require a rethinking of traditional definitions and regulatory approaches (UN Doc. A/74/486). His work highlights the transformative role of online platforms and the inadequacy of existing legal frameworks that treat digital spaces as mere extensions of offline forums (§§ 29 et seq.).


A further structural limitation of the existing UN framework is the limited scope of application of the relevant norms, which is ill-suited to capturing the full range of groups targeted by hate speech, particularly in its online manifestations. The international regulation of hate speech originally developed around ethnic and religious grounds, a focus that reflects the post-World War II context in which these instruments were adopted. Although historically justified, this limitation has led to uneven levels of protection across vulnerable individuals and groups. In particular, individuals and communities targeted based on sexual orientation or gender identity often fall outside the core protective logic of existing treaty provisions, receiving more fragmented and indirect protection compared to victims of racist hate speech (see Sękowska‑Kozłowska et al.).


Against this backdrop, the European Court of Human Rights (ECtHR or the Court) has emerged as a pivotal, albeit controversial, player in the development of more concrete standards for regulating hate speech. In recent years, the ECtHR has been increasingly tasked with adjudicating cases involving not only online hate speech, but also the responsibility of online platforms and States’ broader duty to protect victims from violations committed by third parties (see, inter alia, Delfi AS v. Estonia, 2015; Sanchez v. France, 2023; and Google LLC and Others v. Russia, 2025). Its evolving jurisprudence sheds light not only on the permissible limits of freedom of expression in the digital age, but also on the positive obligations incumbent upon States to prevent, investigate, and adequately sanction online hate speech that threatens the dignity of vulnerable groups and individuals.



Clarifying States’ obligations: the ECtHR’s 2025 jurisprudence on online hate speech


It is essential to provide a preliminary clarification concerning the Court’s general approach to hate speech before examining its recent case law on the protection of victims of online hate speech. This clarification is of particular importance in light of the absence, within the framework of the ECHR, of a specific provision equivalent to Article 20 of the ICCPR and Article 4 of the ICERD. The ECtHR’s case law on hate speech can broadly be divided into two categories. The first includes cases in which applicants complain of violations of their right to freedom of expression due to alleged illegitimate State interference. The second concerns cases in which applicants, as victims of hate speech, complain of State omissions in providing adequate protection (on this distinction, see this study). 


In the first strand of cases, the ECtHR assesses State interference with freedom of expression primarily under Article 10(2) ECHR, applying a tripartite test closely resembling that provided under Article 19(3) ICCPR, which requires that any restriction be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society. In parallel, and particularly in severe cases, the Court has also extensively invoked Article 17 ECHR, the anti-abuse clause, to exclude certain forms of hateful expression from the scope of Convention protection altogether. The recourse to Article 17, which operates as a threshold mechanism preventing applicants from relying on Article 10, albeit widely employed by the Court, has also attracted significant scholarly criticism, particularly for its potential to circumvent the proportionality analysis normally required under Article 10(2), thereby raising concerns as to legal certainty and the consistency of the Court’s reasoning (Cannie and Voorhoof).


In the second strand of cases, the Court evaluates States’ inaction primarily through Article 8 ECHR, which protects the right to respect for private life, often in conjunction with Article 14, which prohibits discrimination. This jurisprudential evolution reflects a growing recognition of hate speech as a phenomenon capable of causing concrete and cumulative harm to the dignity, security, and equal enjoyment of rights of individuals and vulnerable groups. Indeed, the exponential amplification of hateful expression through digital technologies and online platforms has intensified these concerns, multiplying both the reach of hate speech and the number of potential victims while exacerbating its enduring, systemic, and discriminatory effects (see the ECHR key theme on Articles 8, 13 and 14).


Focusing now on this latter line of cases, two judgments delivered by the ECtHR in 2025 are particularly relevant for the present discussion: Minasyan and Others v. Armenia (App no. 59180/15, 7 January 2025) and Ilareva and Others v. Bulgaria (App no. 24729/17, 9 September 2025). Although these judgments do not represent the first instances in which the Court has applied Articles 8 and 14 in combination in hate speech-related cases – see, inter alia, Aksu v. Turkey (2012), R.B. v. Hungary (2016), and Beizaras and Levickas v. Lithuania (2020) – they are nonetheless significant for the further development of the Court’s reasoning on the protection of victims of online hate speech.


Indeed, both cases under consideration concern the dissemination of hate speech online. In the case of Minasyan, the contested expressions were disseminated in an online newspaper article that publicly denounced the activities of LGBTIQ+ activists. In the case of Ilareva, the controversy surrounded the circulation of Facebook posts and comments that contained insults and threats against the applicants in connection with their work in defence of the rights of migrants and refugees.


Two particularly important aspects emerge from the Court’s rulings. First, they further clarify the scope of States’ positive obligations, particularly the duty to conduct effective investigations into online hate speech. Second, they reflect the adoption of a broader understanding of the subjective scope of protection afforded by the Court’s hate speech jurisprudence. This protection is no longer limited to “traditional”, historically marginalised groups. It extends to groups defined by sexual orientation, in line with the Court’s earlier case law on homophobic hate speech beginning with Vejdeland and Others v. Sweden (2012), as well as to individuals targeted because of their association with, or support for, such groups.

With regard to the first aspect concerning States’ positive obligations, the ECtHR examines whether domestic authorities have put in place a legal and procedural framework capable of providing practical and effective protection against online hate speech. While States enjoy a margin of appreciation in selecting the appropriate regulatory tools, the Court assesses whether the measures adopted are capable of adequately addressing the specific risks posed by digital environments, including the rapid dissemination and persistence of harmful content. Crucially, the online nature of the impugned conduct underscores the importance of procedural obligations, particularly the duty to conduct effective investigations to identify perpetrators and uncover potential bias motives. As Ilareva well illustrates, the failure of domestic authorities to pursue available investigative avenues or to make genuine attempts to identify and prosecute the authors of online comments contributed to a situation of impunity, which ultimately facilitated the escalation of hate speech into physical aggression against one of the applicants (§§ 139 et seq.).


Concerning the broadening of the personal scope of protection to so-called non-traditional marginalised groups, while rulings on homophobic hate speech are not new in the Court’s jurisprudence, the application of the concept of discrimination by association represents a significant development in these cases. In both judgments, the applicants were human rights defenders and activists who did not themselves belong to the targeted vulnerable groups but were exposed to hate speech because of their professional support for those groups. This expansion of protection has generated diverging views. On the one hand, extending States’ positive obligations in the field of hate speech may raise concerns for freedom of expression, particularly in the absence of clear and foreseeable criteria guiding State intervention (Alkiviadou). On the other hand, failing to afford protection to individuals targeted because of their association with vulnerable groups risks reinforcing the silencing effects of online harassment, especially where hate speech is used strategically to intimidate, discredit, or deter those engaged in human rights advocacy (Ilieva).


From this perspective, the Court’s approach underscores that leaving such individuals without effective protection may amount not only to a failure to address discriminatory harm but also to a form of secondary victimisation, capable of normalising hatred and hostility and discouraging legitimate civic engagement.



Concluding remarks


Recent ECtHR case law demonstrates a notable shift towards a more inclusive conception of victimisation, extending protection not only to historically marginalised groups but also to individuals targeted based on their affiliation with such groups. This evolution reflects the Court’s increasing awareness of the specific harms generated by online hate speech, which, due to its permanence, algorithmic amplification, and rapid dissemination, can produce cumulative, long-lasting effects on victims’ dignity, security, and participation in public life.


While this development enhances the Court’s ability to capture contemporary forms of harm, it also reveals the absence of clear, systematic criteria governing the scope of protection against hate speech in digital environments. From a broader perspective, although UN human rights treaties establish specific normative obligations, their limited material scope and lack of operational guidance render them ill-suited to fully address the structural challenges posed by online hate speech.


The Court’s approach, therefore, both advances victim protection and underscores the pressing need for clearer, more coherent standards to regulate online hate speech. Ultimately, the effectiveness of this jurisprudential evolution will depend on its capacity to offer more consistent guidance in a digital environment marked by scale, persistence, and fragmented responsibility.



The Human Rights in Context Blog is a platform which provides an academic space for discussion for those interested in human rights, democracy, and the rule of law. We are always interested in well-written and thoughtful comments and analyses on topical events or developments. Scholars from all disciplines, students, researchers, international and national civil servants, legislators and politicians, legal practitioners and judges are welcome to participate in the discussions. We warmly invite those interested in writing a post to send us an e-mail explaining briefly the relevance of the topic and your background as an expert. We will get back to you as quickly as we can. All contributors post in their individual capacity, and their opinions do not necessarily reflect the official position of Human Rights in Context, or any organisation with which the author is affiliated.


