Dr. Philip Di Salvo is a researcher on whistleblowing, investigative journalism, Internet surveillance and the relationship between journalism and hacking. At USI – Università della Svizzera italiana, he teaches journalism at Master and Bachelor levels. Philip received his PhD in Communication Sciences from USI in summer 2018 with a dissertation on the adoption of encrypted whistleblowing platforms in journalism. Since 2018, he has also been a lecturer at NABA – New Academy of Fine Arts in Milan, Italy, and, since 2021, a Visiting Fellow at the London School of Economics and Political Science (LSE). As a freelance journalist, he writes for Wired, Motherboard/Vice, Esquire and other publications, covering the social impacts of technology. At USI, Philip also works as the Italian editor for the European Journalism Observatory (EJO). He has authored two books: “Leaks. Whistleblowing e hacking nell’età senza segreti” (LUISS University Press, Rome, 2019) and “Digital Whistleblowing Platforms in Journalism. Encrypting Leaks” (Palgrave Macmillan, London, 2020).
The rise of the “datafied society” paradigm has been accompanied by several optimistic claims about its transparency and its potential positive impacts on our social lives. The growing centrality of big data and other forms of quantification, in particular, has frequently been hailed as a revolution in decision making and accuracy: according to these views, the increased availability of data would bring new opportunities for transparency in various sectors, including politics, the economy and, inevitably, technology itself.
As the scholars Arne Hintz, Lina Dencik and Karin Wahl-Jorgensen have written, to live in the “datafied society” also means to “increasingly enter the sphere of civic activity – and develop agency – through digital media.” Consequently, public life is now shaped by how digital media and, more broadly, platforms of datafication function, operate, are regulated and, increasingly, produce side effects. A critical look at these infrastructures shows that the original promises of transparency and equality have been neglected in various respects.
Several scholars have warned about the risks of digitalization and datafication: in fact, the “datafied society” shows various instances of opacity and increased secrecy. Among these, surveillance is probably the most debated, as private and public entities have gained new powers (both economic and political) to monitor people through technologies and digital infrastructures. Yet signs of opacity are also clearly visible in other fields, such as the role and power of algorithms in critical public functions.
Frank Pasquale, in his seminal book The Black Box Society, highlighted a variety of instances where different forms of secrecy have recently gained prominence and power at the crossroads between technology and policy. The “black box” is an apt metaphor for how many aspects of the “datafied society” work today: while technologies and the players using them have developed stronger and broader monitoring capabilities, citizens are frequently left in the dark about how those technologies actually function. As Pasquale argues, this happens for various reasons, including design choices, legal protections and deliberately sought opacity. This increased opacity poses serious challenges to journalism, whose core function is to investigate and challenge forms of power, expose abuse and demand accountability. Even when exercised digitally, power is power.
Three journalistic cases that emerged in the past few years offer clear insights into how journalism has responded to, reported on or coped with various “black boxes” of the “datafied society”. These examples show the inherent problems of technological black boxes and how journalists have responded to them, frequently by joining forces with hacktivists and whistleblowers, or by relying on innovative forms of “hybrid” journalism based on various forms of computational reporting.
The first “black box” of interest for this analysis is the role of major technological platforms’ algorithms in content moderation and distribution and, in particular, how automated algorithmic decision making may affect free speech and other civil liberties. In various instances, intervention by platforms, especially Facebook, has occurred with little transparency and limited accountability. In other, even more controversial contexts, such as policing, decisions taken by algorithms have caused people to be wrongfully incarcerated. The inner workings of algorithms and machine learning systems can be extremely opaque, because of their technical complexity or because their features are frequently hidden and protected as corporate secrets, powering the successes of major technological companies. Consequently, investigating these systems is complex and poses various access challenges for journalists who want to shed light on their impact on society at large.
When it comes to state surveillance, a “black box” emerges from both technical and legal opacity. Whereas secrecy around surveillance operations is inevitable in some instances (national security, for example), state surveillance powers are frequently exercised without the necessary democratic accountability and in a context of overclassification: the tendency of governments to mark far more documents as “classified” than needed, or even allowed, and to give too many people “clearance” to view sensitive information. This is clearly visible in how “intelligence sharing” agreements among countries work. These agreements are usually considered the legal infrastructure of mass surveillance and dictate how, and how much, countries can share the results of their surveillance operations with each other – activities that are frequently prone to abuse of civil rights and freedoms. In the wake of the Snowden revelations, the NGO Privacy International investigated the matter (https://epic.org/coalition/intelligence-arrangements/Briefing-Intl-Intel-Oversight.pdf), claiming that “there is an alarming lack of effective oversight of secret surveillance in a range of countries around the world.” In fact, some of these agreements have operated in de facto secrecy for decades, with extremely limited oversight and accountability, or none at all.
In parallel, contemporary surveillance is also built around the business of private companies producing spying software, used around the world by law enforcement and intelligence agencies, including those of non-democratic countries, where these technologies frequently become tools of oppression against journalists, activists and political dissidents. This market is notoriously nebulous, secretive and prone to abuse and misuse. Furthermore, legal frameworks for the use of spyware are limited and inadequate, even in established democracies, Europe included (https://cild.eu/en/wp-content/uploads/sites/2/2017/06/TrojanCo-EN.pdf).
These three “black boxes” represent fundamental issues of our time, shaping the relationship between technology and politics and its most controversial implications. Journalistic coverage of these issues has recently emerged in various ways, aiming to break the veil of secrecy surrounding these thematic areas. Whistleblowers – insider sources who reveal to journalists otherwise inaccessible materials and information in the public interest – have been a fundamental resource. For instance, various stories emerged thanks to documents leaked from inside Big Tech companies regarding their controversial algorithmic content moderation policies, such as The Guardian’s investigation “into Facebook’s internal rulebook on sex, terrorism and violence” or The New York Times’ story about “Facebook’s Secret Rulebook for Global Political Speech”, among others. Similarly, the Cambridge Analytica scandal was also made possible by cooperation between whistleblowers and journalists.
Moreover, going back to 2013, revelations about mass surveillance were also made available through Edward Snowden’s whistleblowing. Almost a decade has passed since the first publication of the details of the NSA’s surveillance activities, and we are arguably still assessing the impact of that story on journalism. It is no accident that one of the most influential academic books about the Snowden case is titled Journalism After Snowden, underlining the tectonic impact of the 2013 events on newsmaking. When it comes to the surveillance market, hackers have played an important part in challenging secrecy and opening the black box. For instance, various investigations into these companies’ activities started thanks to the practice of “public interest hacks”, which the scholar Gabriella Coleman defines as hacks “that will interest the public due to the hack and the data/documents.”
Here, hackers not only broke into the networks of surveillance companies, but also leaked documents and materials to the public, sometimes reaching out to journalists directly. This happened with Hacking Team, Gamma Group, Retina-X and FlexiSpy, among others, whose unethical and controversial business operations were exposed in this way. In a recent open access paper published in Studies in Communication Sciences (SComS), Prof. Colin Porlezza and I analyzed the risks and opportunities (and the profound ethical challenges) that journalists face when they decide to work with hacked materials. While this is certainly a controversial area, where establishing what is actually in the public interest can be complex, at the current stage “public interest hacks” have arguably been the most effective sourcing strategy for investigating the corporate surveillance black box.
Beyond the contribution of whistleblowers and other disruptive sources, some newsrooms have also adopted a proactive investigative attitude towards technological “black boxes”, building innovative tools and sourcing strategies, in particular to investigate algorithms. This is an extremely advanced area of “computational journalism”, where crowdsourcing strategies, audience engagement and data-driven reporting meet. ProPublica pioneered this approach by involving its audiences in the data gathering needed to analyse how platforms track users’ data, for a series of stories titled “Opening the Black Box”.
In the investigation dedicated to Facebook, for instance, ProPublica’s hacker-journalists coded a dedicated browser extension capable of monitoring Facebook’s tracking and made it available to readers, asking them to contribute their results – in a privacy-respectful way – to the investigation. Reporter Julia Angwin, who led the “Opening the Black Box” project at ProPublica, later co-founded The Markup, a digital news outlet entirely dedicated to investigating algorithms and the social impact of technology. The Markup recently launched the “Citizen Browser” project, a desktop application that monitors how Facebook shows different content to different people, aiming to make sense of how the company’s algorithms work.
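To make the crowdsourced-audit approach more concrete, here is a minimal, purely illustrative sketch in Python of the aggregation step such a project might run after volunteers’ browser extensions submit anonymised reports. All field names, categories and data below are hypothetical assumptions for illustration, not ProPublica’s or The Markup’s actual code or data.

```python
from collections import Counter, defaultdict

# Hypothetical anonymised reports: each volunteer's extension submits only
# a coarse, self-selected audience bucket and the ad categories they saw.
# No identities or raw browsing data ever reach the newsroom.
reports = [
    {"bucket": "under-35", "ad_categories": ["housing", "jobs"]},
    {"bucket": "under-35", "ad_categories": ["jobs", "credit"]},
    {"bucket": "over-35", "ad_categories": ["housing"]},
]

def aggregate(reports):
    """Tally ad categories per audience bucket, so reporters can compare
    what different groups were shown without handling personal data."""
    counts = defaultdict(Counter)
    for report in reports:
        counts[report["bucket"]].update(report["ad_categories"])
    return {bucket: dict(tally) for bucket, tally in counts.items()}

print(aggregate(reports))
```

The design choice worth noting is that aggregation happens over already-anonymised, coarse-grained contributions: the privacy protection comes from what the extension chooses not to collect, not from the newsroom’s processing.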
This is inevitably a short and limited overview, but technological black boxes pose serious and urgent questions for civil rights and democracies. Journalism is therefore needed more than ever to expose these distortions, stimulate debate and inspire public intervention. Moreover, we should keep in mind that technology is socially created, and an informed citizenry is therefore needed to decide which technological future we want to live in (and which we do not). Yet lack of transparency and systemic secrecy are increasing in both the public and private sectors. The boundaries between the two are also blurring, leaving a serious question of power imbalance in how technology is adopted and what its social consequences are.
Sometimes, from a journalistic perspective, acts of “radical transparency” seem to be the most effective answer. Consequently, for journalists, collaborating with “interloper” players – such as hackers or hacktivists – can bring great value, despite operating in an ethical grey zone. Moreover, as journalism becomes increasingly “hybrid” and open to external influences, relying on non-traditional players and practices is a fundamental asset for investigating the most obscure areas of the “datafied society”.