Jonathon Penny

Jonathon Penny spoke 6 times across 1 day of testimony.

  1. Jonathon Penny, Prof. (Law – York University)

    Thanks very much, Professor MacKay. I'd just like to thank Commissioner Rouleau for the invitation to be a part of this conversation and to join this distinguished panel as well. And thank you for the questions, Professor MacKay; I think those are two really important questions in light of the issues before the Commission. I'll speak to two general categories of regulatory framework. The first is law and statutory forms of regulation, and within that broader category I'll speak to some sub-categories of different kinds of laws that apply to social media content. I'll also speak to social media-based regulation, the kinds of measures, techniques, and tactics that social media companies themselves employ to deal with challenges when it comes to social media content.

    Speaking first to law and statutory regulation, this would cover any statutes and laws that govern social media content and related activity, that is, the behaviour and actors involved in the generation of content and its moderation, editing, and regulation on social media platforms and services. I would say that the single most defining element of social media legal and regulatory frameworks in Canada is that there is no single, general law or regulation that covers social media content in Canada. Instead, it really is a patchwork of different laws and regulations that apply to and regulate different aspects of social media content.

    Within that broader framework, I'll start with one sub-category, that being intermediary laws. Why do we refer to these laws as "intermediary laws"? The idea is that social media companies are intermediaries; that is, they are links and connectors between the users who generate, post, and express content on social media services and the audience who consume that content. So, generally, we're talking about laws that create legal rules and duties in respect of content that's generated by users and how that content is fed and conveyed to audiences and other users. A few examples of those in Canada: we have certain specific laws that cover, for example, copyrighted material, where there are very specific legal duties on social media platforms. And "platform" is another term that I'll use during my remarks before the Commission. The concept of a platform is something that scholars and experts today typically use to refer to the larger, more influential social media companies: Google, Twitter, Facebook, and others. You can think of the term as conveying the fact that these companies amplify certain voices, the same way a stage or a platform in a town square allows someone to speak to a broader audience and have their voice heard. That, in a way, is how we can think of these social media companies or social media platforms: they provide a way of amplifying voices and content. Copyright law is one example of an intermediary law: it creates potential liability for these social media companies and platforms, but they can, at times, avoid that liability so long as they meet their duties.
    Quebec also has a specific law that creates a safe harbour, or legal protection from liability, for social media platforms if they remove illegal or unlawful content once they have notice of it. When we move beyond these defined intermediary laws, there are also what I would call general statutory laws and rules that aren't specifically designed for social media platforms or intermediaries but have implications for them, and indirectly create legal duties that social media companies and platforms have to abide by. A great example of this would be defamation law: if a social media platform becomes aware of defamatory content that's been posted, once they have notice of it, then to avoid liability they have to remove that content. Another example is criminal law. Under certain criminal law provisions, for example where a court order is put in place, social media companies may have to remove certain kinds of unlawful content, be it child pornography, non-consensual intimate media posted without the consent of the victim, or terrorist or hateful propaganda. Those are other examples of how criminal laws can apply to these platforms, creating similar legal duties.

    Then there is a third category of laws in this context, laws that impact, maybe not social media content directly, but the activities around social media content and the broader ecosystem. User information and user data are critical; they fuel the business model through which a lot of these social media platforms make money, monetizing user content. So privacy laws and data protection laws are another subcategory of laws, regulations, and statutes that impact content production, to the extent that they create legal rules and duties around how people's information and data collected on these platforms can be used, shared, and distributed. When there are restrictions on that data, it presumably becomes more difficult to use that information to target users. Typically, social media platforms use user data and user information for targeted advertising, to monetize people's activities on social media platforms, but when we look at disinformation and misinformation, user information and user data are also used to target particular users or to influence disinformation campaigns online and offline. In Canada we have a few examples of such laws, like the Personal Information Protection and Electronic Documents Act, PIPEDA, which creates rules around the collection, use, and sharing of personal information by organizations in Canada. Interestingly, however, there are loopholes in PIPEDA; for example, political parties are not constrained by it. And that, I think, is an important hole that needs to be addressed.

    Last, of course, but not least, there are social media-based regulations. I use regulation in the broader sense here; in this case, I'm referring to the rules and policies that social media platforms and companies themselves develop to deal with content and other issues on their platforms. That can include content moderation policies; that can include privacy policies. Typically these kinds of measures are implemented through policies that are conveyed to users, but they're often enforced through the actual design and features of the platforms themselves.
    There can be human-based forms of social media content regulation and moderation but, typically, the ones we'll be talking about a lot in our comments today are algorithmic. Algorithms and technology are critical in helping larger social media platforms deal with content, and deal with offensive content. Typically a lot of these measures are voluntary. As I said, there are not a lot of specific laws that create duties and obligations on these platforms to engage in social media content regulation and moderation, but they do so, often for business reasons. And because they do so for business reasons, when their business interests don't align with certain content moderation, the measures they take can often be seen as ineffective.

    So that's a broader look at the landscape. I'll come to the final question that Professor MacKay offered there, and I think it's an important one. There are, of course, many impacts of these kinds of increased regulation, and one impact in particular that my research has focused on in recent years is the notion of chilling effects: the idea that increased regulation, but also surveillance, both by government, law enforcement, and security agencies and by the platforms themselves, can deter expression. In my research I found that when users become aware of increased surveillance, data collection, and data tracking, it can have a chilling effect; that is, a deterrent effect on people's willingness to speak, engage, and share, both online and offline. And when it comes to more marginalized populations, my research has found that those groups are disproportionately impacted. There are a number of reasons for that which I can get into later, but these are groups who are already facing important barriers to participation in society, discrimination, racism; when you add on top of that concerns about targeted surveillance and targeted law enforcement action, you end up with a situation where groups who are already marginalized from critical public and democratic discussion are further marginalized by these kinds of measures.

    At the same time, I should say, it's not an entirely simple picture either; that is to say, it's not simply that increased regulation means a chilling effect. What I've also found in my research is that carefully tailored laws can facilitate greater participation and speech. Laws don't just deter behaviour with threats; they also have expressive power. That is, laws convey values, they convey messages as to what a society views as valuable. And when a law is enacted, it can send a message to certain marginalized groups that their speech and contributions are valued, and that can help facilitate speech. So I'll leave my comments there, thank you.

    33-016-02

  2. Jonathon Penny, Prof. (Law – York University)

    Sure. Sure, thanks. I thought I'd pick up on a few nice comments from my colleagues here. When we talk about freedom of expression, speaking as a scholar, there are certain assumptions behind why we see it as a key value, why we want to protect and cultivate it in a free and democratic society. One of the assumptions of freedom of expression that lawyers often talk about is the idea of a marketplace of ideas: if you allow people to engage with each other, the best ideas will, in the end, rise to the top and win the day. But when we move to the new digital public sphere, as Professor D'Orazio has spoken quite eloquently about, the reality is that a lot of the assumptions we have about debate in the public sphere, the debates we would have in the town square, discussions in broader society, don't hold. In an online context, it's quite clear that the platforms have their thumb on the scale; that is, platforms are designed to favour certain kinds of speech, behaviour, and activities, and often it's the kinds of activities and speech that they can monetize. One of the unfortunate realities of the social media landscape is that more harassing, polarizing, tribalistic, hateful engagement often leads to additional polarization amongst different groups, and it's that kind of engagement that these platforms often favour at the expense of more civil discussion and discourse. I think that also leads us to the point Vivek raised earlier on, this notion of surveillance capitalism: what are the business models that encourage and incentivize this kind of behaviour, where this kind of activity and expression is favoured? As Professor Laidlaw set out quite eloquently earlier, it drives away certain marginalized groups, women and visible minorities, who are more often targeted by this antisocial behaviour on platforms, which additionally skews the marketplace of ideas that we lawyers love to talk about.

    33-043-16

  3. Jonathon Penny, Prof. (Law – York University)

    Sure, thank you, Professor MacKay. It's Professor Penny speaking. I'm not sure these comments are directly responsive to that question, maybe in part, and some of this comes back to the broader regulatory challenges on these questions as well. Back in 2019, I was a visiting researcher at the Technology and Social Change Project at Harvard Kennedy School's Shorenstein Centre. We looked at what we called disinformation and media manipulation campaigns. The reason we used the specific term campaign was that these were examples of coordinated efforts to manipulate media and to spread disinformation. In particular, we looked at a number of case studies, including some case studies of disinformation and media manipulation campaigns during the 2019 Canadian election. And picking up on a point that Professor Laidlaw made earlier, one of the challenges here is this: we've talked about surveillance capitalism, we've talked about business models and the challenges with these platforms, the business model that monetizes antisocial behaviour, and that includes disinformation and misinformation, for example. But at the same time, a lot of the most successful disinformation campaigns that we studied in 2019 were ones that had been seeded many years prior to 2019. These were stories and ideas, false stories and false rumours, that percolated in far-right social media groups, whether we're talking about the chans, 4chan and others, or far-right nationalist groups on Reddit, for example, where certain rumours and false stories had been percolating for years. But once the election comes around, there is greater media coverage. There was a coordinated effort, using tactics we describe as working these false rumours up the chain, to get larger, influential accounts and larger social media platforms to share this kind of information. And, of course, some of that disinformation even got through: we had mainstream journalists asking questions of politicians about rumours that had already been discredited and debunked by other journalists. The broader point I'm getting at here is that when we're thinking about how to deal with this from a legal, regulatory, even societal perspective, we can talk about the large platforms and their business models, and that's really important, I've also been banging the table on that count, but if we're going to be thinking through new regulatory frameworks, we also have to think about these more extreme, marginal communities where a lot of this hateful content and these disinformation campaigns are planned, coordinated, and conceived, and then later spread at critical moments like elections, where they mislead voters and have impacts.

    33-059-26

  4. Jonathon Penny, Prof. (Law – York University)

    Great. And I'd again just like to say it's been great to be part of this conversation with this excellent panel. Evelyn Douek is a professor at Stanford, and she has a line that she often uses in writing about content moderation: "Everything is a content moderation problem." When it comes to misinformation and disinformation, I take almost the opposite view; that is to say, I think only some things are a content moderation problem. When it comes to this particular challenge, disinformation and misinformation is a human behavioural problem. We've talked in this panel about coordinated inauthentic behaviour, which is certainly a key factor in this problem, and algorithms are a key part of it too. But the reason we keep seeing these disinformation campaigns, this media manipulation, and misinformation being shared at scale is that it works. People are fooled. People have psychological biases. Professor Laidlaw nicely lays out in her paper some of the psychological foundations of the disinformation problem: people have a confirmation bias, they look for information that confirms their own personal world view; they have an identity-affirming bias, looking for information and stories that they'd like to share with others because those stories affirm their own cultural biases and world view. And then on top of that there is social psychology: as a society we can talk about how we want to promote freedom of expression, pluralism, and these broader mainstream values, but there are also groups, extremist and ideological groups that exist on mainstream social media platforms, with an entirely different set of social norms, where trolling, harassment, and spreading false stories and rumours about people are celebrated and encouraged rather than discouraged the way they would be in broader society.

    So there's a human behavioural side to this, and if you go back to the definitions that Professor Laidlaw set out nicely at the beginning, you can see the real challenge if we are to draft specific laws to deal with content. Distinguishing between disinformation and misinformation is a question of intent: a person knowingly spreading false content, as opposed to someone who is doing so innocently and in good faith, because they like the story and it confirms their prior assumptions about the target of the story, for example. And so you see the challenge, from a regulatory and legal perspective, of tailoring very specific laws to deal with specific content, which is why I would join in Professor Laidlaw's recommendation. I agree. I like the idea of a more generalized duty, a duty on these platforms and intermediaries to act responsibly. In the UK context, it's been proposed and described as a generalized duty of care. Why do I prefer that to more specific laws that target certain kinds of content, although in some cases those will be necessary as well? Because it closes gaps. It can encourage platforms to deal with more than just the bare minimum of unlawful or offensive content, perhaps dealing with content that drives certain users from platforms: marginalized groups, visible minorities, women who are disproportionately targeted by abuse online. That kind of generalized duty can also get these platforms to deal with misinformation and disinformation as well.
    A generalized duty also means that we don't have to tailor separate laws for each context. Earlier I talked about more obscure places online where more extremist communities percolate and seed false rumours and stories that later get picked up by broader actors, larger platforms and, in some cases, mainstream media. The rules you would tailor for those kinds of online contexts would be far different from the legal rules you would tailor for large social media platforms. But if you have a more generalized duty that can be defined over time, through regulations and through more specific applications by courts, you can see how it can be applied both to the large-scale platforms and to those more obscure places online as well.

    33-066-22

  5. Jonathon Penny, Prof. (Law – York University)

    For sure. So just ---

    33-085-05

  6. Jonathon Penny, Prof. (Law – York University)

    Yeah, just very quickly, Professor Penny speaking, to add to my colleague Professor Laidlaw's comments here on the role of political actors. Something I mentioned earlier which I think really needs to be fixed in Canada is the glaring hole in our privacy and data protection laws, which presently do not cover political parties. We have documented cases both in Canada and abroad, the Cambridge Analytica scandal is a great example, where misappropriated personal information and user data were then used for targeted influence operations, disinformation, misinformation, all of that. And if you have political actors that have access to people's personal information and there are no constraints on it, that's a real problem for the challenge that this Commission needs to be addressing. So that's one.

    To come to the second point, again, this is a really hard problem, because anonymity really is important to broader democratic discussion. There are certain things one might be more willing to say when those comments are not tied to one's professional standing or identity, things one would not say otherwise because of the risk to one's job. So anonymity provides some protection for broader and more robust democratic debate, but there is, of course, a darker side to anonymity on social media platforms. It's often used as cover for more abusive behaviour, and that includes the intentional spread of false information as well as some of the other abusive behaviour we have talked about: harassment, intimate privacy violations, online abuse, all of that. Different platforms have come to different balances. Some platforms have a real-name, non-anonymity policy. Some, like Twitter, promote anonymity and are fine with it as a policy, though that may change under the new ownership; it's unclear. So I think that if there are going to be laws that address anonymity, we have to be really careful, because while you do have those concerns about abuse, you also don't want to undercut the ability of anonymous users to participate in robust democratic discussion that they otherwise wouldn't without that protection of anonymity. Maybe one balance is what you see on some platforms, where you have internal identity verification but public-facing anonymity; that helps avoid the bots and trolls that want to hide their actual identity and persona while engaging in more abusive behaviour.

    33-085-07