Emily Laidlaw

Emily Laidlaw spoke 7 times across 1 day of testimony.

  1. Emily Laidlaw, Prof. (Law – University of Calgary)

    Yeah, thank you Professor MacKay, and thank you for having me. Good morning, Commissioner Rouleau.


  2. Emily Laidlaw, Prof. (Law – University of Calgary)

    Commissioner Rouleau, I -- so I co-chaired the expert panel with Pierre Trudel for Heritage Canada back in the spring, and our mandate was not to come to some sort of consensus, but to have a conversation about what really should be the responsibilities of social media. And I had the pleasure of sitting on the panel with Deputy Morin, who is with us today. And so we were not looking at what I would call the primary wrongdoers, which would be the individuals that are perhaps posting certain content online. We were more specifically looking at, okay, what are the roles of these platforms, or what John was just talking about as intermediary liability. And I think one of the most interesting things about our panel was that despite not having a mandate to reach consensus, we circled around and agreed on some of what I would call the core building blocks of what a law should look like for Canada. And I think that the first point I want to make is that we all concluded, I would say, that social media, or platforms, should have what I would call a duty to act responsibly. Now, this was something that came out of the work of the Commission on Democratic Expression, and this is something that has been explored in other countries. We're seeing it with the Online Safety Bill in the UK, with their duty of care. It could be talked about as a due diligence model in the EU with the new Digital Services Act; there it's risk management obligations. And a duty to act responsibly, the way that we were kind of talking about it and the way that I think about it, is that, you know, it's almost product safety: social media and other platforms should take care in the design of their spaces to be responsible, to protect their users from harm. It's not about perfection, but it's about turning their minds to, how do their, you know, recommender systems work? What content moderation policies do they have in place?
What might be the unintended impact of this on particular groups? But it's not just about protection from harm. It also should be about protection of human rights, about the right to freedom of expression, about the right to privacy, to equality. One of the fears about imposing too strong or stringent obligations on social media is that it might incentivize them to put in place kind of just blunt tools; right? An automated tool that is going to remove any sort of content that might have the whiff of hate speech or any sort of extremism. But often, hate speech is quite -- you know, the line between that and some more political views, what we would call expression that's core to democracy, that line between the two can be somewhat fine. So blunt automated tools are poor solutions. So if there's an obligation then to have a commitment to and realize the right of freedom of expression, then social media will have to think through, okay, what do we want to put in place that addresses maybe some of the risks of harm -- maybe we have a lot of children that use this space -- but also, what kind of tools do we put in place that maybe aren't going to have an unintended impact on freedom of expression, or end up being a system of surveillance? The other aspect that was key that we discussed was transparency. You know, one of the issues that we have is that it's not easy to lift the lid to know how the algorithms work and how the decision making works for the various platforms on which we rely for discourse. And so almost kind of radical transparency is necessary, and that includes access by researchers and other civil society groups to how these companies kind of operate and to their algorithms, as a form of accountability. This is something that's been implemented in the EU with the new Digital Services Act. The other thing that we circled around is the idea that we need a regulator. I mean, it is not realistic that we go to courts to address many of these issues.
Often these are high volume. They are low value. And the court process is really just too slow and inappropriate to be dealing with many of these issues. What we need is a regulator like a Privacy Commissioner, but with the power to impose fines, that can audit the systems of these companies. It's about the systemic risk of the various social media. So they can investigate the companies, they can audit their behaviour, they can help develop codes of practice working with civil society, working with industry, and they can play an education role with the public. Now for some of the tougher issues. I think that, you know, one of the challenges that Heritage Canada faces -- and I am sitting here just as interested as everyone else about what they're going to propose -- would be scope, what's in and what's out. I mean, if we're realistically looking at creating a regulator, we can't impose all sorts of content and toxicity of everything that's out there on a regulator to investigate. Previously, they kept the scope quite narrow: terrorist propaganda, hate propaganda, violent expression, intimate image abuse, child sexual abuse and so on. And I think people quite rightly, including myself, said, but there are so many other types of toxicity out there that are unlawful that should be addressed. But the issue is that the regulator has to start somewhere. So at least where I've landed is that there is real justification, I think, for a regulator to maybe be more narrowly scoped at the beginning, and then it can build its capacity from there. One of the other issues would be, you know, what platforms should we cover. I've been using quite loose language here saying social media and platforms, but we often talk about it as the internet stack. Are we just talking about social media, the top layer? Are we going down a level and looking at domain name registrars? Are we looking at internet service providers?
Are we looking at, you know, companies like Cloudflare that provide cyber security protection and so on? I think this is quite a controversial area, but when we're thinking about it in terms of corporate responsibility, maybe their obligations should be different, but maybe there should be some responsibility there through legislation. The other, then, is mis, dis and malinformation. It is, I guess, one of the existential threats that we face. And there is a real concern, though, that in legislation this could become almost the poison pill. Because as I get into -- you know, in the next section we're going to talk a bit more about some of the legal issues, but I'll just, you know, drop one point right now, which is that there's quite a bit of mis and disinformation content that is lawful, or what we'd call lawful but awful. So for the government to create legislation that targets lawful expression likely won't survive constitutional scrutiny. But it's a very different thing to say, look, a company needs to be responsible, much like, you know, safe cars, about, you know, putting in place certain systems to manage risks. That has a much clearer sense of, I guess, constitutionality than saying I, you know, want social media to target lawful expression and take it down. That would not survive constitutional scrutiny. And I think the last point that I'll make now -- well, I guess two points -- one is a controversial point, which was recourse. You know, the groups that are impacted by this need access to some form of remediation. And sometimes they get it through social media and sometimes they don't. And even if content moderation is in place, it doesn't always work very well. But the idea that every complaint about content could somehow be, you know, pushed to some sort of recourse body, it's just not practical. It won't work. But victims need access to something. And so what that should look like -- should it just be for intimate image abuse, child sexual abuse images?
You know, what should they have access to and what's realistic is a key point to discuss. And the last is size. I think that what we saw with the convoy movement is that, you know, organizers and followers were using all kinds of different social media. Some of them major social media, what we'd say is quite entrenched, such as Twitter and Facebook and YouTube and TikTok, but also some smaller alternative social media like, I'd say, BitChute and Rumble. Of course, there were conversations about Zello and Telegram, et cetera. So the EU has looked at -- they've created risk management obligations just for very large platforms, and so that would be an obligation for companies to think through, how does my recommender system work, and is it actually having an impact that could, you know, in a sense undermine democracy? That sort of obligation would end up in Canada, if we put something in place, only targeting the YouTubes of the world or the Facebooks of the world, but similar obligations would not exist for some of the smaller platforms. And that's not going to work, because, as we see, most of, you know, the users of social media use all kinds of different social media and are using some of these smaller platforms to engage in these types of movements. So we have to think through, with a type of law like this, that, hey, targeting, you know, very large platforms is perhaps misguided, but how do we then deal with the smaller and medium-sized companies that perhaps do not have the resources in place to deal with some of these issues?


  3. Emily Laidlaw, Prof. (Law – University of Calgary)

    Yeah, thank you, Professor. So I'm going to give the definition that UNESCO uses for mis, dis and malinformation, but I do want to note upfront that there is quite a bit of debate about what these terms mean, in particular for malinformation. So disinformation is the intentional spread of false information. So think of a government-sponsored disinformation campaign. Misinformation is a bit different. It is where false information is intentionally spread, but the person spreading it believes it to be true. And I think a large swathe of what we see on social media, at least from individual users, is misinformation. So even if, say, a government launches a disinformation campaign, it eventually seeds to people who then believe it to be true and spread it from there. The last would be malinformation, and this is the one where there's not a lot of consensus on what it means. I rely on UNESCO, which defines it as information that's based on reality but that's distorted -- you know, it has that kernel of truth to it -- but it does include that everything-else bucket. And it can include, potentially, hate speech, harassment, trolling, doxing, where private information is shared publicly, and other forms of violent and extremist content. And you know, one of the questions I was asked to think about to open up this discussion is, what is this in law? What laws regulate mis and disinformation, and what is the tension there with the right to freedom of expression? So I was just talking about laws that, you know, are being explored to target social media, but the laws that target mis and disinformation would be targeting the individual users that are posting the particular information. And we do have a variety of laws that target false information, but we do not have any law in Canada that broadly addresses or targets mis and disinformation, at least in the way that we're contemplating here.
So we have a crime of spreading false news, but that was held to be unconstitutional in the case of Zundel. Other crimes that target false information would be crimes of hate propaganda, counselling terrorism, fraud, and, in civil law -- and Jonathan touched on a few of these -- defamation. Defamation is where, you know, someone sues another individual or entity for spreading false information that impacts their reputation. And perhaps also I would put in here the tort of false light, which has just been introduced or adopted in Ontario. So it's just sort of a hodgepodge mix of different laws that apply to some aspects of mis and disinformation. One of the issues is that a great swathe of this is what I call lawful but awful expression. It's hateful, but not hate speech; it's extremist content, but it is not counselling terrorism. So it's not unlawful in the traditional sense. And also, as we were just talking about with the role of social media, in pushing, you know, through advertising and recommender systems, it might be that lawful but hateful content ends up being pushed to users over and over. So in law, these are difficult areas to deal with. You know, under constitutional law, we have the right to freedom of expression. It is a broad right. It includes the right to seek, receive, and impart information and ideas. It includes the right to shock, offend, and disturb, and it also values false expression, in particular if it's information that's of public interest. So restrictions of it are supposed to be narrowly construed. And we face a hurdle in this area because what is truth; right? What is the perception of truth in one particular scenario?
And I've listed a few laws where courts have willingly, you know, addressed what would be false information, and consequences might flow from that, but much of what we're talking about on social media might be, you know, deeply held beliefs that are harmful, but it's difficult to pin down the truth or untruth of it. Also, much of the dis and misinformation that's spread -- when we can identify what's false, there's a lot more clarity to it, but some of it exploits what I would call the strategic ambiguity of it. They use humour, they use kind of the short, emotional memes, with visuals or short videos, that are known to be highly effective and impactful on users, and play on jokes and say, "Well, this is just a joke. This is just the grey area." This is really hard for law to address. And when disinformation has been criminalized in some countries, it has been used to target political opponents and dissidents, and so it's difficult to put in place these types of laws, and they generally don't comply with international human rights. So the last point I want to make here before we open it up is the other side of this. As much as I have mentioned the breadth and depth of the right to freedom of expression, I always think of the comment by the Supreme Court that freedom of expression is freedom governed by law. There are laws that constrain it, and rightfully so. And I think John made a key point that, you know, laws help the right to freedom of expression of particular groups, marginalized and racialized groups, that are impacted by what happens online. People are driven from participating online, they are dehumanized by these experiences online, and therefore they don't get to enjoy the right to freedom of expression. There is also a growing body of work in the area of the right to freedom of opinion, and that the recommender systems and the advertising pushed to users are in fact infringing on the right to form an opinion free from manipulation.
And this is something that is developing in international human rights and needs to be contended with here. I'd say that there are two things I want us to think about as we have a discussion here. One would be, what are the laws that apply to individuals, the ones that are posting the particular information and engaging in certain conduct that may or may not be lawful or falls into the category of the lawful but awful? And the other would be, what are the laws that then target social media and their responsibilities, whether it's through managing their advertising ecosystem, managing their recommender systems -- you know, algorithmic accountability -- or their systems of content moderation?


  4. Emily Laidlaw, Prof. (Law – University of Calgary)

    Yeah, thank you so much. And, Vivek, I'm just thinking a bit about what you had to say, and it brought home, I think, one of the key aspects of, you know, what we're struggling with right now with freedom of expression, which is, you know, social media, as Dax was saying, has opened up such huge opportunities for discourse, and for everybody. And so we almost need to break down what the problem area is that should be regulated through law, and what, sort of, we're okay with leaving alone; right? And some of this is this commitment to the messiness of freedom of expression, to be able to discover and figure out who you are and hear other ideas. The struggle that we're having now is that, because of the sheer volume and the kind of equitable access on social media, the opportunity to dehumanize and harm other individuals is made that much easier. So on a practical level, it's easier to cause harm, and it creates a regulatory issue. But in thinking through what the role of the law is here, we also have to think of what other things regulate. And so Twitter is a great example here, because at the moment, if we had a law in Canada, would it make a difference right now? Perhaps. In my ideal world, it would set the standards of what we expect for corporate responsibility here -- that Twitter has certain standards in place to deal with inauthentic behaviour, to address content moderation, access to remedy for individuals. So that is where I think a law could help. But it's not as though Twitter, as I think Elon Musk has discovered, is somehow operating in a regulation-free zone, because, what, the marketers don't want to have to advertise on that platform? Ah, well, then they're going to regulate in a particular way. Do you know who else doesn't want it? Users. So users leave the platforms when they don't want this particular space.
But also, others further down, for example, the digital storefronts, the app stores right now are imposing conditions on Twitter to continue to carry the Twitter app. So there's all kinds of different mechanisms that are often at play when we talk about what regulates expression and what regulates what we say and do on social media.


  5. Emily Laidlaw, Prof. (Law – University of Calgary)

    Yeah, thank you. I want to build on what Vivek is saying and maybe add a point about social media content moderation, because when we talk about violent extremism, I mean, we have particular laws that address, you know, counselling terrorism and hate propaganda, et cetera, but there isn't a law against extremism. And so we're often -- you know, we are turning to social media right now to set down, through content moderation, through self-regulation, what kind of expression they're willing to host or not. And so when we talk about some of the fringe platforms or social media, we're usually talking about social media that do not proactively monitor and action content, that just rely on legal definitions and only remove content that is unlawful. And so we're actually dependent on social media to proactively take some steps to address the forms of extremism that Vivek was talking about -- you know, sort of entrenching different views. So let me give an example of how difficult this can be right now. I mean, one example I gave is just the use of jokes, but the other is that often groups talk in code. So let's imagine a video that is posted on TikTok and someone is saying some just, you know, anti-government-type message, which in and of itself -- I mean, there's nothing wrong with having that view and wanting to express it -- but behind them are guns, for example, on the bed. And this is an example that's been given to me previously. What is social media supposed to do about that? Some of this is kind of the slow violence of just normalizing certain rhetoric and sometimes coded messages in the background, and social media are having to grapple right now with how to address this. This is lawful expression, but it might not be expression that they want to host in their space.
But if we're talking about the, you know, private spaces as becoming the new public sphere, which is something that Dax was talking about, then they have to take seriously freedom of expression, but they are setting their own limits to address what is lawful but extremist.


  6. Emily Laidlaw, Prof. (Law – University of Calgary)

    Thank you, Professor MacKay. You know, I’m sitting here listening to my colleagues, and there are just so many different angles to this. And I think one of the oft-repeated solutions, which, you know, causes some to roll their eyes, but which we all know to be true, is that it’s a multi-faceted solution. If we really want to target the problems of dis, mis and malinformation, we are looking at, you know, education, we’re looking at underlying social and economic factors. We’re looking at improving laws in the area of social media regulation, which we’ve talked about a lot today, but it also might be in areas such as funding of media. And of course, we have a Bill on the table examining that right now. I have been advocating strongly for a law to address the responsibilities of platforms and social media specifically, and it’s this, you know, duty to act responsibly. We need this law, but it’s not a solution to everything. It is -- it would be one piece of the pie. And I think one thing I want to emphasize with the few minutes I have here is that it’s not a perfect solution and it should not be a perfect solution. Working at its best, it’s really about kind of lifting the game, in a sense, to just work in the direction of a healthier ecosystem. My fear is the “be careful what you wish for” aspect of this, and I think this is something that Vivek and David have been warning about: the risks of over-regulation and kind of, you know, I guess too much intervention in some of these spaces. So it’s really a question of how do we make this healthier, how do we make sure that everyone is welcome to participate in these spaces and that social media are taking care to think through the impact of the design of their spaces, of the content moderation systems that they do or do not set in place, and the impact on users and on society.
Well, it’s not going to be a perfect solution to that; even if it is operating appropriately, there is an error rate, right? But the assessment needs to be, what, then, would be the bottom line of the expectation of these companies? One would be to have in place certain content moderation systems. One is algorithmic accountability. And I say that with some hesitation, and that would be to the extent that there is access to be able to review and hold them accountable for their algorithmic impact. And by accountability, I don’t mean that it’s outcome based. Rather, what I would say is, can they then explain and justify the approach that they are taking to work towards a healthier ecosystem? And I think I might leave it there and pass it on to my colleagues and just say that, you know, it’s time, I think, that Canada introduces some sort of regulation here to start addressing some of the underlying issues.


  7. Emily Laidlaw, Prof. (Law – University of Calgary)

    That's a great question, and there's no easy answer. I would say that any actor can contribute to misinformation and disinformation. So often, if we look at, say, a disinformation campaign, so one that's intentionally launched -- let's say it's state-based, so that would be the government, that would be political actors in power that have made that decision -- they will often then target key influencers because they have a large following, and it might be media, or it might be certain political actors, it might be other types of influencers, and then that spreads from there to individuals who consume it, and maybe believe it, and things go viral. So political actors absolutely can play a key role in spreading both dis and misinformation, and I would also say malinformation, in the way that they might label and perpetuate, whether it's stereotypes or other forms of hate; right? So they're like anybody else in that system; they just have power and influence.