Jamie Sarkonak: Trudeau’s digital bureaucracy will trample the flow of ideas

When do facts that reflect poorly on vulnerable groups become 'harmful content'? The online harms bill isn't clear

Bureaucracies usually have a purpose. The public health system cares for the physical bodies of citizens; the justice system (ideally) keeps the dangerous from the normal; the immigration system controls entry and exit.

Soon, you may have to add to the list a “digital safety” bureaucracy, designed to regulate feelings and ideas, and the places where these things are expressed. This is what the forthcoming online harms act, tabled Monday in the Liberals’ Bill C-63, will do. Individuals who value their ability to discuss controversy online should be wary — as should any major social media company that happens to operate in Canada.

The law would put “harmful content” within the scope of government regulation by way of “arm’s-length” agencies. Targeted content would include media depicting sexual abuse (understandably so), as well as any content that “expresses detestation or vilification” of a group considered vulnerable under human rights legislation and that, given the context of the communication, is likely to foment such feelings (less understandably so). Identity-based protections are inherently more subjective, and they aren’t afforded equally to everyone: human rights law tends not to protect white people, for example.

The bill states that expressing disdain and dislike — or discrediting, humiliating, hurting or offending — is not necessarily hateful for the purposes of online regulation. Critically, it’s silent on what does make speech cross over into unacceptable territory. There’s no hard threshold.

At what point does discussion of the fact that most gender-diverse sex offenders in federal prison are transwomen (male) cross over into “harmful content” territory? Or the fact that Black people make up only three per cent of the population, but represent six per cent of all accused in criminal courts? Or the fact that Eritreans in Canada, half of whom arrived after 2016, and who come from a country known for not cooperating with the deportation process, are increasingly rioting in response to politics back home?

Regardless, the promotion of actual hate propaganda and the incitement of genocide are already crimes in Canada, so the very worst speech is already covered by existing law and enforceable by police. If the Liberals wanted better work done on these fronts, they could have simply raised police funding and staffed the courts with more judges, as manpower is a primary constraint in administering justice.

Instead of maintaining the systems that already exist, the online harms law would add proactive measures in the form of a new bureaucracy to ensure that everything from genocide advocacy to the insulting recitation of upsetting facts doesn’t get out of hand. These would work in tandem with reactive measures: a new standalone “hate crime” offence would be enforceable under criminal law, and the Canadian Human Rights Commission would be empowered to adjudicate cases of rights-violating content online.

The proactive regulatory body consists of three parts. First, a “Digital Safety Ombudsperson” would investigate “systemic issues related to online safety, support social media users and advocate for the public interest.”

Second, a five-person “Digital Safety Commission” would advance the “reduction of harms caused to persons in Canada as a result of harmful content online.” This would involve enforcing and administering the online harms law and developing “online safety” standards.

Finally, a “Digital Safety Office” would provide support to the other two.

The purpose of proactive regulation would be to ensure compliance among social media giants, rather than individual internet users. This is similar to how the Online Streaming Act (Bill C-11) regulates streaming platforms and therefore indirectly regulates platform users.

Under Bill C-63, many of the “harm reduction” measures to be mandated by law are already features of major websites: users must be able to block other users and report content for removal, and websites must have a person on staff tasked with managing user complaints. Corporate social media already self-regulate “harm” to an extent, and executives are often fine with cracking down on potentially harmful speech at the expense of freedom. Reddit has banned gender-critical communities for promoting “hate” (the sex binary), while the livestreaming platform Twitch once suspended socialist commentator and mansion owner Hasan Piker for using the term “cracker” for white people.

However, if the online harms law passes, social media would also be legally required to uphold a “duty to act responsibly” when it comes to managing content, which includes taking measures to “mitigate the risk that users of the service will be exposed to harmful content.” The commission could also order specific measures by way of regulation.

It’s doubtful that the regulator will make regulations requiring the takedown of crime stats that reflect poorly on a vulnerable group. Facts are facts. But in the interest of compliance and risk mitigation, platforms might take such content down anyway; a chilling effect is likely.

Notably, Bill C-63 would require every site of sufficient size (to be determined, but you can safely assume this will include major players like Meta, X and Reddit) to provide a digital safety plan to the regulator, outlining exactly how they plan to meet every statutory requirement for mitigating harm. Reports would need to include data on the amount of “harmful content” on the site, data on content removal and so on. The plans would be made public, while the data used to prepare them would be made available to approved educators, advocates and researchers upon request.

Data is valuable. Google, for example, just paid $60 million to access Reddit’s data for the purpose of AI training. It’s certainly not something that social media giants will want to hand to Canadian authorities for disbursement to government-approved advocacy groups. Between the data hand-over and the new wave of compliance reporting, it’s possible that some companies will want to withdraw from Canada entirely.

The proactive digital regulation machine wouldn’t skewer individuals who criticize, say, immigration online (though, with new powers to do so, the Canadian Human Rights Commission might). But the fact that it exists, and its role in enforcing the “duty to act responsibly” to which social media would be bound, still stands to stifle free expression and muffle the flow of inconvenient ideas. That’s a problem.

National Post
