Members of the expert panel, including Bernie Farber of the Canadian Anti-Hate Network and Lianna McDonald of the Canadian Centre for Child Protection, advised that the legislation should require technology giants to tackle the spread of fake news and videos.
Some have suggested that Canada should mirror the European Union’s Digital Services Act, which allows stronger action against misinformation in times of crisis – for example during elections, international conflicts and public health emergencies.
They said the EU measure was a response to Russia’s attempts to spread false claims to justify its invasion of Ukraine.
Public Safety Minister Marco Mendicino said in an interview that technology is now so sophisticated that some fake images and content are “almost indistinguishable” from genuine material, making it very difficult for people to tell the difference.
He said a “government-wide” approach would be needed to address the spread of misinformation in Canada.
“We are at a critical juncture in our public discourse,” he said. “We are seeing a growing volume of misinformation and disinformation amplified by extremist ideology.”
An academic analysis of more than six million tweets and retweets – and their origins – found that Russia is targeting Canada in an effort to influence public opinion here.  A study published this month by the University of Calgary School of Public Policy found that huge numbers of tweets and retweets about the war in Ukraine originated in Russia and China, with even more pro-Russian tweets originating in the United States.
Ministers have announced their intention to introduce a bill on online harms that addresses hate speech – including racist slurs, antisemitism and abuse directed at members of the LGBTQ community.
A previous online hate bill, introduced shortly before last year’s federal election, did not become law.
The panel, which also includes law and policy professors from across the country, said a bill should not only tackle online hate, including child abuse, but also address fake and misleading information on the internet.  This could include coordinated campaigns “used to create, disseminate and amplify disinformation”, including the use of bots, botnets, inauthentic accounts and “deepfakes”.
“Deepfakes” are fake videos or photos produced with deep-learning technology, which generates highly realistic-looking synthetic images.
Some experts on the panel said the bill should also address fake ads, misleading political communications and content that contributes to an “unrealistic body image”.
The panel said platforms should have a “duty” to deal with harmful content online, including misinformation, by conducting risk assessments of content that could cause significant physical or psychological harm to individuals.
Some experts on the panel warned that measures to tackle misinformation must be carefully worded so they cannot be abused by governments to justify censorship of journalism or criticism.
Their warning was echoed by Emmett Macfarlane, a constitutional expert at the University of Waterloo.
“There are always valid concerns about the potential for overreach and unintended consequences with laws of this kind,” he said. “Our existing laws on criminal hate speech and obscenity have resulted in material being unjustly restricted or blocked at the border, for example.”
The group of 12 experts, which has just completed its work, said that misinformation and false posts can pose a heightened risk to children.
They recommended that the bill impose strict requirements on social media companies and other platforms to remove content that depicts or promotes child abuse and exploitation.
Some members criticized the platforms for failing to remove this content immediately, saying “the current performance of online services in removing child sexual abuse material is unacceptably poor.”
The panel also criticized platforms more generally for reporting the percentage of harmful content they remove, but not how long it takes them to remove it.
Heritage Minister Pablo Rodriguez thanked the panel for concluding its discussions last week, saying its advice “is needed to create a legislative and regulatory framework to address this complex issue and to build a safer online space that protects all Canadians”.
“Freedom of expression is at the heart of everything we do, and Canadians should be able to express themselves freely and openly without fear of harm online. Our government is committed to taking the time needed to get this right,” he said.
The minister also thanked the Citizens’ Assembly, a group of 45 Canadians examining the impact of digital technology on democracy, for its advice.  At a conference last week, the assembly likewise stressed the importance of tackling the spread of misinformation online, saying it can be used to manipulate public opinion.