Will the misinformation bill impede freedom of speech?

Anne Twomey, University of Sydney

The federal government’s proposed legislation on misinformation and disinformation has passed the House of Representatives, but faces a rocky time in the Senate.

Opponents have dubbed it the MAD Bill, and it has certainly made some of them mad. Ironically, there is a great deal of misinformation circulating about the bill itself.

Some believe it gives the Commonwealth government power to censor them and prosecute them for what they say – but it does not.

What does the bill do?

The bill is directed at digital platforms such as Google, Facebook, Instagram, X and TikTok. It requires them to be transparent, by publishing their policies, risk assessments and complaints mechanisms for dealing with ‘misinformation’ (which is false, misleading or deceptive content) and ‘disinformation’ (which is misinformation that is intended to deceive or involves ‘inauthentic behaviour’ by bots).

It also requires the platforms to keep records and provide information to the Australian Communications and Media Authority (ACMA) about how they deal with misinformation.

The digital platforms already take action to remove content, demote it so it is less visible, and place warning labels on content that is disputed. But exactly how they do this, and whether their actions are sufficient, remain contentious.

Outsourcing responsibility to digital platforms

The controversial aspect of the government’s bill is its outsourcing of the primary responsibility for dealing with misinformation to the digital platforms. It requires them to enter into industry codes to address the spread of misinformation and disinformation. Once ACMA has approved a code, it becomes binding on the industry and is backed by hefty fines for any failure to comply.

If there is a failure to develop a code, or ACMA decides the code is deficient, or if there are “exceptional and urgent circumstances”, ACMA can impose its own ‘standards’ on the digital platforms. These are also backed by hefty fines for failure to comply.

It is unclear what these standards can address. The bill says they can deal with “matters relating to the operation of digital communications platforms”, which could mean anything.

But the bill also says ACMA can only apply a standard if it considers it is necessary to provide adequate protection for the Australian community from serious harm caused or contributed to by misinformation or disinformation on the platforms.

ACMA cannot use its standards to order the platforms to remove content or boot off end-users, unless the content involves inauthentic behaviour by bots.

What is ‘misinformation’?

It is when we get to the definition of ‘misinformation’ that things get sticky.

Misinformation is defined as content that contains information that is reasonably verifiable as false, misleading or deceptive. It is only misinformation if the content is provided on a digital service to end-users in Australia, and is reasonably likely to cause or contribute to serious harm.

The types of serious harm are set out in the bill. These include harm to electoral and referendum processes, harm to public health, the vilification of particular groups, and imminent harm to the economy or critical infrastructure.

If it was just confined to matters of fact that could be verified as false – for example, scam financial advertisements showing false celebrity endorsements, or manipulated photos – there would be less concern. But the explanatory memorandum to the bill says it also covers opinions, claims, commentary and invective.

How do you determine that opinions are false? What happens to claims that are contestable? How does a digital platform, based in the United States, make assessments about whether a claim relating to a referendum in Australia is false or misleading and is reasonably likely to cause serious harm to the referendum process in Australia?

Fact-checking

The explanatory memorandum says information can be verified as false by a third-party fact-checker. Digital platforms already use Australian fact-checking services for this purpose. Fact-checkers rely on experts to help them make their assessments. But they are only as good as the experts who happen to respond to their requests.

When dealing with contestable matters, there will often be experts with different views. Which experts are chosen can make a big difference to the outcome of the fact-checking report. And even where there is an expert consensus that something is wrong, that consensus does not always turn out to be correct.

If the result of fact-checking is that a note is added to a post stating it is contested, and giving readers other information or references to more authoritative sources, that is fine. It ensures readers are aware the post may not be accurate, and empowers them to become better informed and make their own assessment.

But if fact-checking leads to a contestable claim being declared ‘misinformation’, and posts containing that claim are then removed from digital platforms, that is far more concerning.

While the ‘serious harm’ qualification should mean most contestable claims are not treated as misinformation, that will not always be so. The explanatory memorandum regards swaying voter behaviour during an election so that “the outcome of an electoral process can no longer be said to represent the free will of the electorate” as falling within serious harm. So contestable claims about political policies could end up being classified as misinformation, if fact-checked as false and if they were likely to influence voters.

There are exclusions from the scope of ‘misinformation’ for professional news content, parody and satire, and the “reasonable dissemination of content for any academic, artistic, scientific or religious purpose”. But the chances of a digital platform’s algorithm being able to apply such fine distinctions seem pretty low. The more likely outcome would be over-censorship by platforms, dumping anything on a controversial subject in the ‘misinformation’ basket to avoid trouble.

The Trump wild card

But the last word on this may come from Donald Trump. He has announced that, once inaugurated, he will ask Congress to enact legislation curtailing the power of the digital platforms to restrict lawful speech. They would be prohibited from taking down content other than unlawful content, such as child exploitation material or the promotion of terrorism.

How effective the Australian bill, if passed, would be in the face of such US legislation is an open question, and one that takes this issue to a new dimension of difficulty.

Anne Twomey, Professor Emerita in Constitutional Law, University of Sydney

This article is republished from The Conversation under a Creative Commons licence. Read the original article.


15 COMMENTS

  1. So, just like China and Russia, ‘publishing’ anything the Government does not agree with is misinformation and can land you in prison! Already we cannot have free and open online discussions of race, religion, sexual orientation, refugees, immigration policy, judicial outcomes or motivations or some more off-centre views on politics. Instead of this, we urgently need a constitutional bill of rights similar to the US one. Even that seems to be weakening however.

  2. They are even moving to restrict internet access for under-16s, interfering with parental prerogative. It will be a creeping restriction, interpreted more and more widely over time. Our public discussion is already highly restricted, due in part to an activist judiciary who read more into laws than the black letter.

  3. That may depend on whether or not you consider that to be a false claim. Millions of us firmly believe that He does exist, because of our personal experience of Him.
    You need to be careful claiming something (or someone) doesn’t exist, just because you have not yet experienced it, or seen it. I have never seen the Eiffel Tower, but I believe it exists, because I believe others’ experiences of it.

  4. Such legislation is long overdue because unfortunately people with hidden agendas have been using social media to disseminate false and misleading information which fools a significant portion of the population into forming opinions based on falsehoods.
    Also, as a parent, I know that I and many other parents have had a gutful of our kids’ minds being taken over and manipulated by online misinformation that they have become addicted to. It’s reached the stage where kids believe their parents know nothing and the online influencers know it all. And you cannot take it away from them, because things at home can turn very nasty, including violence, if you try. So we are very happy to see the government try to help.

    • Do you mean politicians? They are a major class of people with hidden agendas who have been using social media to disseminate false and misleading information that fools a significant portion of the population into forming opinions based on falsehoods.

    • Parents own the phones, they own the sim/esim, and they pay the bill for the connection. Really, parents need to be firm from the day they let their child have a phone. Unfortunately this has to start early, but most parents seem to want to be friends with their children instead of parenting them. My children now see why the word NO was used at home and when out. Phones and laptops need to be taken away at a certain time and checked by parents, and it’s no good saying you’re too tired or too busy. Why did you have children if you are not going to parent?

  5. It’s very cunning of the government to aim any penalties at those running the digital platforms, and not at those posting comments. Facebook, X, and the others, would presumably be very vigilant about what they allow to be posted, to avoid the massive penalties, and so finish up doing the government’s dirty work for them. Sneaky!
    And obviously the prime targets would be anything the government of the day disagrees with, regardless of its accuracy, or otherwise. Note that the government itself, and “recognised” news services, would be exempt from the legislation. One rule for the ruling elite and their news outlets, another for the rest of us.
    What gives a government department (ACMA, in this case) the right to determine what is, or is not, true?
    The best way to combat misinformation and disinformation is not to vigorously shut down anyone you disagree with, but to allow free and open debate in the public forum.

    • And that is exactly how we end up with a Donald Trump, thank you very much. Abraham Lincoln made the point that all of the people can be fooled some of the time, and some of the people all of the time, and we seem content to ignore history.

      We are no longer represented by the ‘majority’, by which I mean the people who have some education, have travelled to other countries, and have been brought up to think of others first (courtesy, or “wokeness” if you will) and to think for themselves (be curious and question everything).

      It seems like a great many people in this world would rather just sit back and be spoon-fed the “facts”, and that is where civilisation will end. Watch how Trump’s people try – again – to dismantle the education system in the US (which, frankly, has been spiralling for decades – no-one seems capable of conjugating the verb “go”, everyone’s saying “had went”, for goodness sake). Those of us who just accept all that is wrong with the world are part of the problem, not the solution.
