Artificial Intelligence and the Future of Misinformation

The prevalence of misinformation and disinformation has become markedly more visible since the beginning of the COVID-19 pandemic, though both phenomena long predate it. While the two concepts are similar, the key difference between misinformation and disinformation is the element of intent.

Disinformation involves the intention to maliciously mislead by propagating false and often harmful information. Misinformation, on the other hand, involves the spread of false and possibly harmful information without malicious intent. Regardless of the distinction, false and harmful information can lead to very dangerous consequences if propagated without moderation.

As strides in Artificial Intelligence continue to emerge, misinformation has become increasingly difficult to identify, because AI can now generate images, videos and other forms of content that appear strikingly lifelike. Such fabrications are often referred to as deepfakes. A recent example of a deepfake is the viral image of the Pope wearing a puffer jacket.

The implications of the prevalence of deepfakes grow steeper as sophisticated AI tools become ever more accessible to the average person. A further concern is that 'commercial' AI tools are deliberately designed to be as easy to use as possible, requiring little to no technical training. This further lowers the barriers to their use for misinformation.

While AI advancements are undoubtedly important for future growth and development, it is key to ensure that the malicious use of AI is curbed. Some ways of ensuring this include:

A. Inclusion of distinct watermarks in AI-generated content – Traditional watermarks often contain details such as the type of device used (here, the AI tool), the date of creation, the username of the person generating the content, and other distinguishing details. Today, artefacts such as poor finger and hand placement also act as an unintentional watermark and are often a dead giveaway when one encounters AI-generated images. However, this tell fails where the generated content involves no human hands, or as AI technology improves. This brings us to the next point.

B. Inclusion of Non-Realistic Details – Closely related to the watermarks discussed above, AI engineers could consider embedding minor non-realistic details in AI-generated content. These would not disrupt a casual observer's overall enjoyment of the content, but would serve as a tell-tale sign of AI generation on closer examination.
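To make the watermarking idea in (A) concrete, here is a minimal, purely illustrative sketch of one well-known technique: hiding a short provenance tag in the least-significant bits of an image's pixel values, so it is invisible to a casual viewer but recoverable on inspection. The pixel values and the "AI" tag below are assumptions for demonstration; a real scheme would be far more robust.

```python
# Illustrative sketch only: embed a short ASCII provenance tag in the
# least-significant bits (LSBs) of 8-bit pixel values. Changing only the
# lowest bit alters each pixel value by at most 1, which is imperceptible.

def embed_watermark(pixels, tag):
    """Hide each bit of `tag` (ASCII) in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1
            for byte in tag.encode("ascii")
            for i in range(7, -1, -1)]          # most-significant bit first
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit        # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Recover `length` ASCII characters from the pixel LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

# Example: a tiny greyscale "image" as a flat list of 8-bit values.
pixels = [120, 121, 122, 123] * 8
marked = embed_watermark(pixels, "AI")
print(extract_watermark(marked, 2))  # -> AI
```

The same mechanism could also carry the metadata mentioned above (tool name, date, username), and point (B) is essentially the visual analogue of this: a deliberate, subtle irregularity that survives casual viewing but is detectable on closer examination.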

In conclusion, as noted by the World Economic Forum, AI development is important and strategic for future growth globally. Nonetheless, striking a fine balance between advancing AI and preventing its malicious use is key.
