"Angane Pavanai Shavamayi. Enthokke Bahalamarunnu, Malappuram kathi, machine gun, bomb, olakkede moodu." [So Pavanai is now dead. What all he had claimed. Malappuram knife, machine gun, bomb ...]

Nothing depicts the crash of hyped-up expectations better than this classic dialogue from the Malayalam movie Nadodikattu.

You can’t blame voters in Kerala if they feel the same way about the warnings of havoc that artificial intelligence was going to create during the elections.

Deepfakes, AI-generated voice and video clips that show people in compromising positions, manipulated versions of speeches that spread hatred and disinformation, groundless allegations packaged as facts... the list of possible manipulations was long, according to the experts.

But now that voting is over in Kerala, one can say with confidence that none of these dreaded AI weapons surfaced in our electoral battles. If they did, no one noticed.

This is where technology experts go wrong about India. In other societies, where order and decorum are maintained and political leaders play by the rules, AI can be a potent weapon to malign people by fabricating things they never did or said.

For instance, in such places, AI can be used to generate fake footage of a leader indulging in naughty extracurricular activities and propagate it to show that the person exploited a large number of women using his power and position.

More damaging would be to use AI to create a fake video of a leader in power making utterances targeting people of a particular religion or ethnicity.

In the hands of a political Chanakya, AI could be a tool to create a fake narrative about some prominent leader holding secret talks with their declared enemy about jumping ship.

Even a political novice could make a video or audio clip of a contestant alleging, without proof, that his rival is offering cash to buy votes.

These kinds of AI manipulations would be scandalous in other countries, but in India, they would come a cropper, simply because our leaders are much better at generating such content naturally.

Large language models like ChatGPT can go on bragging about their ability to hallucinate, but they have no hope of creating the kind of political dope our leaders generate.

This is why AI platforms with billions of parameters are no match for our politicians, whose ways are beyond the comprehension of any algorithm.

The basic difference between large language models like ChatGPT and our political spectrum lies in the kind of data they use.

While technology companies like OpenAI, Google, and Meta train their models on verifiable data, in India we have no such qualms.

First, we come up with claims that grab attention and then back them with what sounds like facts. Questions about their veracity are dismissed as the ignorance of a brainwashed idiot, and if you persist, you become a sworn enemy.

Take the case of our health sector. We have companies that advertised products as life-saving despite there being no scientific evidence for such claims.

Those who questioned the company's claims were brushed aside as doubters of the wisdom of our forefathers and as pawns in big pharma's conspiracy to make millions from the market here.

By the time the company admitted to the Supreme Court that its claims were untrue and issued a half-hearted apology, millions had swallowed its claims, and its products.

Hallucinations are seen as a problem in other societies. But in India, all you have to do is trek up some hills in the north, and you will get a whiff of our deep-rooted association with hallucinogens from time immemorial.

So we are used to illogical ramblings, whether they come from a political devotee or a popular godman. Fans adore them even when they come out with statements on the “physics and chemistry of biology”. For the devotees, it all makes sense, but AI platforms will struggle to understand it.

While a lot of the blame for disinformation can be pinned on social media, we in India were well seasoned in misinformation for decades before the internet arrived.

Take the case of our beloved health drinks like Horlicks and Bournvita. Generations have gulped down their claims of being health drinks, and it was the first thing your grandma would reach for when you fell ill.

Now, the multinational companies have quietly dropped the word “health” from their claims, making them just another drink like Pepsi or Coca-Cola. Still, it will take some time to convince my grandma that her beloved Horlicks is not a “health drink”.

The same goes for political devotees. Describing the banning of a few apps as a surgical strike against a rival nation can pump adrenaline into devotees who are on a permanent high from inhaling the smoke signals their leaders send out regularly.

AI platforms have made a great leap over the last few years by training their models on trillions of words of data. But they are no match for our society.

Our leaders and business houses have been training the 1.4 billion-strong population with hallucinatory data for decades. No AI can ever hope to match that.

No wonder the threat of AI misuse fell flat in our political arena, just like the Pavanai character in the Nadodikattu movie.
