DAY 02: 03 February 2024

Of Dangerous but Necessary Intersections: AI and Journalism


As Artificial Intelligence (AI) rapidly evolves, its impact on journalism sparks both fear and excitement. This session explored AI’s potential to fuel fake news and worsen societal divides, while also envisioning its possibilities for efficiency, creativity, and enhanced news gathering. For journalists in India, navigating the ethical minefield and preparing for the arrival of AI become paramount.

This session focussed on the fast-approaching world of AI and its role and limitations in journalism. The panellists were Anurag Mehra, professor of engineering at IIT Bombay, and Karthik Kalyanaraman, professor at the Myra School of Business in Bengaluru and an expert on the uses of AI. Sannuta Raghu, a journalist with Scroll, was the moderator.

The panellists discussed how AI offers journalism both highly useful, time-saving possibilities and enormous potential for danger and unethical use.

The session began with Anurag Mehra giving the audience a broad historical overview of Artificial Intelligence and its possibilities. He explained how AI models are essentially trained, on large datasets, to do simple things: they learn to associate many inputs with a particular output. An early example of this is how Cambridge Analytica was able to process massive amounts of data on Facebook ‘likes’ to identify which of the five basic traits of human psychology (the ‘Big Five’) a user exhibited, an exercise that would usually take intensive surveying. Generative AI also works in this manner and is trained on “what word follows the previous word.” While it is a brute-force technique, the result is a stupendous technology, he said.
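To make the next-word idea concrete, here is a minimal, purely illustrative sketch (not anything the panellists presented): a toy bigram model in Python that counts which word follows which in a tiny made-up corpus and predicts the most frequent follower. Real generative models use neural networks trained on vastly larger datasets, but the framing is the same.

```python
# A toy bigram model: count word transitions in a tiny corpus and
# predict the most likely next word. Illustrative only; the corpus
# and all names here are invented for this sketch.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training."""
    followers = transitions[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # -> 'cat' (ties broken by first occurrence)
print(predict_next("cat"))   # -> 'sat'
```

Scaled up from word counts to billions of learned parameters, this "predict the next word" objective is the brute-force technique Prof Mehra described.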

There are different kinds of biases to watch out for in training data, which is often coloured by people’s prejudices and beliefs and is therefore not objective. These biases can be algorithmic, cognitive, or coded, and can seep in unless training data is carefully regulated and audited. This new technology also throws up challenges around data ownership, plagiarism and, in particular, creativity, to which it poses an “existential threat”. There are ongoing debates and legal cases regarding the intellectual property of data used to train AI.

AI is also prone to misuse for politics, exploitation, and disinformation. And while current AI platforms are responding to these dangers and erecting necessary guardrails, these can be bypassed when the platforms move from local networks to the cloud. AI has the potential to disrupt political processes, and the more human-like these platforms become, the more likely they are to add to knowledge chaos and fuel disinformation, with dangerous consequences in which people are no longer able to tell lies from truth.

Shadow banning, deepfakes and the monopoly over online technologies held by a few Western companies all make for a challenging future environment for journalism, but one that requires journalists to be resilient, courageous and open to learning new technologies, so as to ensure they are not misused or used unethically.

AI will also have an impact on labour and labour negotiations, as was seen in the recent Hollywood writers’ strike. Drastic re-skilling and re-training are in the offing, along with the rise of the gig economy and effects on creative work, all pointing to a major overhaul of the economics of labour.

Prof Mehra concluded his talk by saying that it is crucial to rein in Big Tech as well as promote digital media literacy. As a society, we must regulate the use of AI based on risk assessment. The use of generative AI in journalism should be limited and clearly labelled.

Karthik Kalyanaraman spoke on policy, opportunities, and problems around the use of AI in journalism. Driven by the profit motive, organisations will tell you how great AI is, he said. The development of AI has happened in two phases, though this may not be immediately obvious to us: first came predictive/discriminative models, followed by generative ones.

Trained on predominantly Western datasets and built by only a handful of companies across the globe (Google, Apple, Microsoft, Facebook, Baidu, Tencent, Alibaba), AI tools may be prone to institutional bias and out of line with the democratic and cultural values of our country. Kalyanaraman emphasised the need for policies to protect individuals, nation-states, and societies. But we are far behind on these: the Digital Personal Data Protection Act, 2023, for instance, offers no safeguards right now against the government itself.

In journalism, AI can be deployed across the editorial process, from sourcing to production and distribution. AI can even answer questions based on large documents like reports, but building such tools needs significant resources, and small news organisations may not be able to compete or invest in this right away.
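To give a rough sense of what "answering questions from large documents" involves, here is a minimal sketch of just the retrieval step: split a report into passages and surface the one that best overlaps with the question. This is an invented toy, not any tool discussed in the session; production systems use embeddings and a language model to compose the actual answer.

```python
# Toy document question-answering retrieval: score each passage of a
# report by word overlap with the question and return the best match.
# The sample report text below is invented for illustration.
import re

def best_passage(document: str, question: str) -> str:
    """Return the passage sharing the most words with the question."""
    passages = [p.strip() for p in document.split("\n\n") if p.strip()]
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(passage: str) -> int:
        return len(q_words & set(re.findall(r"\w+", passage.lower())))

    return max(passages, key=overlap)

report = """The committee met in January to review spending.

Total expenditure on rural roads rose 12 per cent year on year.

Urban transit funding was held flat pending a new policy."""

print(best_passage(report, "How much did rural roads expenditure rise?"))
# -> "Total expenditure on rural roads rose 12 per cent year on year."
```

Even this crude version hints at why resources matter: doing it well over a real archive means indexing, ranking, and verification infrastructure that small newsrooms cannot always afford.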

But there are other ways in which AI is changing journalism. For instance, news is becoming entertainment, largely due to AI. And the same AI is being used to sell products, including news stories and political ideologies. AI can help you figure out who cares about the story you are writing, but it can also polarise people and push them to extremes.

AI has changed, and will continue to change, the unit of competition. Newsrooms no longer compete with other newsrooms; stories compete with other stories, with influencers, and with the notorious WhatsApp forwards. Get Ready With Me (GRWM) videos by social influencers, for example, show how social media is combining news and entertainment. This is breaking down the idea of the public space where news belongs. There is now immense pressure on newsrooms to convert news items into products, a “bag of chips” that can be sold to readers. In this milieu, we need to think about how we can hold on to the idea of what journalism can be.

Sannuta Raghu spoke about her experience leading Scroll’s news products team and its experiments with AI. She said these tools are immensely learnable and that Scroll uses open-source technologies. To start bringing such tools into the workflow, newsrooms need a strong team of an engineer, a product leader, and an editor, with guidance from a policy input group.

Currently, Scroll is experimenting with tools that can produce videos from the story at a given URL, query its own archive of trusted news stories and large government data sources like IndiaAI (https://indiaai.gov.in/) or the Parliament Library, and autocorrect copy based on its in-house style guide.
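As one illustration of what style-guide autocorrection might look like under the hood, here is a minimal sketch. The rules below are invented examples, not Scroll’s actual style guide, and a production tool would handle far more than simple substitutions.

```python
# Minimal style-guide autocorrect: apply a rulebook of disfavoured
# spellings mapped to preferred ones. The rules are invented examples,
# not any newsroom's real style guide.
import re

STYLE_RULES = {
    r"\bGovt\b": "government",
    r"\bemail\b": "e-mail",
    r"\bper cent\b": "percent",
}

def apply_style(text: str) -> str:
    """Apply each style rule to the copy, leaving other text untouched."""
    for pattern, preferred in STYLE_RULES.items():
        text = re.sub(pattern, preferred, text)
    return text

print(apply_style("The Govt said 40 per cent of funds were unspent."))
# -> "The government said 40 percent of funds were unspent."
```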

In the newsroom, they have decided against using AI to adapt the style of a particular writer, to create photo-realistic avatars of real people, or to recreate events. They have approved the use of AI for illustrations, summarising, brainstorming, search engine optimisation, and extraction and classification. They also approve the use of AI for rewriting core text that is non-original and non-reported.

Before adopting AI in the newsroom, it is important to audit intentions against resources: keeping a human in the loop is crucial, even if this increases the workload. It is also important to consider diverse perspectives on AI in order to avoid confirmation bias.

Overall, the session concluded that AI could provide an opportunity for journalism to go back to being a service rather than just a producer of content.