Using AI Thoughtlessly is Risky Business

Whether we like to admit it or not, generative AI models are becoming part of our professional lives. Workers across various industries report that they have started using generative AI models for their daily tasks. The annual McKinsey Global Survey revealed that, among the surveyed companies, between 14 and 35 percent were regularly using generative AI models for their work tasks (SOURCE).

The reasons why so many people are using AI are fairly easy to guess. Experiments designed by academics at Harvard Business School, Warwick Business School, MIT Sloan, and The Wharton School concluded that using generative models like ChatGPT can increase a worker’s performance by up to 40 percent (SOURCE).

Increasing productivity is a goal that businesses and organisations constantly work towards. Generative AI tools are simply the latest strategy that companies are adopting en masse, because the benefits for their productivity are so plain to see.

But while the benefits can seem obvious to these organisations, the serious risks of thoughtlessly using generative AI models require more of our attention.

The Benefits and Risks AI Models Create

In the annual McKinsey Survey, respondents were asked a series of questions about their companies’ preparedness for the widespread adoption of generative AI models. Just 21 percent of respondents reporting AI adoption said that their organisation had policies governing the use of AI (SOURCE).

They were also asked about the biggest risks they could identify with generative AI models.

Respondents cited inaccuracy more frequently than both cybersecurity and regulatory compliance, which were the most commonly reported risks of generative AI models in previous surveys (SOURCE).

Inaccuracy in generated content is a clear precursor to misinformation as far as generative models are concerned. What’s even more troubling is that the amount of misinformation generative AI models produce will only increase as more organisations adopt them.

But so what if it produces some misinformation? It can’t do that much harm, can it?

To answer this question, we should look at a few scenarios that showcase the risks AI misinformation poses.

Types of Risk Scenarios That Generative AIs Can Create:

  • Defamation: This risk is especially relevant if you work at a media company, one of the industries adopting generative AI the most. Media companies often produce content about specific entities or individuals. If a generative AI tool produces content about those entities that turns out to be inaccurate and damaging, it leaves the company open to defamation lawsuits.

  • Industry-specific Regulations: In the financial sector, businesses handling publicly reported factual information, such as market data and stock prices, must adhere to regulations that maintain accuracy and transparency. Securities regulators, for instance, enforce strict rules against publishing misinformation that could impact market integrity. A company found to be non-compliant with these regulations may face severe penalties.

  • Data Protection Laws: Consider a scenario where a generative AI model handles sensitive personal information and is therefore subject to data protection laws. If misinformation leads to breaches or mishandling of such data, businesses may face legal repercussions, including significant fines imposed by data protection authorities.

  • Breach of Contract: Envision a situation where misinformation generated by an AI leads to a breach of contractual obligations. A company may face legal consequences if the information it provides, produced by AI systems, does not align with its contractual commitments. This can result in fines, legal action, and the loss of high-value customers.

  • Navigating International Regulations: A business operating globally may struggle to adhere to international regulations while deploying AI. Operating in multiple jurisdictions introduces a series of complex scenarios, because different countries may have different laws governing AI and the repercussions of misinformation. If a company is found to be non-compliant with international regulations, the consequences will be severe, both financially and reputationally.

There are many more risk scenarios that generative AI models can create, but the ones above show how inaccuracy in the generative AI models being used can be detrimental to organisations, no matter what industry they operate in.

Use Factiverse to Reduce Risk

The arguments for implementing policies and procedures to mitigate the risks of using AI are very clear. Companies and organisations need to take a more proactive approach to how they use AI in their work.

To make this easier for them, Factiverse offers some of the best tools available to help reduce risk in business practices.

Factiverse AI Editor

If you are using a tool other than ChatGPT, don’t worry. Factiverse offers the AI Editor, a simple tool that automates fact-checking for text produced by any generative AI model.

To use the AI Editor, all you have to do is copy and paste the content generated by your model into the Factiverse AI Editor. It will then point out any misinformation your model may have generated, along with sources that provide correct information. The AI Editor searches multiple search engines and databases simultaneously, including Google, Bing, You.com, and Semantic Scholar (200 million scientific articles), to find sources relevant to whatever your generative AI model produced.
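For teams that would rather script this review step than paste text in by hand, the same workflow can be automated. The sketch below is purely illustrative: the endpoint URL, request fields, and response shape are assumptions made for the sake of example, not Factiverse’s documented API, so consult their developer documentation for the real interface.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical fact-checking endpoint -- NOT Factiverse's documented API.
FACT_CHECK_URL = "https://api.example-factcheck.com/v1/claims"

def check_text(text: str, api_key: str) -> list[dict]:
    """Submit AI-generated text and return flagged claims with sources.

    Assumed response shape (illustrative only):
    {"claims": [{"claim": "...", "verdict": "...", "sources": ["..."]}]}
    """
    resp = requests.post(
        FACT_CHECK_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

if __name__ == "__main__":
    draft = "The Eiffel Tower was completed in 1899."  # deliberately wrong: 1889
    for claim in check_text(draft, api_key="YOUR_API_KEY"):
        print(claim["claim"], "->", claim.get("verdict"), claim.get("sources"))
```

A script like this only surfaces potential misinformation and the sources to check it against; a human reviewer still makes the final call before anything is published.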

Factiverse GPT

If your organisation relies heavily on ChatGPT, for example, Factiverse GPT is also available to help identify misinformation. Factiverse GPT detects factual statements and cross-references them with the most credible real-time sources.

It's available in over 40 languages and allows for seamless fact-checking as ChatGPT performs tasks for you.

Here is how the two AI tools answer the question “Who won the Super Bowl in 2024?”

GPT-4 asked who won the Super Bowl in 2024

Factiverse GPT asked who won the Super Bowl in 2024

Here you can see the difference between the two tools: the level of source detail in Factiverse GPT’s answer compared to GPT-4’s is like night and day.

It is a clear demonstration of how enhancing GPT with Factiverse can mitigate the risks your organisation faces.

References:

  • McKinsey & Company - The state of AI in 2023 (Link)

  • MIT Sloan - How Generative AI Can Boost Highly Skilled Workers’ Productivity (Link)
