In March 2023, Elon Musk and more than a thousand other signatories called for an immediate pause on training more powerful AI systems in an open letter. OpenAI CEO Sam Altman responded that the letter lacked technical nuance, yet only days later he urged U.S. federal lawmakers to regulate AI to prevent its abuse. Why are tech leaders suddenly so concerned about the dangers of AI? And where exactly do the risks lie? Let's unravel the dark side of AI in this article.
Why Is AI So Dangerous? | A Bigger Threat Than Global Warming?
So, how is AI harmful? In the wrong hands, AI can accelerate misinformation, deepen our dependence on it, and pose an existential threat to humanity. Let's explore how it does each of these.
Spread of Misinformation
Brace yourself, because this problem is here to stay. Generative AI and chatbots can flood the internet with false content. One study even found that people are less likely to spot a false tweet written by AI than one written by a human. So what fans the misinformation? Let's see!
Incorrect Answers

OpenAI's chatbot ChatGPT, launched in late 2022 on the GPT-3.5 large language model, remains popular among users today. When you enter a prompt (an instruction), you usually receive a detailed response. While it is a valuable learning tool, the answers can be incorrect.
Research from Purdue University revealed that 52% of ChatGPT's answers to programming questions were inaccurate, yet nearly 40% of users still preferred them for their easy-to-understand writing style.
AI Bias

AI bias occurs when an algorithm produces systematically prejudiced results. Algorithms trained on biased human data can inherit that discrimination and skew discourse on social media platforms.
In December 2022, the Twitter Files released by Elon Musk alleged that Twitter (now X) discriminated against conservatives and shadow-banned their posts. A shadow-ban hides a user's posts and tweets without notifying them.
Influencing Political Narratives
The third area where AI spreads misinformation is politics. Biased AI systems and deepfakes can be used to damage an opponent's reputation, while AI-generated images, fake respondents, or skewed polls can power favorable social media campaigns for political parties, influencing the political narrative.
Over-reliance on AI
One can almost feel our rising dependence on AI. The more we rely on it, the more our social skills diminish. Here's what that reliance can do.
Reduced Human Touch
Online conversations and virtual interactions sustained us during the COVID-19 lockdowns, but we still rely primarily on social media to communicate. AI-powered chatbots can hold personalized conversations, yet they are unreliable and can deepen social isolation. Earlier this year, a Belgian man died by suicide after prolonged conversations with an AI chatbot.
Restrained Productivity and Decision-Making Ability
While AI often boosts productivity, that isn't true in every case. In a recent survey, 50% of respondents said AI hampers creativity and productivity in the workplace.
Similarly, depending excessively on AI can erode your capacity for creative analysis, critical thinking, and independent decision-making, dulling your reasoning and cognitive abilities.
Existential Risk To Humanity
The existential risk to humanity is real, but it is less dramatic than it sounds. AI is unlikely to kill you in a literal sense; rather, you gradually lose control over your life. You are more likely to feel it in daily life through job displacement and serious security threats.
Replacing Human Jobs
A few months ago, an Indian CEO replaced 90% of his company's customer support team with a chatbot. Workplace automation and mass layoffs go hand in hand. Because AI technologies tend to replace low-skilled jobs first, they can widen the income gap: big corporations benefit from AI-driven automation while others do not, concentrating power and deepening economic inequality in society.
Security Threats

These take the form of AI terrorism, malicious hacking, relentless cyberattacks, a global AI arms race, and more, and anyone can become a victim. Lethal Autonomous Weapons (LAWs) are AI-driven weapon systems that can select targets without human intervention and pose a considerable risk to humanity.
FAQs | Why is AI so Dangerous?
Now, let's look at some of the most frequently asked questions.
Is AI safe or dangerous?
By now, you know why AI can be dangerous to humans, though each individual's experience with it varies. Despite the potential risks, you can mitigate them and use AI safely. AI technologies genuinely improve everyday life, so use them to your advantage.
Will AI destroy the world?
As you saw earlier, an existential threat does exist, but AI is a long way from destroying the world. People already acknowledge the risks of AI, and collectively we can reduce them. Questions like "Is AI more dangerous than nukes?" or "Will AI kill humans?" demand broader introspection and deeper study.
What are the 12 Risks of Artificial Intelligence?
The twelve commonly cited risks of AI include job losses, online manipulation, privacy violations, AI bias, economic inequality, lethal autonomous weapons, reduced human agency, incorrect answers, weakened decision-making, and more.
Having traversed the three significant threats posed by AI, know that they can be limited too. Experts point out that human oversight of AI tools like ChatGPT is essential to catch inaccuracies. And while AI can spread disinformation, it can also be used to fight it: tech giants like Google, Meta, and Microsoft have introduced measures to detect and filter misinformation.
Implementing AI regulations and monitoring AI systems closely will restrict their misuse. Besides, the benefits of AI in revolutionizing learning, education, healthcare, governance, and business are well known. AI has its pros and cons, and it is up to you which side you employ.
Drop your questions in the comments below, and we will reply soon.