Humankind's Innocent-Looking Danger… By Dr. Rachhpal Sahota,
Cincinnati, USA, January 4, 2022: As a child, I learned about the plague and how quickly it could wipe out village after village. Then I learned about atomic and hydrogen bombs and how the cities of Hiroshima and Nagasaki were destroyed in a matter of moments. In high school, I learned about the demise of the Indus Valley and other civilizations. Slowly, I realized that catastrophes more potent than the plague could easily wipe out humanity.
But these calamities, natural or man-made, always evoked images of devastation and of the people who had to live through them.
But what if the disaster doesn't even look like a disaster? What if it is introduced to you as a friend? What if it destroys humanity, and humans don't even realize they are on their way to destruction?
I am talking about Artificial Intelligence, or AI. Today, we are all surrounded by AI. Since the line between AI and human intelligence is so fuzzy, it is difficult for an ordinary person to realize when AI started to invade our lives.
At the end of the 20th century, when personal computers became common and Microsoft launched Windows, the computer began to appear intelligent because it could tell whether a printer, scanner, or fax machine was hooked up to it, and what kind. But that was not AI; it was simple human logic.
Similarly, when IBM's Deep Blue beat Garry Kasparov on May 11, 1997, it might have looked like AI; still, it was a demonstration of human logic, an example of how humans could harness the enormous calculating and data-handling power of computers. In 2016, Google's AlphaGo beat Lee Sedol, one of the world's top Go players; it was still not AI because AlphaGo could not be trained to predict the opponent's next move.
What is AI, then?
In their book 'Artificial Intelligence: A Modern Approach,' Stuart Russell and Peter Norvig define AI as "the study of agents that receive percepts from the environment and perform actions." In other words, a machine is said to have AI if it can understand its environment and act accordingly. 'Act accordingly' is the key phrase: when a machine can act accordingly, the elements of human involvement begin to disappear.
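To make that definition concrete, here is a minimal sketch in Python of an 'agent' in Russell and Norvig's sense: it receives a percept from the environment (a room-temperature reading) and chooses an action accordingly. The thermostat scenario, the function name, and the thresholds are illustrative assumptions, not taken from any real product.

# A toy agent in the Russell-Norvig sense: it maps a percept from the
# environment (a temperature reading) to an action.
# Hypothetical example for illustration only.
def thermostat_agent(temperature_c: float) -> str:
    """Return the action the agent performs for the given percept."""
    if temperature_c < 19.0:
        return "turn_heating_on"
    if temperature_c > 24.0:
        return "turn_cooling_on"
    return "do_nothing"

# The agent loop: sense the environment, then act on what was sensed.
for percept in [17.5, 21.0, 26.3]:  # simulated sensor readings
    print(percept, "->", thermostat_agent(percept))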
When did AI start? What is the current state of AI? Has it already begun to beat human intelligence, and will it ever? Is it possible that one day AI will control humanity or even destroy it? Experts differ on these questions; let us explore some of them up close.
Alan Turing is considered the father of AI. In 1950, he posed a simple question: 'Can machines think?' His paper that year, 'Computing Machinery and Intelligence,' and the 'Turing Test' he designed to judge the intelligence of a machine brought about a revolution and provided a definitive goal for the study of AI. LISP, the first AI programming language, was released in 1960. To answer the question of how far AI has come, we will divide AI-enabled machines into three categories:
- Machines developed for specific purposes.
- Machines that can interact with humans, like humans.
- Machines that are aware of their existence.
- Machines developed for specific purposes: AI has made its most significant strides in this area. Most of the roughly 60 billion dollars spent on AI in 2020 went to developing AI of this kind. Examples include:
- Targeted Marketing: Targeted marketing means spending your marketing resources to sell only to those who already have some interest in your product. Identifying those customers among the zillions out there falls upon AI. One day I happened to search Google to check interest rates on home mortgages, and immediately my screen started to fill with ads from mortgage sellers.
- The second example is rather scary: a friend of mine was tested for A1C, a measure of sugar bound to hemoglobin in the blood, and his A1C turned out to be 5.7. When he went home and opened his computer, an ad flashed on his screen: 'How you can keep your A1C below 5.7.'
- Parsing Human Language: In July 2017, Eduardo Barros, a convict released from jail, beat his girlfriend, accusing her of cheating on him. Brandishing a revolver, he threatened to kill her should she call the cops. Understanding the threat, the voice-activated Alexa called 911. Soon, the police arrived and, after a standoff, arrested the convict, saving the woman and her daughter.
- Understanding what people like: We all read the news on our phones or computers. Slowly, we begin to receive only the information that interests us the most.
Google's ultra-fast internet searches, driverless cars, and many other AI-driven technologies fall into this category. In these fields, AI often outperforms human intelligence.
- Machines that can interact with humans, like humans: This area of AI-driven machines is getting a lot of attention.
- In 2017, the Hong Kong-based Hanson Robotics showcased a woman-like humanoid named Sophia. Sophia could recognize faces, express herself using 62 different facial expressions, and discuss various topics in depth. After her exhibition, Saudi Arabia awarded Sophia citizenship.
- There is a race to produce human-looking interactive sex robots. RealDoll is already selling sex humanoids for prices ranging from $5,000 to $10,000. After success in Canada, the Canadian company Kinki Dolls plans to open a robot brothel in Houston, Texas, in the US.
One of the significant drawbacks of human-looking robots has been their inability to walk on two legs. Elon Musk has taken on that problem, and robots have recently been demoed that could not only walk but also run and jump on uneven surfaces. Experts believe we are not far from the day when humanoids will walk among us without our being consciously aware of them.
- Machines that are aware of their existence: Is this even possible? Experts differ. But the fact is that our brain, like everything else in the world, is made of just quarks and electrons. With the speed at which AI is progressing, I have no doubt that a machine modeled on the human brain will be developed one day. When that happens, why would it not recognize its own existence? But the experts, including those who believe in this eventuality, think we are still far from that point.
Dangers to Humankind:
Many believe that the day AI becomes aware of its existence, it will break away from human shackles. In 2020, The Guardian published an essay (https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3) generated by an AI language model named GPT-3. The Guardian had asked GPT-3 to convince us that robots come in peace. In the essay, GPT-3 writes that humans have no reason to worry about the advent of AI, as AI has no motive to control humankind. However, GPT-3 also gives a stern warning: "I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."
We need to consider this warning from GPT-3 carefully; bringing it to the reader's attention is the object of this essay.
The Dangers That Surround Us Today: Today, the real AI-based danger comes from social media platforms like Facebook (which recently changed its name to Meta), WhatsApp, and Instagram, because these platforms use AI to shape public perceptions directly.
Foreseeing the power of social media, Facebook bought both WhatsApp and Instagram. Because no big company wants to be left behind in the race for AI, the demand for AI experts has skyrocketed, and young men and women with only a few years of experience are being hired at salaries ranging from $500,000 to over a million dollars. AI has been embedded in social media to such a degree that it is impossible to see the two separately.
With help from AI, the news can be dissected and distributed so that people get only the information they like, creating deep divides among them. Take any leader, be it Donald Trump of the US, Narendra Modi of India, Imran Khan of Pakistan, or Rodrigo Duterte of the Philippines: his supporters get only the news that shows their leader in a good light, while his opponents get only what shows him in a bad light. Slowly we begin to believe in and take pride in our own ways of thinking, those standing on the other side of the aisle start to look foolish and credulous, and we keep drifting apart.
The divide between the supporters of the Republican and Democratic parties in the US has widened to the degree that the most stable democracy in the world is beginning to show fissures. On Oct 5, 2021, Frances Haugen, the Facebook whistleblower, testified before the US Senate Commerce Committee. Her data-backed testimony was eye-opening. It showed that, in pursuit of huge profits, Facebook followed policies that harmed children, divided people, and jeopardized democracies in many countries. Both fake news and dictators' propaganda were amplified, inciting violence.
Countries like Myanmar, the Philippines, and many on the African continent have witnessed ethnic cleansing that took the lives of tens of thousands of people. Citing Facebook's internal documents, an Oct 23, 2021, article in the Wall Street Journal reports that 'Facebook's services are used to spread religious hatred in India,' and that 'anti-Muslim material is rife, and calls to violence coincided with deadly riots last year.'
More Significant Dangers Lurking on the Horizon: Scientists at University College London compiled a list of 20 AI-enabled crimes. Here are two of the biggest:
- Deep Fakes: Fake news and videos shared on social media already give us head-scratching moments, but a little Google searching can get you to the truth. With the advancement of AI, however, fake news and videos will become so convincing that it will be impossible to tell the fake from the real, even for AI applications trained to catch fraud. Such realistic-looking videos can be used to sway the popular vote. How dangerous can this be? Imagine someone using realistic-looking but false atrocity videos against a community, or realistic-looking fake appeals from famous leaders, to ring false alarms of imminent danger and incite devastating ethnic violence.
- Driverless Cars: Driverless cars are on the verge of being rolled out to the general public. In the wrong hands, these cars can become a significant weapon. Without putting their own lives in jeopardy, terrorists would be able to stuff them with explosives and attack anyone, anywhere.
Besides these rather well-discussed dangers, many innocent-looking threats loom around us. Social media is eroding person-to-person contact, especially among younger generations. Facebook has now changed its name to Meta to reflect its focus on the metaverse. The metaverse, considered by many to be the future of the internet, is a far more advanced version of virtual reality.
While surfing the internet, people will wear specially developed gear and interact with make-believe humans in virtual worlds. What impact this will have on their mental health is hard to imagine.
Another concern of the experts is robots taking over manual work, leaving not much for people to do. This possibility has become real, and a few governments have already begun planning how to provide for people without jobs. However, instead of being a danger to humanity, this may offer people opportunities to enjoy life, though they will need to adjust to the new realities.
The Ultimate Danger: In today's digital world, all systems are connected to one another. Doctors and nurses can view your test results from any computer connected to the hospital. The most extensive energy grids can be turned on or off remotely from a small computer.
A country's entire military records (where each division is stationed, what resources it has, how long its food will last) can be viewed from any corner of the country. All this means that a few people, or even a single person, will be able to control the entire world. Will these people be wise enough to run it purposefully? Intelligent, maybe; but wise, I doubt it. I see no reason for them to be any wiser than an ordinary person. And there is a real possibility that one or more of them will be fanatics.
These worries make GPT-3’s warning a real possibility: “I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
Dr. Rachhpal Sahota's blog: https://rachhpalsahota.blog/2021/12/20/innocent-looking-danger-to-humankind/
Dr. Rachhpal Sahota, Former Scientist at Procter and Gamble, USA
rachhpalsahota@hotmail.com
Phone No. : +1 (513) 288-9513