Arlen Dancziger

The Dangers of Artificial Intelligence

Updated: Dec 3, 2020

When people think about Artificial Intelligence (AI), they picture Terminator-style robots marching around town, killing every human in sight and starting their own society. Although that isn't completely out of the question, the reality we may one day face will likely look quite different, yet still scary.





I think a lot of people conflate AI with robots. The two are not necessarily separate, but for the most part AI is not about building humanoid robots. It is about building intelligent computers, capable of solving problems and thinking in a more human way. Although these two concepts may one day combine into super-computer, machine-gun-wielding humanoid robots, that isn't the focus of AI research.


As stated before, there is still cause for concern. The possibilities and uses of this technology are endless, and they might not all be for the benefit of the entire human race. I'll talk about some of the main concerns here, and my next article will cover the possible benefits of AI.


Let’s start with the obvious. AI is already better than humans at some tasks. A great example is Go, a strategy game believed to be one of the oldest board games in the world. A program named AlphaGo beat the best Go players in the world. Then a successor, AlphaGo Zero, beat AlphaGo 100-0. How are these machines so much better than us?





It starts with machine learning. Machine learning boils down to a simple process: give a computer a set of rules, feed it specific, detailed examples, and let it teach itself to complete a task. After plenty of repetition, the computer masters the task through experience rather than being programmed step by step from the ground up. A tiny sketch of this idea follows below.
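To make that concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not anything from a specific AI system). The program is never told the rule "add the two numbers"; it only sees example inputs and their answers, and it adjusts two internal weights over many repetitions until its guesses match.

```python
import random

# Training examples: pairs of numbers and their answers (the "experience").
examples = [((a, b), a + b) for a in range(10) for b in range(10)]

w1, w2 = random.random(), random.random()  # the model's adjustable "knobs"
learning_rate = 0.001

for epoch in range(1000):                  # repetition is how it learns
    for (a, b), target in examples:
        guess = w1 * a + w2 * b            # the model's current guess
        error = guess - target
        w1 -= learning_rate * error * a    # nudge each knob to shrink the error
        w2 -= learning_rate * error * b

print(w1, w2)            # both end up near 1.0: the program "discovered" addition
print(w1 * 7 + w2 * 5)   # roughly 12, without ever being programmed to add
```

The point is that nobody wrote the rule into the code; the rule emerged from examples and repetition.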


Machine learning is also the basis for deep learning. Deep learning builds on the idea that a machine learns through experience, but it is structured as a neural network, loosely modelled on the ones in the human brain. So, instead of being fed specific, detailed information, the network takes in vague, raw information, decides for itself which parts matter, and learns from there (it can also be monitored by humans, which should offer some relief against the thought of being completely replaced). If machine learning is like a computer learning to add the numbers it is given, deep learning is like a machine looking outside, finding things to add, and then adding them up itself. A small sketch of a neural network appears below.
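Here is another minimal, hypothetical Python sketch (again my own illustration, with assumed numbers and a toy task). A tiny neural network is shown raw inputs and desired outputs; it builds its own internal "hidden" features and strengthens or weakens its connections until the pattern emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # raw inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR pattern)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))          # input -> hidden connections
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))          # hidden -> output connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):                      # learning through repetition
    hidden = sigmoid(X @ W1 + b1)              # the network's own internal features
    output = sigmoid(hidden @ W2 + b2)
    error = output - y
    # Backpropagation: push the error backwards and adjust every connection.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))   # close to [0, 1, 1, 0]: the network found the pattern itself
```

Nobody told the network which features of the input mattered; it invented its own in the hidden layer, which is the essential difference from the earlier example.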


So what is the upshot of all this? There are definitely concerns. The late Stephen Hawking, a brilliant mind, thought AI could be the most dangerous thing we ever invent. Elon Musk has shared similar sentiments. Experts across the field have their worries. But what exactly are they worried about?





The concerns can be split into two categories: pre-singularity and post-singularity. The singularity refers to the point at which AI eclipses general human intelligence. As stated before, AI has already surpassed humans in specific domains, but it has not yet reached general, real-world decision-making, which includes morality and the other complicated facets of the human psyche.


The pre-singularity issues are less life-threatening and more way-of-life-threatening. Considering we are already in this stage of AI, and more intricate technology is developed every day, we can see the repercussions plainly.


We have designed and integrated robotics into our factories. This means more profit for the companies that use robots, but fewer manufacturing jobs, and possibly more poverty. Self-driving trucks would mean the end of an entire profession. And this may be just the start, as AI gets its bearings in data organization and even the arts.





We have integrated AI-based facial recognition systems into our phones and airports, but they have shown troubling racial biases. Some facial recognition systems fail to recognize black faces, and have even labelled black people as non-human primates. Others have rejected Asian users' photos because of the shape of their eyes, prompting them to open their eyes even though their eyes were already open. Cars with new radar and automatic-braking technologies have also had trouble detecting black pedestrians.


AI and data applications are another hot issue. AI systems can sift through and organize data faster than we can, but in an age of untrustworthy sources and biased news outlets, AI could have trouble determining what is real and what isn't. It could also become a source of inaccurate news through the development of deepfakes: synthetic videos in which one person's face is superimposed on someone else's body. These fake videos can be almost indistinguishable from real ones, and could be used to incite political violence, among any number of other disturbing applications.





The last pre-singularity issue I can think of is automated weaponry. If AI is used in war or policing, it could lead to countless false identifications and deaths. Considering the problems facial recognition already has in everyday life, it is easy to see how mistaken identities could quickly turn fatal if the weapons are automated.


Post-singularity issues are species-threatening rather than way-of-life-threatening. These are the issues that scare the people involved in AI today. If we create an AI capable of general human intelligence that uses deep learning to make itself smarter, the problems facing us could be dire.


Firstly, AI could have values distinct from ours, partly because the human race itself does not share a single set of cultural values. To put it simply: how do we teach AI to consider our morals and values in its decision-making when we don't have a universal set of morals and values? Whose values do we base our AI on?


This issue becomes even more significant when you consider that the effort to build AI is split across the globe. If a North American company builds the first super-intelligent AI, hopefully it has our values; if a company in another country builds it first, hopefully it still has our values. The companies building AI are only making this worse: they are treating it as a race to build the first generally intelligent AI rather than working together to build a safe one.


The focus is also not on integration with humanity. Companies are building AI individually, and in a way that makes the AI autonomous. If the focus shifted toward integrating AI into our brains or bodies, we might be better able to control the outcomes of AI development.





But integration is not as peachy as it may sound. We arguably already have a form of AI integration: the smartphone in your hand is an extension of your brain to almost all the information on Earth. You no longer have to remember specific facts; you can look them up when you need them. Now extend this to integrating AI into our bodies. We imagine superhuman memory with unlimited storage in our brains, but AI integration could also be dangerous. It could reduce our freedom: if the AI decides we are doing something risky or illegal, it might be able to stop our bodily movements. It could be hacked, forcing us to do things against our will or tormenting our psyche with implanted, disturbing thoughts.


The biggest concern, though, is that AI will surpass humans and dominate the planet, perhaps even kill us all. This idea is based on the rate at which AI intelligence has improved over time, which appears to be exponential. With machine learning and deep learning, it doesn't need human help to improve itself. Once the first AI achieves general human intelligence, it may surpass us and reach superhuman intelligence within minutes. Days later it could be unfathomably intelligent. Years later it could find a solution to global warming, invent things we couldn't even dream of, and simulate 50,000 years of human advancement in mere minutes. The possibilities are endless, and beyond our comprehension.


This sounds like a good thing, but only if the AI is created with human values in mind. If AI deems humans unnecessary, we could be wiped out. Think of how we treat an anthill. We leave it alone, we watch the ants working away, we might even toss it a crumb of food. But when we want to build a new housing development, we destroy the anthill without hesitation. That may be exactly how AI sees us in its decision-making.





Some people may not see this as the worst outcome; humans haven't exactly been good to the planet, and there are certainly some evil people. But the position that it is in our best interest to be wiped out isn't one I think we should hold. Although, if you read my next article, I will give an argument for why we might be okay with a situation in which we are wiped out.


There is an opportunity now to create the coolest thing our species has ever made, something that could solve all our problems if we do it right. But what scares most AI experts is that we lack a unified vision for this project. Some have argued that if we want to do AI properly, we need a public regulatory body to oversee AI development. It shouldn't be treated as a profit-hungry race to the top. We need to build AI safely and properly.


The problem, though, is that we aren’t. And that could spell the end of life as we know it.


Thanks for reading! A like, share, comment, or subscription would be much appreciated.

