Artificial Intelligence (AI) has become something of a buzzword in the media in recent times.
As the public is increasingly exposed to technologies such as generative AI and Large Language Models (LLMs) through media coverage as well as personal use, conversation about their dangers has grown.
Many fear the changes that the arrival of AI will bring to our everyday lives, and articles with headlines such as ‘The True Dangers of AI are Closer Than We Think’ and ‘Artificial Intelligence Could Lead to Extinction’ are fuelling these concerns.
To be clear, I’m not saying that we shouldn’t be having the conversations those articles describe. But, driven by ‘click culture’ (the urge to write ever more provocative headlines so that people are more likely to ‘click’ and read), these articles contribute to a dangerous narrative: that AI is not only scary, but out of control and must be stopped.
In reality, AI is also being used around the world for good, as reflected in headlines such as ‘Breathing new life into fight to save the seas with artificial intelligence’, on using AI to detect illegal fishing, and ‘Spy agency turns to AI to tackle child abuse.’
How the AI depicted in film differs from reality
Many misconceptions surround how AI works and what it is capable of. Perceptions are warped by Hollywood’s somewhat fantastical takes on AI, from Stanley Kubrick’s 1968 film ‘2001: A Space Odyssey’ to the recent Mission: Impossible films tackling AI gone wrong.
While most know that these are works of fiction, cinema has the power to alter our perceptions of the world around us, even subtly. As a result, many believe that AI can think for itself and produce unique things such as images and text from scratch. The reality is that LLMs and image-generating AI models are trained on huge datasets consisting mainly of human-made works.
AI technology is essentially computers spotting patterns and making predictions in order to perform certain tasks. The patterns that AI models are trained to find are chosen by humans, and every reputable AI developer will build human control factors into every stage of creating a model – a point of human intervention I will return to later.
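To make that concrete, here is a minimal, hypothetical sketch of what ‘spotting patterns and making predictions’ looks like in practice, written in Python with the scikit-learn library; the toy emails and labels are invented for illustration. Everything the model learns comes from human-written examples with human-chosen labels.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Human-written examples with human-chosen labels: the "pattern" the model
# will find is defined entirely by this labelled data (invented for this demo).
emails = [
    "win a free prize now", "claim your free money",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam, decided by a person

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # turn text into word-count patterns

model = LogisticRegression()
model.fit(features, labels)  # fit patterns found in the human-labelled data

# "Prediction" is matching new input against patterns humans labelled before.
print(model.predict(vectorizer.transform(["free prize money"])))  # likely [1]
```

Swap out the human-chosen labels and the pattern the model finds changes with them – the model has no opinion of its own.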
Maths anxiety and making STEM topics more accessible through education
Fearmongering is particularly powerful when the topic at hand is perceived as complicated and academic; many people believe that they could never even begin to understand how AI technology works.
Perceptions of maths as difficult – including the well-documented phenomenon of ‘mathematics anxiety’, studied extensively by researchers such as Dénes Szűcs – lead people to shy away from STEM. This is where I believe much can be done to increase awareness and understanding, and therefore foster more positive attitudes towards embracing the potential of AI.
I believe that improved STEM education in schools will give the public a deeper comprehension of how AI works, and of how human input is still essential for AI to function properly. Used appropriately, AI can prove an invaluable tool within the workplace and beyond, supporting many in their personal and professional lives.
Attitudes towards mathematics-based subjects must therefore change for AI to be accepted as such a tool. For example, if it were better known that AI does not produce bias against certain racial groups out of thin air – that the bias stems from the people who created it and the datasets it was given – maybe AI would not be feared as an inherently discriminatory tool, but recognised as a tool that can be misused by discriminatory owners.
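To illustrate, here is a deliberately toy Python sketch with invented data: a model trained on past hiring decisions made by a biased human simply reproduces that bias. Nothing about the algorithm is prejudiced; the prejudice arrives with the dataset.

```python
from sklearn.linear_model import LogisticRegression

# Each (invented) applicant: [qualification_score, group], where group is 0 or 1.
applicants = [
    [9, 0], [7, 0], [4, 0], [3, 0],
    [9, 1], [7, 1], [4, 1], [3, 1],
]
# Human-made hiring decisions: group 0 is hired when well qualified,
# group 1 is never hired, however qualified. This is the bias in the data.
hired = [1, 1, 0, 0,
         0, 0, 0, 0]

model = LogisticRegression().fit(applicants, hired)

# Two equally qualified applicants differing only in group membership:
# the model typically reproduces the human bias it was trained on.
print(model.predict([[9, 0], [9, 1]]))  # expected: hire group 0, reject group 1
```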
The importance of legislating against AI misuse
So, for me, education is vital. But education alone is not enough. There are many cases where AI has been misused: to create deepfake pornography, to encode dangerous bias against minorities, and to plagiarise and cheat in academic settings (by no means an exhaustive list).
This is where legislation must fill in the gaps, protecting those who need it from the dangers of AI misuse and ensuring that AI need not be feared. Pieces of legislation such as the EU AI Act, the world’s first comprehensive regulation on artificial intelligence, are laying the groundwork for future lawmakers to consider the impact that AI will have on humanity.
Within the AI Act, the EU requires companies developing high-risk AI systems to meet transparency obligations and to build in human oversight, so that their models can be explained and remain under human control. Other safeguarding measures companies should adopt to reduce the dangers of unregulated AI include keeping bias to a minimum in dataset-labelling practices and avoiding fully unsupervised learning, so that models do not silently drift and age.
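As a sketch of what such supervision can look like in practice, the hypothetical Python example below (using numpy and scipy, with invented data and an invented threshold) monitors a deployed model’s input data for drift and alerts a human reviewer when the live distribution no longer matches what the model was trained on.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # data the model saw
live_feature = rng.normal(loc=0.8, scale=1.0, size=1000)      # shifted live data

# Two-sample Kolmogorov-Smirnov test: are the two samples plausibly
# drawn from the same distribution?
statistic, p_value = ks_2samp(training_feature, live_feature)

if p_value < 0.01:  # alert threshold invented for this demo
    print(f"Drift detected (p={p_value:.2g}): alert a human to review and retrain.")
else:
    print("No significant drift detected.")
```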
In terms of end-user misuse, such as deepfaking, legal action should be swift, and companies should be held responsible for providing the tools used to commit such acts. To ensure this happens, governments should be taking steps to introduce laws that punish such misuse, akin to those found in the EU AI Act.
Real-world examples of AI being used for good
In terms of the positive impacts of AI, applications in medicine, for example, are already showing progress in areas that desperately need it, such as cancer research. You may have heard the story of an AI developed to identify types of baked goods for a Japanese bakery chain that found unexpected success in identifying cancerous cells under a microscope.
Or, in terms of sustainability, maybe you read about how Schneider Electric has used AI to automatically monitor and control energy usage to help reduce carbon emissions, moving us towards a more sustainable future.
AI computer vision has been deployed to quickly locate a child who had gone missing at a major sporting event, and just recently AI was used to read ancient scrolls from Herculaneum, near Pompeii, thought to be unreadable due to damage from the heat and ash of Vesuvius.
As we can see, AI, when placed in the correct hands, can be a powerful force for good in all areas of society. For people not to fear it, we as a collective must take steps to ensure transparency and accountability in the use of AI, and give people the tools to truly understand its potential for humanity.
-
Originally published on SheCanCode - 14 March 2024
Ellen is an Account Manager at Ipsotek – an Eviden business specialising in computer vision and AI solutions – blending her educational background in astrophysics with a passion for technology. With a sharp analytical mind and a knack for problem-solving honed through her academic journey, Ellen brings a fresh perspective to her role. She excels in translating complex technical concepts into practical solutions.