
Is Artificial Intelligence a Force for Good or Evil?


The year 2022 marked a significant milestone in artificial intelligence (AI) as ChatGPT, a chatbot developed by OpenAI, captured widespread attention, reaching 100 million users within two months of its launch. This remarkable AI tool can emulate human language, producing intricate responses to complex inquiries within seconds.

The rapid progress of AI did not halt there, however. A few months later came the unveiling of GPT-4, a new “multimodal” AI model that responds not only to text but also to images. As AI technology advances at an unprecedented pace, the question looms large: will AI be a force for good or a destructive power?

With this new technology, AI can now pass bar exams or write 40% of a software engineer’s code. While impressive, this raises concerns about plagiarism, academic fraud, and the impact on employment opportunities. AI has also advanced to the point where it can imitate a person’s voice or appearance, producing so-called deepfakes. These fuel scams and misinformation, and deepfakes have also been used to create non-consensual sexually explicit material.

While AI offers enormous benefits and could help solve currently intractable global problems, its rapid development also poses severe risks.

The financial services industry has been particularly quick to embrace AI, harnessing its capabilities for challenges ranging from fraud detection to customer personalisation. Companies and governments are currently spending billions of dollars on developing AI systems. As these systems grow more advanced, they also raise widespread ethical and safety concerns.

For one, critical concerns have been raised across several sectors about AI’s impact on employment. According to a PwC analysis, AI, robotics, and other forms of smart automation have the potential to bring major economic benefits, contributing up to $15 trillion to global GDP by 2030. At the same time, the report expects automation to displace around 3% of existing jobs in its first wave (by the early 2020s), rising to as much as 30% of jobs by the mid-2030s. While the pace of automation will vary across industry sectors and countries, AI, though still in its early stages, already threatens to disrupt employment in an unprecedented way.

Nevertheless, there are also key arguments in defence of AI’s impact. Firstly, while replacing some jobs, AI will generate demand for new ones, much as past technological developments did. Moreover, jobs requiring an element of humanity, such as social intelligence and empathy, are expected to be less heavily affected. And for now, people are still working alongside AI more than being replaced by it.

Is AI an existential risk to humanity?

Researchers, including Amy Webb, head of the Future Today Institute and a New York University business professor, worry that AI may be the ultimate existential risk to humanity. In her recent presentation at the SXSW conference in Austin, Texas, Webb discussed her concerns, noting that she envisions the possibility of AI going in one of two directions over the next 10 years.

In an optimistic scenario, AI development would focus on the common good, aiming to improve human life and facilitate daily tasks. It would also allow individuals to opt in to whether their public information is included in an AI’s knowledge base. The second scenario, as Webb describes it, involves less data privacy, a centralisation of power, and AI that anticipates users’ needs and, in doing so, stifles their choices.

Unfortunately, she gives the optimistic scenario only a 20% chance. Still, the direction the technology takes, Webb noted, depends largely on how responsibly companies develop it. Another factor is government, and whether it can move quickly enough to establish legal guardrails that guide technological development and prevent misuse.

On top of that, Demis Hassabis of DeepMind, a subsidiary of Google’s parent company Alphabet, recently commented: “I would advocate not moving fast and breaking things.” This validated the cautionary approach and reinforced growing concerns about AI’s rapid development. Among more mainstream figures, concerns have also been voiced by Elon Musk. The CEO of SpaceX and Tesla noted that however small one may judge the probability, AI has the potential to destroy civilisation.

Musk, despite being one of the early investors in DeepMind and a co-founder of OpenAI, has also compared AI to “summoning the demon.” He has been a vocal advocate for AI regulation, arguing that governments must take a proactive role in regulating AI development to ensure it is aligned with humanity’s best interests. Now, in a bid to take on OpenAI, the company he co-founded and later left, he has launched a rival AI start-up: X.AI.

Could AI lead to extinction?

While concerns about human extinction may strike some as the overstatements of alarmist scientists, their prominence highlights the urgent need for careful consideration of the risks posed by AI and the importance of responsible AI development. Recently, an open letter co-signed by Musk and thousands of others generated controversy. The letter, also signed by AI researcher Gary Marcus and Apple co-founder Steve Wozniak, called for a six-month pause on the development of systems more powerful than GPT-4 until their capabilities and dangers can be properly studied and mitigated.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter reads.

“Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”

Still, the letter’s controversy stems from the revelation that some signatories were fake, while some researchers whose work was cited said they did not agree with its contents. Following its publication, the Distributed AI Research Institute issued a statement criticising the letter, contending that the notion of “potentially existentially dangerous AI with God-like abilities” is an overhyped concept used by companies to gain publicity and funding, and that regulatory efforts should instead focus on transparency, accountability, and the prevention of exploitative labour practices.

With the risks posed by AI unprecedented and difficult to assess, predicting its potential impact remains a challenge. While some speculate about a utopian future, others fear an AI-induced apocalypse. It is also possible that the optimistic and pessimistic scenarios will coexist, as they are deeply interconnected.

It is clear that the development and deployment of AI must be approached with caution and care. In light of these concerns, there is an urgent need to focus on research that maximises the societal benefits of AI and aligns with humanity’s best interests. Rather than pursuing technological advancement alone, researchers must prioritise creating AI that is not only capable but also safe and beneficial for society.

Author: Barbara Listek

#AI #OpenAI #GPT4 #Robotics #Regulation #SXSW

See Also:

ChatGPT: The Next AI Revolution or Sheer Hype? | Disruption Banking

Opening the doors to South by Southwestern #SXSW23 | Disruption Banking

