Does AI Pose Existential Threat to Humanity?

AI and Humanity

I spoke with a Neusroom journalist to unpack the benefits and risks of AI in today’s world. We also discussed how companies can invest in the development of AI in an ethical manner.

1. What are the current measures or safeguards in place to mitigate the risks of AI turning into an existential threat?

Napa: For now, when we speak of AI posing an existential threat, I believe we are not referring to AI wiping out humans. Rather, we are referring to the ability of AI systems to completely surpass human intelligence and perhaps act of their own accord. At the time of writing, there are no general regulations guiding AI systems or how they are developed and applied by companies and individuals. However, in March 2023, several tech leaders signed an open letter calling for a pause of at least six months on the training of AI systems more powerful than GPT-4.

According to them, this pause would create time to reflect on the impact of these systems and how to manage any risks they pose. Aware of both the positive impact of AI and its potential risks, these tech leaders also called on governments to step in if AI labs fail to comply.

Although many fear that AI will replace jobs currently done by humans, I hold a different opinion. I strongly believe that AI will create new jobs, and that those who fail to make the best use of these systems are likely to be replaced by those who do. Right now, businesses are trying to cut costs and be as efficient as possible, so they would rather do more with AI tools where they can.

Rather than worry about being replaced by AI, I think now is the best time to start upskilling and mastering how to infuse AI into your job. New skills like prompt engineering, the practice of crafting and refining inputs that steer language models toward particular tasks and desired outputs, are emerging, and you can only get the best out of AI if you have studied how to ‘influence’, or prompt, it properly.

2. Are there any specific areas of AI research or development that you believe need more attention to prevent existential threats?

Napa: Currently, there are four groups of people. First are those who see the good in AI but are concerned about the risks it could pose as the systems are trained and improved. The second group also sees the good in AI but is not bothered about any potential risks the systems pose. The third group does not use AI and sees it only as a negative tool, out to reduce human creativity and innovative thinking. Those in the fourth group are completely unaware of AI systems in general.

According to AI scientists and engineers, GPT-4 is currently the most powerful AI system in existence. However, based on our use of ChatGPT and other OpenAI products, results from these systems are still largely inaccurate. Instead of developing more powerful systems, it would make more sense to train the existing models so that they produce more accurate and robust results. I am also of the opinion that these systems should be trained to exhibit only positive tendencies and to recognise harmful instructions, to ensure the safety of mankind.

3. How can Nigerian companies invest in the development of AI in an ethical manner that doesn’t pose a threat to human existence?

Napa: Nigerian companies, especially in the tech space, are already applying these AI systems. However, before more people are equipped to develop AI systems, I suggest the government implement legislation to ensure these systems are not used to enable criminal platforms and behaviour. Some industry leaders, like Mo Gawdat, former chief business officer of Google X, have advocated introducing higher taxes for companies that use AI systems. This way, we can generally curb how fast the systems are developed and deployed.

In terms of developing AI in an ethical manner, the best we can do is to continue creating a balanced view of both the risks and rewards of these systems. We must also encourage companies and those they work with to use this technology for the good of humanity.

If companies ever have to choose between developing unethical but profitable AI systems and developing ethical ones that take longer to turn a profit, I hope they do what’s right and choose the latter.

Read the full Neusroom article by Emmanuel Azubuike.
