The potential of artificial intelligence to transform every industry is limitless. AI tools are already at work in healthcare, manufacturing, transportation, and finance.

But can we trust AI or a robot to act in a way that we want it to act? And is that trust based on a good, solid foundation?

Discoveries in AI are moving at such warp speed that we sometimes trust it without knowing whether that trust is warranted. Can we rely on robots and AI technology without understanding the consequences of that trust?


1. Do robots have integrity?

Or are they clumsy and error-prone?

If you’ve used chatbots such as ChatGPT, you have most likely found that the results vary and that their power to respond to prompts has limits.

We expect robots to have integrity and to be right ninety-nine percent of the time, all the more because we created them with the intention that they would serve us reliably.

But who is monitoring our use of AI and the truthfulness of its output? Since a chatbot collects and stores everything you enter, how do users know how that data is used? Moreover, can a third party spy on your use of a chatbot?

A chatbot doesn’t cite its sources or say where it gathers information from, and it can even make up references. Fact-checking is still in development. Without a human eye to intervene and check the work, using a chatbot can amount to plagiarism, even if the user is not lifting words directly from a source. In fact, The New York Times sued OpenAI for using its content to train its software.

Academics are wary of plagiarism in AI-powered writing tools. There are ways to detect plagiarism, but it’s like cutting off the head of a hydra: the solution is only temporary before people find new ways to outsmart detection.
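To see why detection is such a cat-and-mouse game, consider a toy sketch of the word n-gram overlap check that many detectors build on (this is an illustrative assumption, not any real detector’s algorithm): even light paraphrasing destroys the shared n-grams the check relies on.

```python
def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str, n: int = 3) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

# Hypothetical texts for illustration only.
source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping dog"
paraphrased = "a fast auburn fox leaps above a sleeping dog"

print(overlap_score(copied, source))       # high overlap: likely flagged
print(overlap_score(paraphrased, source))  # no shared trigrams: slips past
```

A lightly edited copy still shares most of its trigrams with the source, while a simple word-for-word paraphrase shares none, which is why each new generation of rewording tools forces detectors to change tactics.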

The kind of information that AI makes available also creates a breeding ground for criminal activity and misuse. So not only must there be rules to ensure public safety; there must also be ways of monitoring the AI landscape.


2. Do we expect too much from robots?

There are many great things that AI can do for society. But our unavoidable dependency on AI technology leaves us prone to trusting it, and to expecting more from it than from real people.

Will we end up trusting robots more than humans? Their convenience tempts us to integrate them ever more deeply into our everyday lives. Robots can make just about everything more efficient: self-driving cars and robots that write complex computer programs are just two examples.

They also have the capacity to synthesize information from large data sets at breakneck speed. But even with that superhuman computational power, AI is just as flawed as humans.

Someone needs to be held accountable for the truthfulness and reliability of AI tools. If the creators of chatbots are not responsible, then who is?


3. Do robots have feelings?

The introduction of AI technology poses the question of just what it means to be human. Does a robot have reasoning and mental faculties?

Moreover, can robots outcompete humans? We are already seeing a lot of fear and worry that robots are making redundant jobs once done by humans. Not only that, the power of chatbots to mimic human language is astounding, and they can create masterful works of art in seconds. In a battle of wits against a robot, who would win?

Chatbot users must be cautious and understand that just because AI is more powerful than humans doesn’t mean it’s perfect. There is much work, research, and policy-making to do before we can fully integrate AI into our lives.

As with any new invention, we should approach AI with a dose of skepticism before it gains widespread acceptance, and be cautious about how we integrate it into our everyday lives.

Predictions that robots will replace the human workforce are not unfounded. Disruptive technology can shut down businesses and make them obsolete, but it also has the potential to create new ones.


4. The ethical use of AI

Policy makers have the onerous task of drafting rules for the ethical use of AI. The potential for misuse is rampant. As AI overtakes mere human intelligence, we must build responsibility and governance around its use. From plagiarism to criminal activity, we cannot overstate the seriousness of AI misuse.

There must be clear rules for the use of AI that everyone can abide by. Getting people to actually follow them is another feat, and it will only work if there is consensus and mutual trust.

In addition, there must be fairness and equality so that everyone can benefit from AI regardless of their background.


5. AI: friend or foe?

We shouldn’t fear AI, but we do have to regard it as both a threat and an asset to humanity.

Technology has moved so fast that there was no time to prepare, to weigh the risks against the benefits, or to put on the brakes.

If there ever was a time to be guided by the principle to “do no harm”, the moment is now. Not playing by the rules leaves everyone vulnerable to wrongdoing.

Like Frankenstein’s monster, the power we are unleashing can catch us unawares. Horror stories and science-fiction plots aside, it is up to the users of AI to monitor their own actions and to find solutions to the many new problems AI creates.

AI has generated a great deal of dialogue and new ideas. The more vocal we are about the ethics that should govern its use, the more likely we are to benefit from it.