The world has been going through a change, though perhaps the better word is upheaval. People are at risk of losing their jobs, and wars are breaking out across major regions of the world. Conflict is not new to humanity; our empires were built on sweat, tears, and blood.
But conflict today has changed.
Google’s AI systems are governed by strict policies, with detailed guidelines meant to ensure the software is used ethically and responsibly. Every year, Google releases a report on its AI Principles.
Earlier editions contained a peculiar line: “AI applications we will not pursue…technologies that cause or are likely to cause overall harm.” In the policy’s current iteration, that line is gone.
In a blog post, James Manyika and Demis Hassabis describe the update as a necessary evolution of AI safety policy. They write, “Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”
That is Google’s official stance for now.
But what does it mean?
There has always been a looming threat over us. The nuclear bomb confronted us with a harsh reality: weapons of mass destruction were real. Fortunately, the threat cut both ways, since both sides lived on the same Earth, and making the Earth unlivable was not an option.
Everyone collectively decided not to harm our own home.
But AI offers full autonomy over war. Drones that need no remote pilot, heavily armored machines built to annihilate the enemy: this is no longer the distant vision James Cameron once showed us on screen; it is close.
What are Google and Alphabet thinking? Perhaps they fear other entities will use AI with malicious intent, and they want the means to defend themselves and the country they are based in.
Or perhaps this is simply the next evolution of our conflict. Either way, it does not paint a pretty picture.
A force of this kind, fully autonomous and seemingly unstoppable, could cause mayhem and destruction hitherto unknown to mankind.
In war, people can feel. They can regret their actions and choose a different course. But what happens when war becomes a corporate affair, as unfeeling and efficient as automating an out-of-office email?
Is AI just another tool, or a technology we don’t fully grasp?
The questions are grim. And the answers are vague.