Ethics HW

Is “a ban on offensive autonomous weapons beyond meaningful human control” going to work?

  • Why will this work? / Why will it not work?
  • Can you implement this in practice?
  • Who are the stakeholders?
  • Who should evaluate new scientific products?
  • How will this implicate IPS research and development?


  • 	Autonomous agents are an increasingly ubiquitous presence in our daily lives. Modern technology is designed to be
    personalized, intelligent, and more robust than ever before. A natural application of autonomous agents is in weaponry.
    The appeal is obvious: an ideal agent capable of replacing a soldier allows controlled warfare without the cost of
    sustaining troops abroad or the risk to human life. It seems inevitable that warfare will rely heavily on autonomous
    systems in the near future. Yet a ban on offensive autonomous weapons that are beyond meaningful human control is
    likely to succeed.
    	The first argument for the success of such a ban is historical precedent. Modern nuclear and chemical weapons
    have, for the most part, been controlled by international powers. One may argue that nuclear war is always possible,
    or point to the use of chemical weapons under dictators like Saddam Hussein and Bashar al-Assad, but the Western
    world has largely been proactive in containing such weapons. Similarly, while the debate over gun control in America
    rages on, modern gun control in other developed nations has for the most part been accompanied by a decrease in
    violence. In other words, the most dangerous weapons developed today are regulated to a degree that the average
    citizen is never affected by their use over his or her lifetime.
    	Next, the current state of artificial intelligence regulation makes it unlikely that autonomous weapons could
    slip beyond the reach of a ban. No widely used autonomous warfare agent will be created without extreme scrutiny
    from the research community and international governments. The entities with the intellectual, financial, and
    material resources to develop a fully autonomous weapon are limited to major corporations, governments, and
    universities, and as it stands those entities have few incentives to build and deploy such agents: only defense
    companies and governments would have a practical application for them. Though artificial intelligence regulation is
    a subject of debate, few restrictions currently exist precisely because of the technology's limited scope. Yet in
    matters of public safety, governments such as California's have established regulations, and companies have accepted
    that regulatory oversight of artificial intelligence is necessary for the public to fully accept autonomous agents.
    	The question of banning autonomous weapons is not just for the United States; putting these bans into action
    will require the cooperation of major corporations and governments worldwide. Because this technology will likely
    develop far faster than the laws governing it, there should be an impartial committee of engineers that can define
    what an offensive autonomous weapon is, create safety regulations for such weapons, and spell out consequences for
    developing or using them. There will still be bad actors, but there will at least be rules outlining tangible
    consequences. Scientific products (new or old) whose classification is ambiguous would be evaluated by the
    committee, which might work, for instance, with the patent system or another legal framework. IPS research and
    development is a blurry area for legal regulation, since its products and features will be novel and cutting-edge;
    the committee's general limits on what counts as an offensive autonomous weapon will have to determine what is
    allowed.
    	Bans on offensive autonomous weapons will work. Similar bans on extremely dangerous weapons have been proven to
    work, and they can serve as models for clarifying what an “offensive autonomous weapon” is. Definitions of these
    weapons and the consequences of using them should come from a global committee, which should work with the major
    governments and industries most capable of manufacturing such weapons in the first place.
    California regulation: https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing