An Exclusive Interview with UN and ISO experts in Robots and Regulation

An Exclusive Tech and Law Center Interview with UN and ISO experts in Robots and Regulation, thanks to our distinguished fellow  

Dr. Yueh-Hsuan Weng, ROBOLAW.ASIA Initiative

 

Thank you very much for accepting this interview! Can you please tell us a bit about your background? What led you to become involved in robot ethics, and when did you begin pursuing laws for robots?

Yueh-Hsuan Weng:

Yueh-Hsuan Weng

I am a co-founder of the ROBOLAW.ASIA Initiative at Peking University, and formerly a researcher on the EU FP7 ROBOLAW project and at the Humanoid Robotics Institute at Waseda University. I received my Ph.D. from Peking University Law School in 2014; my dissertation was titled “The Study of Safety Governance for Service Robots: On Open-Texture Risk.”

I initially began my adventure in law and robotics at Waseda University in Tokyo, which is known for its world-renowned Humanoid Robotics Institute. During my stay there in 2004 as an exchange student, I decided to write a term report introducing the history of Waseda’s robotics research. During my interviews, many researchers unexpectedly introduced me to the importance of a revolutionary concept called “Human-Robot Co-Existence,” and it later became the root of my research.

 

Prof. Dr. Christof Heyns, UN Special Rapporteur on Extra-judicial, Summary or Arbitrary Executions

 Christof Heyns:

Christof Heyns

I am a professor of Human Rights Law at the University of Pretoria. I was appointed as the United Nations Special Rapporteur on extra-judicial, summary or arbitrary executions in 2010. This mandate focuses on the right to life, and threats to that right. In that context I have submitted reports to the Human Rights Council as well as the General Assembly on the use of force by the police, threats against specific groups such as journalists, and armed drones. In 2013 I presented a report to the Human Rights Council on Autonomous Weapons Systems in armed conflict, and I have subsequently presented a report to the General Assembly on the use of unmanned systems (whether armed drones or autonomous weapons systems) during law enforcement.

 

Prof. Dr. Gurvinder S. Virk, ISO TC184/SC2

Gurvinder S. Virk:

Gurvinder Virk

I am a Professor of robotics and have been involved in research and development of new robot solutions since 1995. These have evolved from robots for hazardous environments (where there are no humans, due to the dangerous situations) to service robots (where humans are everywhere and close human-robot interaction is needed). Around 2002 it became apparent to me that the regulations at that time, which covered only industrial robots, were inappropriate for the new emerging service robots, and this situation was a barrier to commercialisation of our R&D results. Industrial robots are largely designed to operate in workcells, and humans must stay outside due to the danger of harm. Collaboration between industrial robots and humans was, and still is, very closely regulated due to safety concerns. The new service robot applications demanded close human-robot interaction and even human-robot contact while the robot is operating. This was not allowed. I therefore approached ISO (ISO TC184/SC2) around 2004 with some colleagues from CLAWAR (a Network of Excellence on climbing and walking robots aimed at widening the application of robots) to raise the issue and highlight the need for new regulations. We were invited to an ISO meeting in 2005, where we presented the situation, and a new Advisory Group was set up after an international ballot and call for experts to investigate the situation officially within ISO. I was asked to chair this group, and about 30 international experts from Japan, Korea, Europe and the USA came forward to work on the issue. After one year I presented our results, and several new robot standardisation work groups were created under ISO TC184/SC2. These were:

WG1: Robot vocabulary

WG7: Personal care robot safety

WG8: Service robots

I was invited to chair WG7, which produced the EN ISO 13482 safety standard for personal care robots that allows human-robot interaction and human-robot contact. This was published in February 2014.

Other work groups have since been created on medical robots (JWG9, JWG35 and JWG36); these are joint projects between ISO and IEC TC62, which focuses on medical electrical equipment. This is because medical robots will also be medical electrical equipment, and the medical regulations are different from the previously considered robots-as-machines regulations.

In addition, work on modularity for service robots (WG10) has also been started recently. I am closely involved in all these new developments and am leading three of the work groups (WG7, JWG9 and WG10).

 

Q. Yueh-Hsuan, you are fighting for laws that will guide how humans interact with robots. Can you tell us more about this?

Yueh-Hsuan Weng: The recent incident involving the Japanese robot Pepper is worth discussing, as it may offer clues for thinking about the emerging co-existence issue of robots becoming members of our society. The question now is: are robot laws necessary?

First of all, we should be aware of the importance of public laws and regulations. This does not mean entering the debate over whether robots should be recognized as subjects of law under the Constitution; rather, it means creating public regulation for the design, manufacture, sale, and use of advanced robotics. Furthermore, robot ethics and legal regulation should not always exist in parallel, because from the perspective of regulation, robot law is simply a union of robot ethics and robotics.

 

Q. When we consider international public law, there is an urgent need for a set of new regulations for lethal autonomous weapons. Do you believe we have to ban killer robots? What are the current challenges?

Christof Heyns: I am of the view that fully autonomous weapons should be banned – in other words, those systems or usages that do not allow meaningful human control. This is because I do not think machines can adequately make decisions concerning distinction and proportionality, and as such they pose a danger to the lives of civilians. But even if they can make such decisions in specific cases, it also undermines the dignity of those targeted to have the decision whether they will live or die taken by a robot. The people targeted are literally reduced to the numbers used in an algorithm – they are reduced to being merely targets.

And then there is the issue of responsibility. The right to life is violated if there is not a proper system of accountability for possible violations. The question is who will be responsible when things go wrong – as they invariably do with the use of force – if humans have not exercised meaningful control. Responsibility in law and in ethics is to a large extent tied to and premised on control. With a system of machines taking lethal decisions, it is difficult to see who can be held accountable.

 

Q. Yueh-Hsuan, what is your opinion on Prof. Christof Heyns’ belief that fully autonomous robots meant to be used as weapons should be banned?

Yueh-Hsuan Weng: Yes, high risks accompany any military action by lethal autonomous systems. I believe that the public must be in the loop at all times. However, there is a regulatory gap in regard to controlling the circulation of core RT components. To quote my mentor Prof. Atsuo Takanishi at Waseda HRI: “Technically, there is a gray zone between Service Robots and Military Robots.”

 

Q. Prof. Heyns, do you support the existence of autonomous robots not necessarily designed to kill but to aid militaries? For example, robots that are designed for surveillance.

Christof Heyns: Yes, in many cases it can enhance human control and decision making, and indeed also save lives.

 

Q. Also, when it comes to autonomous robots that are not intended to be used as weapons, do you believe a set of laws is necessary to guide human interactions with such robots?

Christof Heyns: My concern is with using AWS to perform critical functions – the release of force.

 

Q. When robots become highly autonomous, shall we consider an active safety regulation like Isaac Asimov’s Three Laws of Robotics?

Yueh-Hsuan Weng: Beyond the “Risk Monitoring” mechanism, which will come in the near future, we will need another sophisticated “Risk Control” mechanism to reduce “Open-Texture Risk” – risk arising from unpredictable interactions in unstructured environments, when robots need high levels of autonomy to perform more unpredictable behaviors in the presence of humans.

Asimov’s Three Laws of Robotics may sound like a feasible way of implementing a “Risk Control” mechanism. However, in a previous study in 2009, I argued that the Three Laws of Robotics are unfeasible based on three potential problems: “Machine Meta-ethics,” “Formality,” and “Regulation.”

For example, “Third Existence” robots are not able to obey human laws written in our natural language, due to their limited ability to interpret terms and clauses comprehensively, resulting in the “Formality” problem. As for self-conscious “HBI” robots, they do not face the “Formality” difficulty, but we have to worry whether they might spontaneously violate human rules, or even create their own “Robot Law 2.0” for human beings to obey. On the other hand, I have proposed a “Legal Machine Language,” in which ethics are embedded into robots through code, designed to resolve issues associated with Open-Texture Risk – something which the Three Laws of Robotics cannot specifically address.
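To make the contrast with natural-language laws concrete, a machine-readable safety rule might look like the toy sketch below. This is only an illustration of the general idea of embedding rules as code; the rule names, thresholds, and data structures are invented for the example and are not part of Dr. Weng’s actual “Legal Machine Language” specification.

```python
# Hypothetical sketch: safety rules expressed as machine-checkable
# constraints rather than natural-language clauses. All values invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    speed_mps: float          # commanded speed, metres per second
    human_distance_m: float   # distance to nearest human, metres

# Each rule is a predicate plus a human-readable rationale.
RULES = [
    (lambda a: a.human_distance_m > 0.5 or a.speed_mps <= 0.25,
     "reduce speed to contact-safe level near humans"),
    (lambda a: a.speed_mps <= 2.0,
     "never exceed the certified maximum speed"),
]

def permitted(action: Action):
    """Return (allowed, list of violated rationales)."""
    violations = [why for check, why in RULES if not check(action)]
    return (len(violations) == 0, violations)

# A fast approach close to a human is rejected by the first rule.
ok, why = permitted(Action("approach", speed_mps=1.0, human_distance_m=0.3))
print(ok, why)
```

Because the rules are code, a robot controller can evaluate them before executing each action, which is the kind of built-in “Risk Control” the natural-language Three Laws cannot provide.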

 

Q. Recently, there was surveillance footage of a man kicking a robotic clerk. Can you talk about your reaction when you saw or heard about this footage, and how it fits in with what you are working on?

Yueh-Hsuan Weng: The incident has received immense scrutiny from the public, as it concerns a humanlike sociable machine that was inappropriately treated. When I heard about this footage, I was not surprised at all, as incidents like this have occurred before. During the 19th century, steam-powered locomotives were deemed “monsters” and were therefore inappropriately treated in Shanghai and Yokohama when they were initially introduced to Asian society. If an object such as an ATM or vehicle had been vandalized instead of Pepper, the moral impact would have been much less. An evolved set of ethical principles for sophisticated and intelligent machinery like Pepper has yet to be developed.

 

Q. Why do you think current laws, like damage to property, are not enough to protect robots? Why do you believe a new set of laws is necessary?

Yueh-Hsuan Weng: My main argument is that the current laws do not help human beings to project their empathy while interacting with humanoid robots. We may soon need a new set of laws, such as “Humanoid Morality Acts,” to provide robots a special legal status called the “Third Existence.” In addition, similar to pet owners, “Third Existence” owners should bear higher civil liability; this can also help relieve robot manufacturers of excessive product liability arising from advanced robots’ uncertainty.

 

Q. How would you respond to people who say “it’s just a robot, they can’t feel or think anything” and therefore believe new laws are not necessary?

Yueh-Hsuan Weng: First of all, the morality of the “Humanoid Morality Act” is human-centered. For the foreseeable future, robots will be “objects of law” even if they can’t feel or think. We may still need new laws to govern their daily interactions with human beings.

In addition, from the perspective of risk management, one possibility could be to develop a “Robot Safety Governance Act,” an extension of existing machine safety regulations. These technical norms, located at the bottom of “Robot Law,” will ensure the safety of the new human-robot co-existence.

 

Q. Prof. Virk, unlike existing industrial robot safety standards, ISO 13482 is the world’s first safety standard for service robots. Furthermore, it could also have a structural and influential impact on next-generation robots’ safety certification, product liability, ethics and insurance in the future. What is ISO 13482’s role in realizing safety governance for next-generation robots?

Gurvinder S. Virk: ISO 13482 presents the safety requirements for personal care robots. Personal care robots are defined as contributing directly to the quality of life of humans, rather than being focussed on manufacturing applications. The standard defines the internationally agreed consensus on how manufacturers should design the new robots to allow close human-robot interaction, so that there will be protection against litigation in the event of an accident occurring. This is the main aim of international safety standards: to provide the regulator with rules that have been formulated in an open, democratic manner. Of course it does not cover issues of negligence, incompetence, etc., but if a manufacturer complies with the regulations and has certified evidence to this effect, he will have protection in legal suits against him. This is most important in an area of technology which is rapidly evolving and changing, which means the standards must be reviewed regularly. Normally all standards are reviewed on a 5-year cycle, but as 13482 is so new, we have decided to consider formally within WG7 whether it is already worthwhile to review it, even though it was only published in February 2014. Probably it is too early, but it is interesting to note how soon the international community is thinking about reviewing the new standard.

In addition to the new safety requirements, it is important to be able to classify the sectors in a clear manner, so that all know whether a robot is an industrial robot or a personal care robot, since each must comply with different requirements. As the robot sectors grow and evolve, each with its own regulations, it is likely that there will be confusion between the robot sector boundaries; in some cases it will not be clear if a robot is an industrial robot or a personal care one. For example, consider an exoskeleton robot designed to help the movements of a human. This is a personal care robot (defined as a physical assistant robot in 13482). If it is used to help a worker perform his job in a manufacturing application, does this make the robot an industrial robot? This situation has been discussed, and the consensus is that as the exoskeleton is improving the quality of life of the human worker, and not improving the manufacturing process directly, it is a personal care robot. It is clear other cases will arise where the situation is more tricky and difficult to resolve. This is likely to lead to legal cases if accidents occur. 13482 can provide guidance on such legal issues.

There is also the situation of misuse. If a manufacturer designs a robot as an industrial robot and it is used incorrectly as a personal care robot (or vice versa), problems are likely to arise. The limits on the liability of the manufacturer are unclear, as concerns related to foreseeable misuse of a product need to be addressed. Foreseeable misuse is defined as “use of a machine in a way not intended by the designer, but which can result from readily predictable human behaviour,” but it is not clear what is foreseeable in this sense and what is not… so misuse can be quite unclear! Two examples may help in this respect:

  1. A vacuum cleaner robot used as a real-world Frogger game implementer on real highways. I cannot imagine this could have been foreseen by anybody (my opinion).
  2. A vacuum cleaner trapping the hair of a person who was sleeping on the floor in Korea. Sleeping on the floor is common in Korea, so having a vacuum cleaner operating where people are sleeping could have been foreseen by the manufacturer (my opinion)!

ISO 13482 does not consider ethical issues, and it would be useful to have some guidance on such issues globally. However, there are likely to be regional and national views which will differ, and so ranges of possible uses and de-limiters would be good to have. I am not sure how we can get such ethical perspectives.

 

Q. According to Dr. Weng’s proposal, we might need two special regulations for next-generation robots: the “Humanoid Morality Act” and the “Robot Safety Governance Act.” Do you believe that better regulation of robot safety would require safety requirements to comply with administrative laws (i.e., EC Directives), rather than keeping them as voluntary requirements?

Gurvinder S. Virk: I am not sure about this. EC Directives are law, and whether we need EC Directives for robot products is unclear at present. Maybe robots are OK to be treated as “any other product” at the moment, but when the degree of autonomy has advanced much more, maybe we will need to think of more specific rules and regulations to accommodate advanced intelligent robots and robot systems. Currently the regulatory framework cannot handle such advanced autonomous systems. Also, systems which have the ability to adapt and learn from experience are not covered by current regulations. This is because currently systems are tested at a point in time and certified as such; if a system has the ability to learn, its software will change due to the new capability it has acquired through self-learning, and it loses its certification. Hence having a self-learning mode is not a good option for commercially sold products if safety issues arise.

 

Q. Could you please explain the Risk Assessment mechanism from the ISO safety framework for service robots?

Gurvinder S. Virk: There is no risk monitoring mechanism as such, but rather a risk assessment and risk reduction methodology, which I will talk about. I hope this is OK.

There is a type A standard (ISO 12100, Safety of machinery – General principles for design – Risk assessment and risk reduction). Type A means it applies to all machines, which is a huge area, and robots have up to now essentially been designed and regulated as machines. Medical robot regulations are in the pipeline but have not as yet been published.

12100 presents a methodology which has to be adopted in the design of machine products to ensure safety issues can be addressed. The process is structured to assist designers and manufacturers via the 3-step method, which is as follows:

Step 1: Inherently safe design measures. This means that, wherever possible, only inherently safe machines should be designed.

Step 2: Safeguarding and complementary protective measures. This process identifies all the hazards and what harm can be caused under single-fault conditions, and introduces modifications to the design to reduce the likelihood of harm to a point where it is acceptable.

Step 3: Information for use. This is information presented to the user to indicate any remaining issues that need to be considered in order to operate the machine in an acceptably safe manner.

This process is expected to be followed to ensure that no unacceptable risk exists that is likely to cause harm during use.
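As a rough illustration of how the 3-step method shapes a design workflow (this sketch is not part of ISO 12100 itself; the hazard list, scores, and acceptability threshold are all invented for demonstration), the loop of assessing each hazard, applying protective measures, and passing any residual risk to the step-3 information for use can be written as:

```python
# Illustrative sketch of risk assessment and 3-step risk reduction.
# Severity/probability scores and the threshold are invented assumptions.

ACCEPTABLE_RISK = 4  # assumed threshold: severity * probability

hazards = [
    # (name, severity 1-5, probability 1-5, measures with probability reduction)
    ("crushing between arm and wall", 5, 3,
     [("inherently safe design: limit joint force", 2),   # step 1
      ("safeguard: protective stop on contact", 1)]),     # step 2
    ("tripping over charging cable", 2, 4,
     [("safeguard: cable cover", 2)]),
]

information_for_use = []  # step 3: residual risks communicated to the user

for name, severity, probability, measures in hazards:
    for measure, prob_reduction in measures:
        if severity * probability <= ACCEPTABLE_RISK:
            break  # risk already acceptable, no further measures needed
        probability = max(1, probability - prob_reduction)
        print(f"{name}: applied '{measure}', risk now {severity * probability}")
    if severity * probability > ACCEPTABLE_RISK:
        information_for_use.append(name)  # warn the user about residual risk

print("Residual risks for the user manual:", information_for_use)
```

Note that in this toy model the crushing hazard stays above the threshold even after both measures, so it ends up in the information for use, mirroring how step 3 handles risk that design and safeguarding cannot eliminate.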

 

Q. As a long-term consideration, do you believe that it is important to consider a safety-abiding “Ethics by Design” principle, embedded in code, to limit autonomous robots’ behavioral risks?

Christof Heyns: Those who programme robots that can hurt people must certainly take ethical considerations into account.

Gurvinder S. Virk: Yes, I think ethics should be introduced into the design process. Currently international standards/regulations do not do this. The UK is working on a national document on robot ethical design: BS 8611, Guide to the ethical design and application of robots and robotic systems. The work extends the safety assessment procedures of ISO 12100 to prevent “harm” by defining “ethical harm” and using the same approach as ISO 12100 to develop an ethical risk assessment and risk reduction process. The key definitions are as follows:

  • Harm: physical injury or damage to health
  • Ethical harm: anything likely to compromise psychological and/or societal and environmental social wellbeing

Clauses are developed in BS 8611 for many key ethical issues, such as privacy and confidentiality, human dignity, cultural diversity, legal issues, medical, military, etc.

The document is currently being developed but is expected to be published as a UK document soon.

Yueh-Hsuan Weng: Yes, I believe that for a human-robot co-existence society to exist in the future, an “Ethics by Design” principle within embedded code to limit autonomous robots’ behavioral risks is inevitable.

However, for highly autonomous robots that behave like human beings, it is problematic to entrust robot manufacturers with applying the principle under a policy in which the code of ethics is merely a responsibility associated with the job; this will not be enough to ensure safety. In such scenarios, robot manufacturers have to take consumers’ preferences as a priority, otherwise they may lose market share to their competitors.

In this case we should consider a “Code is Law” policy – the code of ethics should not simply be the manufacturers’ self-imposed responsibility, but should become part of statute law or “Technical Norms.” Although this would enable the code of ethics to be well supervised during the design stage, a major problem remains: how to give the code of ethics legal effect while keeping a balance between many conflicting interests.

 

TECH and LAW Center: http://www.techandlaw.net/news/6196.html

ROBOLAW.ASIA Initiative: http://www.robolaw.asia

 

Incident of drunk man kicking humanoid robot raises legal questions


Pepper is described as an “engaging, friendly companion that can communicate with people through the most intuitive interface we know: voice, touch and emotions.” Credit: Aldebaran, SoftBank, Corp.


A few weeks ago, a drunk man in Japan was arrested for kicking a humanoid robot that was stationed as a greeter at a store of SoftBank, Corp., the company that develops the robots. According to the police report, the man said he was angry at the attitude of one of the store clerks. The “Pepper robot” now moves more slowly, and its internal computer system may have been damaged.

Under current Japanese law, the man can be charged with damage to property, but not injury, since injury is a charge reserved for humans. Dr. Yueh-Hsuan Weng, who is cofounder of the ROBOLAW.ASIA Initiative at Peking University in China, and former researcher of the Humanoid Robotics Institute at Waseda University in Japan, thinks a better charge lies somewhere in between.

Weng is advocating for special robot laws to address the unique nature of human-robot interactions. He argues that humans perceive highly intelligent, social robots like Pepper (which can read human emotions) differently than normal machines—maybe more like pets—and so the inappropriate treatment of robots by humans should be handled with this in mind.

The biggest moral concern, Weng explains, is that the current laws “do not help human beings to project their empathy while interacting with humanoid robots.” He explains the problem in greater detail in a review at the Tech and Law Center website:

“The incident has been received with immense scrutiny from the public, as it is regarding a human-like sociable machine that was inappropriately treated,” Weng wrote. “If the object had been an ATM or vehicle, the moral impact would be much less, as an evolved set of ethical principles for sophisticated and intelligent machinery like Pepper have yet to be developed.”

Working to develop such ethical principles, Weng has previously proposed that humanoid robots should have a legal status that is different than that of normal machines. He suggests that humanoid robots be legally regarded as a “third existence,” in contrast to the status of humans as the “first existence” and our normal machines and property as the “second existence.” This distinction would allow for special treatment of human-robot incidents, which Weng believes will be essential in the future.

“In regards to the Pepper incident, the humanoid robot Pepper is recognized as an ‘Object of Law’ under the current Japanese legal system,” Weng told Phys.org. “Therefore, it is not possible to apply the Article 204 (Injury) of Japanese Penal Code. On the contrary, the man could be sued by the Article 234-2 (Obstruction of Business by Damaging a Computer) or the Article 261 (Damage to Property). As for civil law, based on the Article 709 (Damages in Torts) Pepper’s owner, SoftBank, can claim economical compensation from the man regarding any damages resulting as a consequence of the attacked Pepper robot. So, in the near future we might sense the problem if we still keep this ‘second existence’ policy for dealing with sophisticated robots, since it violates our common sense for interacting with them—for example, Article 204 (Injury) of Japanese Penal Code cannot be applied.”

Whether a robot can be legally “injured” or not is debatable, and raises the question of what exactly robot laws should look like. Based on his research, Weng has proposed two special regulations for robots. First, a “Humanoid Morality Act” would define a proper relationship between humans and robots, including the use of coercive power to constrain unethical applications. Second, a “Robot Safety Governance Act” would extend current machine safety regulations to protect the safety of both humans and robots.

Weng also cautions against overregulation, recalling the Red Flag Laws implemented by the UK in the 19th century for operating the first automobiles. These overly conservative laws required a flagman to walk in front of every car to warn pedestrians of the coming vehicle. The laws had the unintended consequence of hindering innovation of the UK’s burgeoning automobile industry, which was later surpassed by countries such as Germany and France where the laws were not as strict.

Like automobiles in the late 19th century, today robots are a rapidly growing technology that have the potential to lethally harm humans. For this reason, Weng believes that legislators today face a similar dilemma of finding the right balance between protection and innovation.

To find this balance, and to better understand interactions, Japan has been slowly integrating robots into society in several “Tokku” special zones over the past 10 years. As Weng and coauthors show in a study published earlier this year in the International Journal of Social Robotics, observations from these experimental zones will help legislators frame appropriate laws that protect both humans and our unique, semi-autonomous creations.

October 2, 2015, by Lisa Zyga

http://techxplore.com/news/2015-10-incident-drunk-humanoid-robot-legal.html

Regulation of Unknown: Does the Humanoid Robot “PEPPER” need Red Flag Laws?

Abstract
Pepper the robot, developed by SoftBank and manufactured by Foxconn, is able to socially interact with human beings based on its emotion-reading and learning capabilities. It even has a biomorphic shape, just like us. However, a Pepper robot was attacked by a drunkard in Japan in September 2015. This incident is worth discussing, as there may be clues for us to think about the emerging issue of robots becoming members of our society. Besides, the incident has received immense scrutiny from the public, as it concerns a humanlike sociable machine that was inappropriately treated. Was there anything wrong with Pepper’s emotion-reading function, or was the man at fault? Furthermore, do we need a new “Robot Law” to regulate the design, manufacture, selling, and usage of advanced robotics? These issues are unavoidable for establishing the human-robot co-existence society.

SoftBank Corp's human-like robot named 'Pepper' is displayed at its branch in Tokyo, June 6, 2014. Japan's SoftBank Corp said on Thursday it will start selling human-like robots for personal use by February, expanding into a sector seen as key to addressing labour shortages in one of the world's fastest-ageing societies. The robots, which the mobile phone and Internet conglomerate envisions serving as baby-sitters, nurses, emergency medical workers or even party companions, will sell for 198,000 yen ($1,900) and are capable of learning and expressing emotions, SoftBank CEO Masayoshi Son told a news conference. Credit: REUTERS/Yuya Shino

Pepper the robot, developed by SoftBank and manufactured by Foxconn, is able to socially interact with human beings based on its emotion-reading and learning capabilities via cloud computing. Pepper is a highly intelligent machine that can read human emotions, respond to our inquiries, and interact with human beings. It even has a biomorphic shape, just like us. However, a Pepper robot was recently attacked by a drunkard in Japan. Was there anything wrong with Pepper’s emotion-reading function, or was the man at fault?

According to the Japan Times from Yokosuka city, Kanagawa prefecture, on a Sunday morning in September, a drunken man entered a local SoftBank store, and he kicked a Pepper robot stationed there. The man was soon arrested by the police. He has admitted to damaging the robot, claiming that he did not like the attitude of a store clerk. Though the clerk was not injured, the damaged robot now moves slower than its original interaction speed.
This incident is worth discussing, as there may be clues for us to think about the emerging co-existence issue of robots becoming members of our society. The incident has received immense scrutiny from the public, as it concerns a humanlike sociable machine that was inappropriately treated. If the object had been an ATM or vehicle, the moral impact would have been much less, as an evolved set of ethical principles for sophisticated and intelligent machinery like Pepper has yet to be developed.
A lesson can be learned from the 19th century, the era before the end of horse-drawn transportation. The origin of human and horse co-existence can be traced to horses ridden by nomadic herders in Central Asia 5,000 years ago. With the accompanying inventions of bits, collar harnesses, and coaches, horse-drawn transport gradually became the dominant mode of land transportation. Eric Morris has also pointed out that horses were absolutely essential to the functioning of 19th-century Western cities, mainly for personal transportation, freight haulage, and mechanical power. In the meantime, the rise of steam engine technology brought new possibilities for personal transportation. Richard Trevithick invented the world’s first self-propelled passenger-carrying vehicle, the “London Steam Carriage,” in 1803, and later the Stockton and Darlington Railway, the world’s first public railway to use steam locomotives, opened in 1825. Unlike steam locomotives, which had their own independent railway networks, steam-powered automobiles had to be operated and tested in human living areas, especially on public roads. This raised many new social concerns, such as how to limit the speed of self-propelled vehicles and how to ensure pedestrians’ safety. For example, if a horse carriage meets a steam car face to face, what happens next? How can we prevent a horse from being scared by a steam car’s emitted vapor?
In the mid-19th century, the UK Parliament passed a series of laws, the “Red Flag Laws,” to regulate steam-powered automobiles. These laws were in force from 1861 to 1896. However, the regulations were cautious and conservative. For example, the law required at least three people to be in charge of the automobile’s operation: the driver, the engineer, and a flagman. Furthermore, the flagman had to walk no less than 60 yards ahead of the moving car, waving a red flag or carrying a lantern to warn horse riders and pedestrians and so ensure safety.
Unfortunately, the effects of the regulation were disappointing. Beyond the red flag requirement, other strict rules were imposed, including a speed limit of 2-4 mph (3-6 km/h) and additional tolls for vehicles with non-cylindrical wheels. Though such over-regulation can effectively reduce the risks of emerging technologies, it can also stifle innovation and the progress of an industry. Researchers believe the adoption of the Red Flag Laws helps explain why the UK's automobile industry fell behind Germany's and France's. Ironically, the world's first steam-powered passenger-carrying vehicle came from the UK.
In regards to the Pepper incident, the humanoid robot Pepper is recognized as an "object of law" under the current Japanese legal system. Therefore, Article 204 (Injury) of the Japanese Penal Code cannot be applied. Instead, the man could be charged under Article 234-2 (Obstruction of Business by Damaging a Computer) or Article 261 (Damage to Property). As for civil law, under Article 709 (Damages in Torts), Pepper's owner, SoftBank, can claim economic compensation from the man for any damages resulting from the attack on the Pepper robot.
Given the analysis above, an emerging question is whether we should consider new regulation addressing service robots. Under the current legal system, service robots are merely property, or "the second existence"; this is not enough to manage the safety and moral risks of human-robot co-existence. In other words, a new regulatory perspective should be established on the premise of service robots as a "third existence" legal entity: robots remain objects of law, but they should have a special legal status different from ordinary machines. However, the difficulty of implementing new regulation for service robots resembles that of regulating steam-powered cars in the 19th century: it is a "regulation of the unknown." On one hand, such machines could cause lethal harm to human beings without proper regulation. On the other hand, it is difficult for regulators to keep up with the progress of advanced technologies. There is therefore a tendency toward over-regulation, as in the case of the steam-powered cars.
To avoid repeating the Red Flag Laws in the era of intelligent robots, we can first consider "deregulation," with reference to the "Tokku" RT special zone. Such a special area can help regulators and manufacturers uncover many unexpected risks during the final stage before practical application. Originating in Japan, the RT special zone has a history of merely 10 years, but special zones have already been established in Fukuoka, Osaka, Gifu, Kanagawa, and Tsukuba. As robotics develops and permeates society, the importance of the special zone as an interface between robots and society will become more apparent.
On the other hand, we should be aware of the importance of public law and regulation. This does not refer to debates over robot rights or recognizing robots as subjects of law under the Constitution; rather, it means creating public regulation for the design, manufacture, sale, and use of advanced robotics. One possibility is a "Robot Safety Governance Act," an extension of current machine safety regulations. These technical norms, located at the bottom of "Robot Law," will ensure the safety of the new human-robot co-existence.
As robotic technology expands into human living spaces, law and ethics will become ever more important. A "Humanoid Morality Act" could reduce the gray zones and moral disputes surrounding the use of service robots. This act, which should sit at the top of "Robot Law," would define a proper relationship between humans and robots and use coercive power to constrain unethical applications of humanoid robotics and cyborg technologies. It would establish a fundamental norm for regulating daily human-robot interactions. Clues to the potential demand for a "Humanoid Morality Act" can be found in the Pepper incident.
Bio-inspired robotics refers to designing robots based on nature, especially the biological mechanisms of animals: a tiny flying robot based on a fly's wing flapping, a wall-climbing robot inspired by a gecko's feet, or a soft robot modeled on an octopus's locomotion. In the end, anthropomorphism will be an unavoidable path for bio-inspired robotics: a robot that looks like a human, a robot that walks like a human, or, in the near future, both. However, humanoid robots might bring more moral risks than other bio-inspired robots. Take the sci-fi film "Vice" as an example: a businessman designs a law-free resort, "Vice," where customers can act out their wildest and most unethical desires with robots that look, think, and feel like humans.
While sexual intercourse with humanoids, enslaving humanoids, and mistreating them all involve strong moral disputes, there is also a legal gap around using humanoids as tools to harass or bully human beings. When I gave a presentation at the European University Institute (EUI) earlier this year, I asked the audience what they would want to do if they had a human-like robot. A Ph.D. researcher in law humorously said he would make fun of me by fooling around with a humanoid robot made in my likeness. It could be fun to use a robot this way. However, there is a worrying gray zone between humor and humiliation, and sometimes human malice defeats moral discipline. We should remember that the Canadian girl Amanda Todd killed herself at the age of 15 because of cyberbullying. The difference between the Internet and robots is that the former can be a platform for distributing hate speech, while the latter can be a bullying tool that mixes malice from both the virtual and physical worlds. Therefore, considering special regulation beyond moral discipline for daily human-robot interactions will be important for our future.
Finally, robot ethics and legal regulation should not always run in parallel, because from the regulatory perspective robot law is a union of robot ethics and robotics. We may not need Red Flag Laws for Pepper robots, but that depends on the moral stands and actions we take toward the regulation of the unknown.

Yueh-Hsuan Weng, Ph.D. (Peking Uni.)
TECH and LAW Center, Milan
ROBOLAW.ASIA Initiative, Beijing

Japan’s Robot Policy and the Special Zone for Regulating Next Generation Robots

East Asian countries are preparing for the implementation of robotic technologies in daily environments. Early this month, South Korea's HUBO team won the DARPA Robotics Challenge, and Beijing announced its "Made in China 2025" national strategic plan, which aims to make China the world's leading industrial power. To compete with other countries, Japan's Prime Minister Shinzo Abe called for a "Robot Revolution" in September 2014, followed by the publication of "Japan's Robot Strategy" in January 2015 and the launch of the Robot Revolution Initiative (RRI) in May 2015.

For the past decade, Japan's Ministry of Economy, Trade and Industry (METI) has been in charge of developing a series of robot policies across the domains of business, innovation, and safety. "Japan's Robot Strategy" is the latest comprehensive policy guideline for regulating robotics: a five-year strategic plan that aims to promote the nation's competitiveness through both regulation and "deregulation."

The guideline encourages the development of "Artificial Intelligence (data-driven and brain-like)," "sensing and recognition technology," "mechanism, actuator, and control technology," and "OS and middleware." These core technologies will be crucial for developing "Next Generation Robots" and could be applied to a wide range of real-world sectors such as manufacturing, services, nursing and medical care, construction, and agriculture.

In regards to law and robotics, the Robot Revolution Realization Council systematically inspected potential problems in the legal system, revisiting many existing regulations: the "Radio Law," "Pharmaceuticals and Medical Devices Law," "Industrial Safety and Health Act," "Road Traffic Law," "Road Transport Vehicle Act," "Civil Aeronautics Act," "Unauthorized Computer Access Law," "Consumer Products Safety Act," "ISO 13482 Safety Standard for Life-supporting Robots," and "Industrial Standards Law."

In response, the council proposed the "Implementation of Robot Regulatory Reform" as a guideline. There are two strategies of regulatory reform. The first eases current regulations by creating new legal frameworks or utilizing testing environments. The other establishes the legal framework required from a consumer-protection perspective. In addition, field testing of robots is an essential part of deregulation, because it helps regulators and manufacturers uncover unexpected risks during the final stage before practical application.

The world's first "Special Zone for Robot Development and Practical Testing (Tokku)" was approved by the Cabinet Office of Japan on November 28, 2003. From 2004 to 2007, the Takanishi Laboratory at Waseda University's Humanoid Robotics Institute conducted empirical tests in several areas of the special zone to evaluate the feasibility of bipedal humanoid robots on public roads. These are also known as the world's first public-road tests of bipedal robots.

The special zone has a history of merely 10 years, but special zones have already been established in Fukuoka, Osaka, Gifu, Kanagawa, and Tsukuba. As robotics develops and permeates society, the importance of the special zone as an interface between robots and society will become more apparent.

A joint study by Waseda University's Humanoid Robotics Institute and Peking University Law School was published in the International Journal of Social Robotics this year. It is a case study of the legal impacts on bipedal humanoid robots in the special zone, and it yielded many interesting findings.

Based on our analysis, we proposed a three-level hierarchy of "Robot Law" organized into "The Robot Safety Governance Act," "The Humanoid Morality Act," and "Revisions." First, "The Robot Safety Governance Act" is an extension of current machine safety regulations. These technical norms, located at the bottom of "Robot Law," will ensure the safety of the new human-robot co-existence. There will be two main challenges in integrating a code of ethics into the robot safety regulatory framework. From a technical perspective, we must consider how to embed ethics into robots through a feasible framework other than Asimov's Three Laws of Robotics, such as the "Ethical Governor" proposed by Georgia Tech's Ron Arkin. The other challenge to realizing a programmed code of ethics depends on lawmakers' and regulators' attitudes toward the emerging "Ethics by Design" principle. As for "The Humanoid Morality Act," which should sit at the top of "Robot Law," it will define a proper relationship between humans and robots and use coercive power to constrain unethical applications of humanoid robotics and cyborg technologies. It will establish a fundamental norm for regulating daily human-robot interactions, and its importance will grow as robotics applications in human society expand. Finally, "Revisions" refers to existing laws that must be revised because they conflict with advanced robotics. This is strongly connected with deregulation; the areas concerned may include privacy protection laws, road traffic acts, international humanitarian law, and tort law.


Two Pyramids for Robot Regulation

We also verified the existence of the "open-texture risk" concept in this case study. We can thus use it in regulating "third existence" robots, since it can mark a boundary between robots with action intelligence and those with autonomous intelligence (but still short of the Singularity). From the perspective of open-texture risk, the physical injuries and damage caused by autonomous "third existence" robots can be seen as the outcome of complex interactions between non-linear decision-making and entities in an unstructured environment. This may create a "liability gap," because it is difficult to attribute their judgments.


WABIAN-2R, Takanishi Laboratory, Waseda University

For example, at the Fukuoka TNC TV Building, WABIAN-2R fell down on a bumpy surface with tiles angled 2°-5° downward along the forward axis. In this case, WABIAN-2R's Walking Stability Controller dynamically adjusted the robot's body balance based on a built-in offline walking pattern and data from sensors monitoring the surrounding environment. Different groups of lawyers could see the resulting autonomous behavior either as a "function" of the product or as a "decision" made by WABIAN-2R's Walking Stability Controller. Either way, the result was beyond its designers' expectations, and the two definitions differ greatly from a legal perspective. If the behavior is defined as a product malfunction, the physical injuries or damage caused by the Walking Stability Controller become the manufacturer's liability. However, open-texture risk makes it newly difficult to establish "product defects." The risks of such robot behavior are difficult to reduce during the design and manufacturing stages, so manufacturers must provide more comprehensive usage information to avoid liability. Yet the appropriate amount of information to provide at the point of sale remains controversial, since robotics is still in its infancy. A guideline drawing the boundaries of liability between users and manufacturers regarding the obligation to provide product information is therefore necessary. If, on the other hand, we regard this autonomous behavior as a decision made by WABIAN-2R's Walking Stability Controller, the question becomes who should bear tort liability for the physical injuries or damage. As the adaptability of autonomous robots grows, their character as a third existence with autonomous intelligence but no self-awareness will become more apparent. Until autonomous robots are fully developed, might we treat third-existence autonomous robots like pets, with the owner assuming all liability? This may be appropriate, since the robot itself lacks the subjectivity to bear legal liability.
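The control behavior described above, blending a precomputed (offline) walking pattern with an online correction derived from sensor data, can be sketched roughly as follows. This is a hypothetical, highly simplified illustration: the function name, the joint-angle representation, and the proportional-correction model are all assumptions for exposition, not WABIAN-2R's actual algorithm.

```python
# Hypothetical sketch of a walking-stability control step.
# Assumption: joint targets are degrees, and balance is corrected by a
# simple proportional term that leans the body against the sensed slope.

def stabilized_step(offline_pattern, sensed_tilt_deg, gain=0.5):
    """Blend precomputed joint targets with an online correction
    derived from the ground tilt reported by the robot's sensors."""
    correction = -gain * sensed_tilt_deg  # lean against the slope
    return [target + correction for target in offline_pattern]

# Example: on a 4-degree forward slope, every joint target in the
# offline pattern is shifted by -2 degrees to compensate.
adjusted = stabilized_step([10.0, 20.0], sensed_tilt_deg=4.0)
```

The legal ambiguity discussed above arises precisely because the output of such a loop depends on live sensor input: the same offline pattern can yield behavior, on an unanticipated surface, that the designers never explicitly programmed.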

Read the article "Intersection of 'Tokku' Special Zone, Robots, and the Law: A Case Study on Legal Impacts to Humanoid Robots," International Journal of Social Robotics (2015)

 

Yueh-Hsuan Weng, Ph.D. (Peking Uni.)
Tech and Law Center, Milan
ROBOLAW.ASIA, Beijing

[Expert Interview] Cheung Kong Graduate School of Business (CKGSB) Interview – ROBOTS IN CHINA: THE BOT CONNECTION


 

by Greg Isaacson

How the growing use of robots in China will impact different sectors of the economy, as well as the country’s robot makers.

It's a well-depicted scene: a skilled surgeon is performing delicate open-heart surgery in an amphitheatre filled with esteemed colleagues and eager-to-learn residents. The lights illuminating the procedure are bright and hot, calling forth small beads of sweat on the surgeon's forehead. The doctor says: "Scalpel." Only instead of human hands, it's two robotic arms that extend a tray of tools in response to the surgeon's request. No, this is not a summer sci-fi blockbuster, but rather what's happening in China's foremost medical research institutions, and in full practice at Beijing's Navy General Hospital, where robots are the first ones to crack open skulls for brain surgery.

Read More at

http://knowledge.ckgsb.edu.cn/2014/11/11/technology/robots-in-china-the-bot-connection

++ Center News ++ The 3rd Peking-Stanford-Oxford Joint Conference on Internet Law and Public Policy

We stand at the window of a fast-moving era, at once uncertain and full of hope. On one track, the opportunities created by Internet innovation are dazzling; on the other, entirely new problems keep us on thin ice: Should net neutrality be upheld in network governance? How can citizens' personal information be protected in the era of big data? Internet + finance = ? Do Internet company IPOs require new corporate governance mechanisms? Should software really be patentable? Does the development of the Internet need rules of the sky or competition of the jungle?

On November 22-23, 2014, the "2014 Peking-Stanford-Oxford Conference on Internet Law and Public Policy," jointly organized by Peking University Law School, Stanford Law School, and the Oxford Law Faculty, will gather top scholars, officials, corporate executives, and lawyers from China, the US, and Europe beside Weiming Lake to reflect with you on the future of the Internet.

In early winter, we invite you to join us in the academic lecture hall of the Kaiyuan Building at Peking University Law School, to enjoy the plum blossoms and snow while discussing the future.

Special thanks to the Tencent Research Institute for Internet and Society for its generous support.

To register, please email psoconference@pku.edu.cn with your name, affiliation, title, and contact information; registration is confirmed upon receipt of a confirmation reply. Conference check-in: November 22, 2014, 1:30-2:00 p.m.


[Technology Map] RSJ's Map of Japanese Robotics Research and Development

「日本のロボット研究開発の歩み」 (The History of Japanese Robot R&D) is a public information website built by the Robot R&D Archive Executive Committee (ロボット研究開発アーカイブ実行委員会), led by Atsuo Takanishi, vice president of the Robotics Society of Japan (RSJ). It collects and organizes representative and historically significant examples from the Japanese robotics community over the years. Based on the 「日本のロボット研究の歩み」 (History of Japanese Robot Research) DVD published in 2002 for the RSJ's 20th anniversary, the site has further expanded its collection of outstanding Japanese robotics work, offering full-text papers, videos, and search functions, in an effort to preserve valuable references for future generations.


日本のロボット研究開発の歩み – technology map: http://rraj.rsj-web.org/ja_history

Robotics laboratories at Japanese universities: http://www.rsj.or.jp/rij/