In this talk, I discuss ethical problems raised by the implementation of gender-related biases in the design of social robots. In particular, I argue that this design practice carries considerable moral risks and therefore calls for great caution.
As social robots are increasingly adopted and their interaction skills perfected, it is important to critically assess related design choices from an ethical point of view as well. In particular, it is pivotal to shed light on the ethical risks of deliberately exploiting pre-existing social biases in order to build technologies that successfully meet user expectations, engender trust, and blend in with their context of use. Accordingly, I address the question of whether it is ethically permissible to align the design of social robots with gender biases in order to improve the quality of interactions and maximize user satisfaction.
After some introductory considerations on the rationale underlying bias alignment as a design strategy, I investigate possible answers to doubts about its ethical permissibility and evaluate their respective contributions to the effort of aligning social robots with relevant ethical standards. Finally, I draw some concluding remarks concerning design ethics and possible policy recommendations.