
The era of human-machine interaction is approaching: are we ready?
The risks we face when interacting with robots differ from those in human relationships, and this calls for careful reflection. Society needs clear definitions of robot-related concepts, because cultures vary widely in their acceptance of human-robot interaction. We also need to reflect on the ethics and risks involved in the convergence of social robots and potential AI agents possessing "moral agency".
Emma Ruttkamp-Bloem is a philosopher of science and technology, an AI ethics policy advisor, and a machine ethics researcher. She holds a PhD in Philosophy. Emma was a member of the UN Secretary General’s AI Advisory Body. She is the Chairperson of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). Currently, she is the Head of the Department of Philosophy at the University of Pretoria and leads the AI ethics group at the South African Centre for AI Research (CAIR). Emma led the UNESCO Ad Hoc Expert Group that prepared the draft of the 2021 UNESCO Recommendation on the Ethics of AI and contributed to the development of its implementation instruments. She continues working with UNESCO as a member of its AI Ethics without Borders and Women4EthicalAI initiatives. She is a member of the Global Academic Network, Centre for AI and Digital Policy, Washington DC, and has worked on AI governance projects with the African Union Development Agency (AUDA)-NEPAD and the African Commission on Human and Peoples’ Rights (ACHPR). She is a member of various international AI ethics advisory boards, ranging from academia (e.g., the Wallenberg AI, Autonomous Systems and Software Programme Human Sciences) and the inter-governmental sector (e.g., as expert advisory board member for the Global Commission on Responsible Artificial Intelligence in the Military Domain) to the private sector (e.g., SAP SE). She is an associate editor for the Journal of Science and Engineering Ethics, and a member of the editorial boards of the Journal of AI Law and Regulation and the Cambridge Forum on AI: Law and Governance. Emma is a full member of the International Academy for the Philosophy of Science (AIPS).
The risks we face when interacting with robots differ from those in human-to-human relationships, and this is something that must prompt societal reflection.

