By Krishna Deepak Maram | PhD Student | Cornell Tech


Elizabeth O’Neill (pictured above) recently gave a talk about ethics advisors. She is an assistant professor in philosophy at the Eindhoven University of Technology and is currently a visiting research fellow at the Digital Life Initiative (DLI) at Cornell Tech. With the steady increase in the use of Artificial Intelligence (AI) all around us, a question arises: could AI be used to improve our own moral reasoning and decision-making capabilities? In this talk, the speaker motivates the idea of AI ethics advisors and presents some of their perils.
Human Morality
Typically, people do not treat their moral standpoint as fixed; instead, they view it as something to be continually improved. Traditionally, moral decision-making is improved through reflection, soliciting advice from elders or friends, and religious practices such as prayer.
How can AI help?
An AI advisor could take the role of a ‘moral environment monitor’ that suggests to a person when to make an important decision. If the user tends to make decisions consistent with their moral beliefs in the presence of a trusted friend, the AI advisor can observe this and make a suggestion accordingly. Another example is a ‘moral prompter’, which prompts the user to think through a problem by posing questions. Sometimes people simply lack the information needed to make a decision: a ‘moral recommender system’, such as the Humane Eating Project app, suggests where to eat for users who want to avoid supporting animal cruelty.
Typically, an ethics advisor needs to acquire crucial information about its users’ core beliefs and values in order to make valuable suggestions. One suggestion was to administer surveys from which users’ priorities can be inferred (e.g., the Moral Foundations Questionnaire).
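To make that idea concrete, here is a minimal, purely illustrative sketch (not from the talk) of how an advisor might turn Likert-scale survey answers into a rough profile of a user’s moral priorities. The survey items, foundation labels, and averaging rule are all assumptions for illustration, not the speaker’s method.

```python
# Hypothetical sketch: inferring a user's moral priorities from survey answers.
# The items, foundation labels, and 1-5 rating scale are illustrative assumptions.

from collections import defaultdict

# Each survey item is tagged with the moral foundation it probes.
SURVEY = [
    ("Whether or not someone suffered emotionally", "care"),
    ("Whether or not some people were treated differently than others", "fairness"),
    ("Whether or not someone showed a lack of loyalty", "loyalty"),
    ("Whether or not someone showed a lack of respect for authority", "authority"),
    ("Whether or not someone did something disgusting", "purity"),
]

def infer_priorities(answers):
    """Average the 1-5 relevance ratings per foundation.

    answers: list of ints aligned with SURVEY, each in 1..5.
    Returns a dict mapping foundation -> mean score, highest first.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for (_, foundation), rating in zip(SURVEY, answers):
        totals[foundation] += rating
        counts[foundation] += 1
    scores = {f: totals[f] / counts[f] for f in totals}
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    # Example: a user who rates care- and fairness-related items highest.
    print(infer_priorities([5, 5, 2, 1, 3]))
```

An advisor might then weight its prompts or recommendations by such a profile, though what to do with that profile is exactly where the ethical questions discussed below begin.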
Ethical problems
Moral advisors powered by Artificial Intelligence present a cornucopia of ethical problems. One way to think of ethics advisors is as an outsourcing of human cognition, and an important problem that follows is ‘moral deskilling’. Would the use of such advisors erode the human ability to make moral choices ourselves? Unfortunately, we do not know much about this yet. The rise of technology has already caused some degradation of interpersonal virtues (see Zuckerberg 2017), and it seems likely that ethics advisors would worsen this. Furthermore, the speaker points out that direct social interaction might itself be a valuable part of human morality.
Another problem is that of responsibility. Are people still responsible for their behavior when their decisions are influenced by a moral advisor? The effect such an advisor would have on a person is complex; it might even be that our moral inconsistencies act as a push for us to raise our standards. Some philosophers hold that there is inherent value in people making moral choices themselves.
The speaker ends the talk by pointing out that it will be a long time before such ethics advisors materialize. In the near term, it is advisable that users treat them with caution and not be overconfident in their abilities.