Virtue Signalling: Reassuring Observers of Machine Behaviour
Claire Benn, Alban Grastien
The Joint Session of the Aristotelian Society and Mind
There are many constraints that machines ought to abide by: not to harm, not to waste resources and so on. In this paper, we introduce another. We argue that, in situations of information asymmetry, partially observed machine systems ought to reassure observers of their behaviour that they understand the constraints they are under and that they have abided, and will abide, by those constraints. Specifically, a system should not follow a course of action that, from the point of view of the observer, is not easily distinguishable from a course of action that is prohibited. We illustrate the problem with two realistic examples and one constructed to provide a unified framework. We then outline the solution, namely the constraint of Reassurance, the strongest version of which states: do not follow a path that, at some point of observation, is similar to a prohibited path. Taking seriously the application of this constraint to machine ethics, we formalise the problem and this solution. We then demonstrate, both technically and in application to our earlier examples, three ways in which the constraint can be made sensitive to variations in risk attitudes. We conclude by justifying what looks like inefficiency (taking a more costly path when a cheaper, permissible path is available) by appeal to signalling theory in economics, biology and the sociology of religion. This theory explains why costly signals are often more effective: they are more honest indicators of features that cannot be directly observed. Thus, while the constraint of Reassurance we propose is costly, it provides an honest signal of virtue.
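The strongest version of the Reassurance constraint stated above (reject any path that, at some point of observation, resembles a prohibited path) can be illustrated with a small toy sketch. Everything here is an assumption for illustration only: the names (`observations`, `reassuring`), the representation of paths as sequences of states, and the choice of "similarity" as equality of what the observer sees at a given step are all hypothetical, not the authors' formalisation.

```python
def observations(path, observe):
    """The sequence of observations an external observer sees along a path.

    `path` is a list of states; `observe` maps a state to what the
    (partially informed) observer can actually see of it.
    """
    return [observe(state) for state in path]

def reassuring(path, prohibited_paths, observe):
    """Toy check of the strongest Reassurance constraint.

    Returns False if, at some point of observation, `path` looks the
    same to the observer as some prohibited path; True otherwise.
    Similarity is modelled here, purely for illustration, as equality
    of the observed value at the same step.
    """
    obs = observations(path, observe)
    for bad in prohibited_paths:
        bad_obs = observations(bad, observe)
        # "similar at some point of observation": the observer sees the
        # same thing at the same step of both paths
        if any(a == b for a, b in zip(obs, bad_obs)):
            return False
    return True
```

On this toy model, a path can be rejected even though it is itself permissible, which is exactly the apparent inefficiency the abstract defends: the system pays a cost (forgoing an ambiguous path) to send an honest signal to the observer.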