Nine problems with killer robots (and one solution)

Technology should empower all members of society, not reduce us to stereotypes, labels and objects. Used against people, the technologies that enable autonomous weapons will automatically profile, pattern match and process human beings as data. The truth is, machines cannot recognise people as ‘people’. So machines deciding whether or not to subject us to attack is the ultimate form of digital dehumanisation. If we allow this dehumanisation, we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanisation.

Allowing autonomous systems that target people would mean allowing systems to reinforce or exacerbate existing structures of inequality. The prejudices in our society live in our data-sets, our categories, our labels and our algorithms. Killing people based on pre-programmed labels and identities will always pull us towards reinforcing prejudices or structures of oppression. Problematic new technologies are also often tested and used on marginalised communities first. We should be challenging structures of inequality, not embedding them into weapons.

Losing meaningful human control means that the users of weapons are no longer fully engaged with the consequences of their actions. Whether on the battlefield or at a protest, machines cannot make complex ethical choices; they cannot comprehend the value of human life. And this means less space for ‘humanity’.