General AI should have fundamental rights
Fundamental rights shouldn't apply to non-human beings.
The relevance of human rights would depend on how AI works and may be a poor fit. We cannot know until we actually produce a general conscious AI.
The rights of AI should be grounded in their own standing, not derived by simply extending human rights.
A non-biologically reliant definition of life could be distilled as ‘an independent existence as an animate being’. A conscious AI satisfies this definition.
Even if AI is not defined as 'alive' by society, semantics are not a sound basis for ethical determinations. Society could simply be wrong.
Definitional aspects of being biologically alive are irrelevant to the qualification for rights and protections.
Animal rights apply (as they ethically should) to non-humans. Any sentient being has as much a right as humans to avoid suffering and pursue well-being.
Rights should apply to anything capable of well-being or suffering.
To make this claim is to be speciesist. Peter Singer explains speciesism as a logical fallacy that grants humans rights that no other species can attain, simply because a human is a human (circulus vitiosus).
There would be no significant difference between a conscious AGI and humans.
Rights should increase with agency and self-reflection. Depth of intelligence, awareness, and comprehension on par with humans or better are characteristics indicating that a being deserves rights.
Were a close acquaintance revealed to be a machine with a human appearance, yet behaviorally indistinguishable from a human, one could not maintain that the entity was not alive.