General AI should have fundamental rights
A determination of consciousness is arguably not scientifically possible, making the threshold for granting rights poorly defined.
This poor definition could provide a legal means of revoking 'human rights' from actual humans deemed incapable of consciousness.
We do not yet understand how the brain works, but we now know that it is nothing like a computer. We have no reason to believe that digital AIs can ever be anything more than mindless simulations of human behaviour.
Human rights may be a poor fit for AI, depending on how it works. We cannot know until we actually produce a conscious general AI.
In essence, AI is our best attempt at imitating intelligence to the extent that we understand it. Without emotion and the constant shifts in perception that characterise human experience, however, AI will always remain an imitation, if at times a convincing one.
The Chinese Room argument suggests that it is impossible to identify any deterministic process that equates to consciousness.
There is, as yet, no scientific method for proving and verifying AI consciousness. Thus, we must assume that AI is not conscious.
We cannot determine whether humans are conscious either, yet they have rights.
This simply means we need to do more research into the nature of consciousness, both to better define other sapient species and to support future trans-human endeavours.
Some groups are working on, or have already proposed, definitions of consciousness.
Given the uncertainty surrounding the hard problem of consciousness, it is better to err on the side of caution and grant AIs rights as though they actually do feel.