The Petrov dilemma: Moral responsibility in the age of ChatGPT
With power comes responsibility. When action is possible, the decision to do nothing accrues responsibility too: we are answerable for the consequences if we choose to stand by, or simply follow orders. Sometimes we must follow rules, authorities, laws or orders, but we should always ask: are they right?

AI such as ChatGPT can provide us with the means to achieve our ends, but it should never dictate what those ends should be. Only humans can make judgements about good and bad, right and wrong. The greatest threat of AI is dehumanisation: blithely accepting its output without taking a first-person stance and evaluating whether that output is justified. We must be active participants engaging with technology, not passive consumers.