In 2013 in La Crosse, Wisconsin, Eric Loomis was sentenced to six years in prison after he pleaded guilty to eluding police in a stolen car. He denied any involvement in a drive-by shooting involving the vehicle. The sentence was at the upper end of the guidelines for the crime to which he pleaded. The judge’s sentence was based, in part, on the recommendations of an artificial intelligence (“AI”) program whose algorithms assess the risk of violence and recidivism.
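The actual sentencing tool is proprietary, but the general idea of an actuarial risk score can be sketched as a weighted sum of questionnaire answers mapped to a coarse risk tier. Everything below (the item names, weights, and cutoffs) is invented for illustration and does not describe the real program.

```python
# Hypothetical sketch of an actuarial risk score: a weighted sum of
# questionnaire answers mapped to a risk tier. The real sentencing tool
# is proprietary; all item names, weights, and cutoffs are invented.

WEIGHTS = {
    "prior_arrests": 2.0,
    "age_at_first_offense_under_18": 3.0,
    "unemployed": 1.5,
}

def risk_score(answers):
    """Sum the weights of every item answered 'yes' (truthy)."""
    return sum(w for item, w in WEIGHTS.items() if answers.get(item))

def risk_tier(score):
    """Map a raw score to a coarse tier using invented cutoffs."""
    if score >= 5.0:
        return "high"
    if score >= 2.0:
        return "medium"
    return "low"

answers = {"prior_arrests": True, "unemployed": True}
print(risk_tier(risk_score(answers)))  # 2.0 + 1.5 = 3.5 -> "medium"
```

The opacity Loomis complained of is visible even in this toy: unless the weights and cutoffs are disclosed, a defendant cannot tell why the score came out as it did.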
Not only did Loomis appeal, but the AI community also criticized the decision, raising questions about the role “big data” plays in everyday decisions. Loomis’s appeal complained that he was given no chance to evaluate the algorithms, and he claimed that the assessment’s reliance on his gender violated his due process rights. At a court hearing, an expert testified that such algorithms tend to overestimate an accused’s risks.
Henry Kissinger says of the use of AI that it is “how the Enlightenment ends.” Stephen Hawking cautioned that it could mean the end of the human race. One concern is that a despot could use it to “enforce censorship, micro-target propaganda and impose society-wide controls on citizens.” Used inappropriately, it could act indiscriminately and invasively, and AI algorithms used in social media can deepen social tensions.
“People worry that society is handing over decision-making to secret software codes – essentially instructions – that have no understanding of the context, meaning or consequences of what they do.” The fear is that the algorithms magnify the biases and flaws of their human coders and of the data fed into them.
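The worry that algorithms magnify the flaws of their input data can be made concrete with a toy example: a naive frequency-based “predictor” fitted to skewed historical records simply reproduces the skew. The groups and numbers below are fabricated for illustration.

```python
# Toy illustration of data bias: a naive predictor trained on skewed
# historical records reproduces the skew. All data is fabricated.

# Fabricated history: group A was policed more heavily, so it appears
# in the rearrest records more often regardless of actual behavior.
history = [("A", "rearrested")] * 60 + [("A", "not")] * 40 \
        + [("B", "rearrested")] * 20 + [("B", "not")] * 80

def rearrest_rate(group):
    """The 'model': echo the historical rearrest rate for the group."""
    records = [outcome for g, outcome in history if g == group]
    return sum(1 for o in records if o == "rearrested") / len(records)

print(rearrest_rate("A"))  # 0.6
print(rearrest_rate("B"))  # 0.2
# A member of group A is scored as three times riskier. The bias in the
# data, not anything about the individual, drives that prediction.
```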
The use of AI in decisions such as Mr. Loomis’s sentence raises a host of serious ethical issues. The biggest are the use of AI to edit genes in babies, its use in warfare, and data gathering. Notably, Google employees forced the company to quit a project applying AI to drone attacks, and many wonder how employees can have such an effect on the military’s ability to protect the nation’s interests. A key ethical issue is how much power AI companies should have within a democratic nation-state with definable borders.
The EU has set up an AI code of ethics. Many AI companies are developing principles for AI use and setting up ethics boards to monitor its deployment. Platforms are being developed to limit AI’s ability to spread viral extremist content. Data gatherers are protecting their user information while U.S. tech employees rebel against the use of AI in warfare. “A UK inquiry in 2019, for instance, has called for a deterrent-enforced code of conduct on data gathering and for users to have the ability to move their data to other parties and make their data available to competitors.”
Says one AI authority: “Be aware that AI is being deployed at a faster rate than ethical issues can be properly identified and resolved.”
The tech industry is united in fighting privacy exemptions for intelligence agencies “because ‘back doors’ and ‘master keys’ provide openings for hackers.” A critical ethical issue, as yet unanswered, is how much privacy individuals should be forced to give up so that governments can provide safety. Another question is what role private companies should have in setting these limits.
“One example of underhand data gathering is when platforms secretly track users after they leave their sites. Another is the way facial-recognition companies ‘scrape’ photos on the internet – use any published photo – to hone the technology.”
An ethical issue surrounding the use of AI is whether the algorithms can deliver a fair outcome. Other questions involve the transparency of the process and the right of appeal. It is true that algorithms can process data and discover patterns far faster than humans can. Their shortcoming is that AI has no understanding of the cause and effect of its decisions. It lacks common sense, emotion, irony, a sense of humor, imagination, the things that make us human. It has no free will, and biases can be built in. Such systems can be gamed and outsmarted and are not that sophisticated. The ethical dilemma is whether human decision-making should be handed over to a technology that lacks so many human characteristics.
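The claim that such systems “can be gamed and outsmarted” is easy to demonstrate. A naive keyword filter of the sort used to block prohibited content is defeated by one-character obfuscation; the filter and its banned list below are invented for illustration.

```python
# Invented example: a naive keyword filter is trivially evaded by a
# single character substitution, showing how simple rule-based systems
# can be gamed and outsmarted.

BANNED = {"attack"}

def flagged(text):
    """Flag the text if any banned word appears as a whole token."""
    return any(word in text.lower().split() for word in BANNED)

print(flagged("plan the attack now"))  # True  -- caught
print(flagged("plan the att4ck now"))  # False -- evaded by one character
```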
Given society’s fascination with (some might say addiction to) AI, it is fairly certain that the technology will not go away any time soon. Only the future will tell the outcome of the ethical questions its use raises.
From: AI: How the Enlightenment Ends, Livewire, https://www.livewiremarkets.com/wires/ai-how-the-enlightenment-ends, May 16, 2019.