Resolute Square

The Moral Dimension: The Ethics Of Artificial Intelligence

Contributor Gary Hart reflects on the moral and ethical dilemmas arising from AI's rapid development and its impact on society, democracy, and international diplomacy.
Published: August 21, 2023

By Gary Hart
 
         A recent opinion essay on the subject of the day, artificial intelligence, contained this observation: “a technology that has the potential to shape the politics of this century in a way nuclear arms shaped the last one.”

         Even those among us without scientific training are brought up short by such a comparison.

         As if that analogy were not enough, yet another knowledgeable commentator wrote the following: “If someone builds a too-powerful AI under present conditions, every single member of the human species and all biological life on Earth dies shortly thereafter.”

         Invoking the currently topical career of Robert Oppenheimer, the writer of this essay called the use of atomic bombs on Japan a forerunner of AI as a “crossroad that connects engineering and ethics.”

         That should be obvious even to those of us who are not scientists or engineers. But, if so, where is the complex idea of ethics, or moral philosophy, in the emerging discussion?

         I have been a philosophy student all my life, starting almost 70 years ago. And much of that study has involved complex ethical issues or what is often referred to as moral philosophy. Like most other non-scientists, I struggle to understand even the basics of AI. But my efforts to find the ethics crossroad have so far proved futile.

         If serious thought and discussion about the ethics of AI, prompted by comparisons to nuclear weapons of the 20th century, are taking place, they have not made their way into the public forum or everyday political discourse.

         There are philosophy departments in virtually all American universities, great and small. Are lectures being delivered and papers assigned on the ethical implications of AI? If so, why are they not appearing in the popular press?

         Despite some current evidence to the contrary, most Americans are pretty thoughtful people and are particularly keen on right and wrong. In fact, most of us could grasp and discuss issues of right and wrong well before understanding the difference between GPT-4 and an algorithm.

         The author of the opinion essay cited above argues for “a more intimate collaboration between the state (government) and the technology sector and a closer alignment of vision between the two.”

         It is argued here that a third partner should be added: wise men and women steeped in the tradition of ethical thinking dating from Aristotle forward.

         Unless I read American Prometheus too quickly, there was nothing like this at Los Alamos. But shouldn’t there have been?

         After Hiroshima and Nagasaki, and after a bitter attack on his loyalty, Oppenheimer had serious second thoughts about what he had produced at Los Alamos.

         If artificial intelligence is to the 21st century as nuclear weapons were to the 20th, it is not too soon to debate and discuss the ethical implications for mankind, for democratic governance, for right and wrong guardrails around this explosive and, yes, dangerous technology that seems to frighten even its most ardent engineers.

         Aristotle associated ethics with virtue and distinguished between intellectual virtue and virtues of character. Virtue is not developed by technical skills. We cannot create virtuous machines. So, we must insist that those who create the machines are themselves virtuous and have the good of society in mind in their creation and operation.

         It is argued here that, because of its as yet undefined power, AI has an ethical and moral component and must be operated within those dimensions by humans aware of that component.

         The still unanswered questions about AI are innately philosophical. What is the nature of cognition? What is the nature of thought? Will AI develop the capability to make laws or render legal judgments? If so, based on what principles? Will AI have in the foreseeable future the ability to restrict or take away our democratic rights and freedoms?

         America can and should take diplomatic leadership to prevent an AI arms race in coming months and years. To say “if we don’t do it, they (Russia, China, et al.) will” is to invite international competition to be the first country to put its arsenals in the hands of GPT-10.

         Then, AI will be controlling us instead of us controlling it.
