AI employees seek protection for exposing risks; ROC contributed to the discussion in the past

June 6, 2024

Photo: terrnews.com

A group of current and former employees of OpenAI and Google DeepMind is calling for protection from retaliation for sharing concerns about the “serious risks” of the technologies these and other companies are building, according to a report from Bloomberg.

These specialists are voicing their deep concern that neither the AI companies nor society at large is ready for the potentially dangerous consequences of AI development, and that there is no government oversight of the process. Employees are among the few who can sound a warning, but they have all signed broad confidentiality agreements, as they stated in an open letter signed by 13 people.

One great cause for concern is that in recent weeks OpenAI dissolved one of its most high-profile safety teams and experienced a series of staff departures. Departing employees had signed non-disparagement agreements, and breaking them to speak according to their consciences would have cost them financially. After some pushback, OpenAI said it would release former employees from the agreement. But as one former employee wrote on X, “employees may still fear other forms of retaliation for disclosure, such as being fired and sued for damages.”

Because AI companies’ activities are not yet subject to regulation, whistleblower programs are unlikely to protect the concerned employees: those programs apply only to reports of illegal activity. And because no legislation yet addresses the potentially dangerous consequences of AI, there is as yet no basis for court action against the companies.

“There's nothing really stopping companies from building AGI [artificial general intelligence, a hypothetical version of AI that can outperform humans on many tasks] and using it for various things, and there isn’t much transparency,” said another former employee, who risked forgoing his equity in order to avoid signing a non-disparagement agreement, Bloomberg reports. “I quit because I felt like we were not ready. We weren't ready as a company, and we weren't ready as a society for this, and we needed to really invest a lot more in preparing and thinking about the implications.”

***

In early March of this year, Metropolitan Kliment (Kapalin) of Kaluga and Borovsk, a member of the Supreme Church Council of the Russian Orthodox Church, stated at the international conference “God-Man-World” that the Russian Orthodox Church will study the concerns of believers related to the development of artificial intelligence (AI), particularly in the context of moral and ethical standards, and will present its findings to the authorities to seek solutions to these problems, as reported by the TASS news agency.

“The Church’s task now is to carefully study all the concerns, all the problems that the development of artificial intelligence poses for believers, particularly Christians, and to highlight this to the state. The state must consider how to address this,” said Metropolitan Kliment.

According to him, a commission on social life, culture, science, and information, created by the Holy Synod of the Russian Orthodox Church, is researching issues related to the ethical aspects of AI development. Metropolitan Kliment, who heads this commission, noted that while the Church cannot prohibit the development of artificial intelligence, it can “raise its voice” and influence this phenomenon in some way.

The hierarch of the Russian Orthodox Church mentioned that the issue of AI development has already been discussed by the Greek Orthodox Church at its council, and separate documents on this topic have been adopted by Catholics and Protestants.

In April 2023, the ROC called on the Russian government to forbid giving AI a human face. The Patriarchal Commission on Family, Motherhood, and Childhood Protection of the Russian Orthodox Church stated the need for a ban on anthropomorphizing programs (attributing human characteristics and qualities to inanimate objects), and for a legally mandated disclaimer for neural networks informing users that they are interacting with artificial intelligence. This position was presented by the chairman of the commission, Fr. Feodor Lukyanov, at a roundtable discussion on the legal regulation and application of neural network technologies at the National Research University Higher School of Economics (HSE), the Russian newspaper Vedomosti reported at the time. Attorney Pavel Katkov said that Fr. Feodor’s proposals are technically feasible to put into practice, but would diminish Russia’s advantage in the IT field.

The Russian Orthodox Church cannot directly introduce bills into the State Duma, but as an organization representing a significant portion of society, it proposes various initiatives to parliamentarians and the government, Vedomosti quoted Larisa Gelina, head of the external communications department of the Polilog consulting group, as saying. Additionally, Church lawyers can be involved in drafting specific bills and legislative acts.

Neural networks are currently one of the most contentious topics for legal regulation, and not only in Russia, Gelina continued. Fr. Feodor’s speech primarily shows that the Russian Orthodox Church is keeping up with modern trends and demonstrating to believers a readiness to respond promptly to new challenges. Public discussion, she said, is ongoing.
