Publication schedules

Articles are published every two hours starting at 9:00, so publication times are 9:00, 11:00, 13:00, 15:00 and 17:00. If a video release is scheduled for the day (not always), it goes out at 19:00; otherwise the 19:00 slot is filled by a normal article.

Contact

Publication-X is an independent publication; articles come from our partners, and we primarily translate texts and other publications into Finnish.

If needed, the easiest way to contact the editor is by email at [email protected] or via Telegram:

https://t.me/konradkurzex

Publication-X also has its own Telegram channel: https://t.me/publicationxcom


30.6.2024

Publication-X

"Perfecta tempestas medicandi"

Technocrats at OpenAI in "reckless" race for dominance


Quick link to this article: https://publication-x.com/en/f7dt

A group of OpenAI insiders is blowing the whistle on what it describes as a culture of recklessness and secrecy at the San Francisco AI company, which is racing to build the most powerful AI systems ever created.

The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its AI systems from becoming dangerous.

According to its members, OpenAI, which started as a non-profit research lab and rose to prominence with the 2022 release of ChatGPT, is putting profits and growth first as it tries to build artificial general intelligence, or AGI, the industry term for a computer program that can do anything a human can do.

They also allege that OpenAI has used hardball tactics to prevent employees from voicing their concerns about the technology, including restrictive non-disparagement agreements that departing employees were asked to sign.

"OpenAI is really excited about building the AGI, and they are competing relentlessly to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's Governance Division and one of the group's organisers.

On Tuesday, the group published an open letter calling on leading AI companies, including OpenAI, to increase transparency and strengthen protections for whistleblowers.

The other members are William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously for fear of reprisals from the company, Mr. Kokotajlo said. One current and one former employee of Google DeepMind, Google's central AI lab, also signed the letter.

OpenAI spokesperson Lindsey Held said in a statement: "We are proud of our track record of providing the most capable and safest AI systems, and we believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world."

A Google spokesperson declined to comment.

The campaign comes at a difficult time for OpenAI. The company is still recovering from last year's attempted coup, when members of its board voted to fire CEO Sam Altman over concerns about his candor. Mr. Altman was brought back days later, and new members were added to the board.

The company is also fighting legal battles with content creators who have accused it of stealing copyrighted works to train its models (The New York Times sued OpenAI and its partner Microsoft for copyright infringement last year). And its recent unveiling of a hyper-realistic voice assistant was marred by a public dispute with the Hollywood actress Scarlett Johansson, who claimed that OpenAI had imitated her voice without permission.

But nothing has stuck like the accusation that OpenAI has been too cavalier about safety.

Last month, two senior AI researchers, Ilya Sutskever and Jan Leike, left OpenAI under a cloud. Dr. Sutskever, who had served on OpenAI's board and voted to dismiss Mr. Altman, had warned of the potential risks of powerful AI systems. Some safety-minded employees saw his departure as a setback.

So was the departure of Dr. Leike, who together with Dr. Sutskever had led OpenAI's "superalignment" team, which focused on managing the risks of powerful AI models. In a series of public posts announcing his departure, Dr. Leike said he believed that "safety culture and processes have taken a back seat to shiny products".

Neither Dr. Sutskever nor Dr. Leike signed the open letter written by the former employees. But their departures prompted other former OpenAI employees to speak out.

"When I joined OpenAI, I didn't adopt this attitude of 'Let's bring things into the world and see what happens and fix them afterwards'," Mr Saunders said.

Some former employees have ties to effective altruism, a utilitarian-inspired movement that in recent years has become concerned with preventing existential threats from AI. Critics have accused the movement of promoting doomsday scenarios about the technology, such as the notion that an out-of-control AI system could take over and destroy humanity.

Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast the progress of artificial intelligence. He was not optimistic, to put it mildly.

In his previous job at an AI safety organisation, he predicted that AGI might arrive in 2050. But after seeing how quickly AI was improving, he shortened his timeline. He now believes there is a 50 percent chance that AGI will arrive by 2027, in just three years.

He also believes that the probability of advanced AI destroying or catastrophically harming humanity, a grim statistic often abbreviated to "p(doom)" in AI circles, is 70 percent.

At OpenAI, Mr. Kokotajlo saw that although the company had safety protocols in place, including a joint effort with Microsoft known as the "deployment safety board", which was supposed to review new models for major risks before they were released, they rarely seemed to slow anything down.

He said, for example, that in 2022 Microsoft quietly began testing in India a new version of its Bing search engine that some OpenAI employees believed contained an unreleased version of GPT-4, OpenAI's state-of-the-art large language model. Mr. Kokotajlo said he was told that Microsoft had not received the safety board's approval before testing the new model, and that after the board learned of the tests, via a series of reports that Bing was behaving strangely towards users, it did nothing to stop Microsoft from rolling it out more widely.

Microsoft spokesman Frank Shaw denied these allegations. He said that neither GPT-4 nor any other OpenAI model had been used in the Indian tests. The first time Microsoft released technology based on GPT-4 was in early 2023, he said, and it had been reviewed and approved by a predecessor of the safety board.

Eventually, Mr. Kokotajlo said, he became so worried that he told Mr. Altman last year that the company should "pivot to safety" and spend more time and resources guarding against the risks of AI rather than charging ahead to improve its models. He said Mr. Altman had claimed to agree with him, but that little changed.

In April, he resigned. In an email to his team, he said he was leaving because he had "lost confidence that OpenAI will behave responsibly" as its systems approach human-level intelligence.

"The world is not ready, and we are not ready," Mr Kokotajlo wrote. "And I am concerned that we are rushing ahead regardless and rationalizing our actions."

OpenAI said last week that it had begun training a new flagship AI model and that it had formed a new safety committee to examine the risks associated with the new model and other future technologies.

On his way out, Mr. Kokotajlo refused to sign OpenAI's standard paperwork for departing employees, which contained a strict non-disparagement clause barring them from saying negative things about the company at the risk of having their vested equity taken away.

Many employees stand to lose millions of dollars if they refuse to sign. Mr. Kokotajlo said his vested equity was worth about $1.7 million, the vast majority of his net worth, and that he was prepared to give it all up.

(A minor firestorm broke out last month after Vox reported on these agreements. In response, OpenAI claimed that it had never clawed back vested equity from former employees and would not do so. Mr. Altman said he was "genuinely embarrassed" not to have known about the agreements, and the company announced that it would remove the non-disparagement clauses from its standard paperwork and release former employees from their agreements.)

In their open letter, Mr. Kokotajlo and the other former OpenAI employees call on OpenAI and other AI companies to stop using non-disparagement and non-disclosure agreements.

"Extensive confidentiality agreements prevent us from expressing our concerns, except to those companies that may not be able to address these issues," they write.

They also call on AI companies to "support a culture of open criticism" and to create a reporting process that allows employees to raise safety concerns anonymously.

They have retained a pro bono lawyer, Lawrence Lessig, a prominent legal scholar and activist. Mr. Lessig also advised Frances Haugen, a former Facebook employee who  became a whistleblower  and accused the company of putting profits before safety.

In an interview, Mr. Lessig said that while traditional whistleblower protections typically apply to reports of illegal activity, it was important for employees of AI companies to be able to discuss risks and potential harms freely, given the importance of the technology.

"Workers are an important safety line, and if they cannot speak freely without retaliation, that channel will be closed," he said.

Ms. Held, the OpenAI spokesperson, said the company has "ways for employees to voice their concerns", including an anonymous integrity hotline.

Mr. Kokotajlo and his team are sceptical that self-regulation alone will be enough to prepare for a world with more powerful AI systems. So they call on lawmakers to regulate the industry as well.

"There has to be some kind of democratically accountable, transparent governance structure for this process," Mr. Kokotajlo said. "Instead of just a couple of different private companies competing with each other and keeping everything secret."

Archived copy: https://archive.ph/GrVGE
