
 OpenAI workers warn that AI could cause ‘human extinction’

A group of current and former employees at top Silicon Valley firms developing artificial intelligence warned in an open letter that without additional safeguards, AI could pose a threat of “human extinction.”


The letter, signed by 13 mostly former employees of firms like OpenAI, Anthropic, and Google’s DeepMind, argues top AI researchers need more protections to air criticisms of new developments and seek input from the public and policymakers over the direction of AI innovation.

“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the Tuesday letter reads. “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

The letter argues that the companies developing powerful AI technologies, including artificial general intelligence (AGI), a theorised AI system that’s as smart or smarter than human intelligence, “have strong financial incentives to avoid effective oversight,” from both their own employees and the public at large.

Neel Nanda of DeepMind is the only AI researcher currently affiliated with one of the companies who signed the letter.

“This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X. “But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”

The message calls for companies to refrain from punishing or silencing current or former employees who speak out about the risks of AI, a likely reference to a scandal this month at OpenAI, where departing employees were told to choose between losing vested equity or signing a non-disparagement agreement about the company that never expired. (OpenAI later lifted the requirement, saying, “It doesn’t reflect our values or the company we want to be.”)

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI spokesperson told The Independent. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”


The company added that it takes a number of steps to ensure its employees are heard and its products are developed responsibly, including an anonymous hotline for workers and a Safety and Security Committee scrutinizing the company’s developments. OpenAI also pointed to its support for increased AI regulation and voluntary commitments around AI safety.

The open letter follows a season of controversies for top AI firms, especially OpenAI, at the same time as they introduce AI assistants with powerful new capabilities, such as the ability to hold live voice-to-voice conversations with humans and react to visual information like a video feed or a written math problem.

Actress Scarlett Johansson, who once voiced an AI assistant in the film Her, accused OpenAI of using her voice as a model for one of its products, despite her having explicitly declined such a proposal. Though OpenAI CEO Sam Altman tweeted the word “her” at the launch of the voice assistant, the company has since denied using Johansson’s voice as a model.

Also in May, OpenAI disbanded a team dedicated to researching the long-term risks of AI, less than a year after it was formed.

Numerous top researchers at the firm have left in recent months, including co-founder Ilya Sutskever.

It’s the latest shake-up since the company suffered a high-profile board battle last year, in which Altman was temporarily removed and then reinstated less than a week later.

The Independent has contacted Anthropic and Google for comment.

