OpenAI chief executive Sam Altman. image source: GETTY IMAGES
The boss of the firm behind ChatGPT has said there are no plans to leave Europe.
Sam Altman, chief executive of OpenAI, has walked back his threat to leave the EU if complying with its upcoming artificial intelligence (AI) rules proves too difficult.
Earlier this week, Altman expressed dissatisfaction with the European Union’s (EU) proposed legislation on artificial intelligence, calling it overly restrictive.
However, in response to the extensive press coverage of his remarks, Altman has now backtracked on the threat and said he is eager to keep operating inside the EU.
“We are excited to continue to operate here and, of course, have no plans to leave,” Altman wrote in a tweet.
Rather than signaling withdrawal, the reversal demonstrates OpenAI’s commitment to cooperating with the EU and suggests a willingness to engage with the forthcoming AI rules.
The legislation under consideration includes a clause that could require companies using generative AI to disclose the copyrighted material used to train systems that generate text and images.
The provision is intended to address concerns from the creative industries, where performers, musicians, and artists argue that AI companies use their work without consent to train systems to mimic them.
But according to Time, Mr. Altman has raised concerns that some of the safety and transparency requirements set out in the AI Act would be technically difficult for OpenAI to meet.
Those concerns highlight the difficulties OpenAI and similar companies may face in complying with the law, in terms of both technical feasibility and transparency standards.
A protester outside UCL, the venue for Sam Altman’s speech. image source: FUTURE PUBLISHING/GETTY IMAGES
At an event on Wednesday at University College London, Mr. Altman voiced his optimism that AI could create more employment opportunities and reduce inequality.
He also met with Rishi Sunak, the prime minister, and executives from DeepMind and Anthropic, two leading AI companies.
The discussions focused largely on how to manage the risks posed by the new technology, ranging from concerns about disinformation and national security to potential “existential threats”.
Participants discussed the need for both voluntary initiatives and regulatory measures to control these risks properly.
Mr. Sunak maintains a different perspective from some academics who worry that super-intelligent AI systems could endanger humans.
He asserts that AI has the capacity to improve both the wellbeing of the British populace and the course of human history.
According to him, new opportunities across a range of industries can lead to better public services and better outcomes for the public.
The prime minister and AI executives gathered at No. 10. image source: NO10 DOWNING STREET
During the G7 summit held in Hiroshima, the leaders of the United States, United Kingdom, Germany, France, Italy, Japan, and Canada reached an agreement that the development of “trustworthy” AI should be a collaborative effort on an international scale.
Recognizing the importance of global cooperation, they emphasized the need for joint endeavors in ensuring the responsible and ethical deployment of AI.
In a parallel effort, the European Commission aims to establish an AI pact with Alphabet, the parent company of Google, prior to the implementation of any legislation within the European Union.
Thierry Breton, the EU industry chief, held discussions with Sundar Pichai, the CEO of Google, in Brussels, emphasizing the significance of international collaboration in effectively regulating AI.
The convergence of these initiatives underscores the belief among stakeholders that international cooperation is vital in addressing the challenges and opportunities presented by AI and establishing frameworks that prioritize trust, responsibility, and ethical considerations.
Mr. Breton expressed his agreement with Sundar Pichai, stating that waiting for AI regulation to become enforceable is not a viable option.
Instead, they emphasized the importance of collaborating with all AI developers to proactively establish an AI pact on a voluntary basis, even before the legal deadline.
Tim O’Reilly, a seasoned professional from Silicon Valley, author, and founder of O’Reilly Media, believes that a crucial initial step towards responsible AI development would involve implementing transparency measures.
He suggests mandating transparency to ensure that AI systems and their processes are open and understandable.
Additionally, O’Reilly advocates for the creation of regulatory institutions that can enforce accountability in the AI industry.
By having robust regulatory frameworks in place, the aim is to address concerns related to ethics, privacy, and potential biases associated with AI technologies.
Tim O’Reilly argues that excessive alarmism and panic around AI, combined with the complexity of regulating it, could lead to analysis paralysis.
He recommends that businesses building advanced AI work together to create a comprehensive set of key performance indicators (KPIs).
These metrics would provide the foundation for regular, consistent reporting to regulators and the public.
O’Reilly also stresses the importance of establishing a process for updating these metrics as new best practices emerge in the field.
This approach aims to strike a balance between addressing concerns and ensuring continued accountability and improvement in the AI sector.