European companies claim the EU’s AI Act could ‘jeopardise technological sovereignty’
Some of the biggest companies in Europe have taken collective action to criticize the European Union’s recently approved artificial intelligence regulations, claiming that the Artificial Intelligence Act is ineffective and could negatively impact competition. In an open letter sent to the European Parliament, Commission, and member states on Friday, and first seen by the Financial Times, over 150 executives from companies like Renault, Heineken, Airbus, and Siemens slammed the AI Act for its potential to “jeopardise Europe’s competitiveness and technological sovereignty.”
On June 14th, the European Parliament greenlit a draft of the AI Act following two years spent developing its rules and expanding them to encompass recent AI breakthroughs like large language models (LLMs) and foundation models, such as OpenAI’s GPT-4. Several phases remain before the new law can take effect, with inter-institutional negotiations expected to conclude later this year.
With more than 150 european researchers, entrepreneurs, CEO… I am today signing an open letter to warn against the risks for our continent of the latest developments of the EU AI Act draft proposal https://t.co/FBZAliKkAs
— Cédric O (@cedric_o) June 30, 2023
The signatories of the open letter claim that the AI Act in its current state may suppress the opportunity AI technology provides for Europe to “rejoin the technological avant-garde.” They argue that the approved rules are too extreme, and risk undermining the bloc’s technological ambitions instead of providing a suitable environment for AI innovation.
One of the major concerns flagged by the companies involves the legislation’s strict rules specifically targeting generative AI systems, a subset of AI models that typically fall under the “foundation model” designation. Under the AI Act, providers of foundation AI models — regardless of their intended application — will have to register their product with the EU, undergo risk assessments, and meet transparency requirements, such as having to publicly disclose any copyrighted data used to train their models.
The open letter claims that the companies developing these foundation AI systems would be subject to disproportionate compliance costs and liability risks, which may encourage AI providers to withdraw from the European market entirely. “Europe cannot afford to stay on the sidelines,” the letter said, encouraging EU lawmakers to drop the rigid compliance obligations for generative AI models and instead focus on rules that can accommodate “broad principles in a risk-based approach.”
“We have come to the conclusion that the EU AI Act, in its current form, has catastrophic implications for European competitiveness,” said Jeannette zu Fürstenberg, founding partner of La Famiglia VC, and one of the signatories on the letter. “There is a strong spirit of innovation that is being unlocked in Europe right now, with key European talent leaving US companies to develop technology in Europe. Regulation that unfairly burdens young, innovative companies puts this spirit of innovation in jeopardy.”
The companies also called for the EU to form a regulatory body of AI industry experts to monitor how the AI Act is applied as the technology continues to develop.
“It is a pity that the aggressive lobby of a few are capturing other serious companies,” said Dragoș Tudorache, a Member of the European Parliament who led the development of the AI Act, in response to the letter. Tudorache claims that the companies who have signed the letter are reacting “on the stimulus of a few,” and that the draft EU legislation provides “an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else.”
OpenAI, the company behind ChatGPT and Dall-E, lobbied the EU to change an earlier draft of the AI Act in 2022, requesting that lawmakers scrap a proposed amendment that would have subjected all providers of general-purpose AI systems — a vague, expansive category of AI that LLMs and foundation models can fall under — to the AI Act’s toughest restrictions. The amendment was ultimately never incorporated into the approved legislation.
OpenAI’s CEO Sam Altman, who himself signed an open letter warning of the potential dangers that future AI systems could pose, previously warned that the company could pull out of the European market if it was unable to comply with EU regulations. Altman later backtracked and said that OpenAI has “no plans to leave.”
FTC investigating OpenAI on ChatGPT data collection and publication of false information
The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.
As first reported by The Washington Post, the FTC sent a 20-page letter to the company this week requesting documents related to how it develops and trains its large language models, as well as its data security practices.
The FTC wants detailed information on how OpenAI vets the information used to train its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to OpenAI’s systems and how data is protected when accessed by third parties.
The FTC declined to comment. OpenAI did not immediately respond to requests for comment.
This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race, prompting competitors like Google and Meta to release their own models.
The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms, and in March, industry watchdog the Center for AI and Digital Policy called on the FTC to stop OpenAI from launching new GPT models.
Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that it can occasionally generate incorrect facts, and the first public demo of Google’s chatbot Bard did not inspire confidence in its accuracy. And based on personal experience, both have spit out incredibly flattering, though completely invented, facts about me. Others have gotten into trouble over ChatGPT’s fabrications: a lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued the company over results that falsely claimed he had been accused of embezzlement.
US lawmakers have shown great interest in AI, both in understanding the technology and in potentially enacting regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.
It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.
OpenAI will use Associated Press news stories to train its models
OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive, dating as far back as 1985.
As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.
AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content, and OpenAI is working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” in the company.
Announcing partnership with @AP — we’ll help them thoughtfully explore use-cases for our technology, we’ll work with their content in our systems: https://t.co/3lAqzfCF5P
— Greg Brockman (@gdb) July 13, 2023
“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.
Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”
AP’s partnership with OpenAI seems like a natural next step, but a lot of crucial details are still missing about how the outlet will use the technology. AP makes clear that it “does not use it in its news stories,” referring to generative AI.
Congress is trying to stop discriminatory algorithms again
US policymakers hope to require online platforms to disclose information about their algorithms and to allow the government to intervene if those algorithms are found to discriminate based on criteria like race or gender.
Sen. Edward Markey (D-MA) and Rep. Doris Matsui (D-CA) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which aims to ban the use of discriminatory or “harmful” automated decision-making. It would also establish safety standards, require platforms to provide a plain-language explanation of the algorithms they use and to publish annual reports on their content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.
The bill applies to “online platforms,” meaning any commercial, public-facing website or app that “provides a community forum for user-generated content.” This can include social media sites, content aggregation services, or media and file-sharing sites.
Markey and Matsui introduced a previous version of the bill in 2021. It moved to the Subcommittee on Consumer Protection and Commerce but died in committee.
Data-based decision-making, including social media recommendation algorithms or machine learning systems, often lives in proverbial black boxes. This opacity sometimes exists because of intellectual property concerns or a system’s complexity.
But lawmakers and regulators worry this opacity could obscure biased decision-making with a huge impact on people’s lives, well beyond the online platforms the bill covers. Insurance companies, including those working with Medicaid patients, already use algorithms to grant or deny patient coverage, and agencies such as the FTC signaled in 2021 that they might pursue legal action against biased algorithms.
Calls for more transparent algorithms have grown over the years. After several scandals in 2018, including the Cambridge Analytica debacle, AI research group AI Now found that governments and companies lack ways to punish organizations that produce discriminatory systems. In a rare move, Facebook and Instagram announced the formation of a group to study potential racial bias in their algorithms.
“Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck,” Markey said in a statement.
Most proposed regulations around AI and algorithms include a push for greater transparency. The European Union’s proposed AI Act, now in its final stages of negotiation, also notes the importance of transparency and accountability.