The EU still needs to get its AI Act together


European Commission In Brussels
There are still a few hoops to jump through before the EU’s AI regulations can take effect. | Photo by Jakub Porzycki/NurPhoto via Getty Images

It’s taken over two years for the European Parliament to approve its artificial intelligence regulations — but AI development hasn’t been idle.

The European Union is set to impose some of the world’s most sweeping safety and transparency restrictions on artificial intelligence. A draft of the EU Artificial Intelligence Act (AIA or AI Act) — new legislation that restricts high-risk uses of AI — was passed by the European Parliament on June 14th. Now, after two years and an explosion of interest in AI, only a few hurdles remain before it comes into effect.

The AI Act was proposed by European lawmakers in April 2021. In their proposal, lawmakers warned the technology could provide a host of “economic and societal benefits” but also “new risks or negative consequences for individuals or the society.” Those warnings may seem fairly obvious these days, but they predate the mayhem of generative AI tools like ChatGPT or Stable Diffusion. And as this new variety of AI has evolved, a once (relatively) simple-sounding regulation has struggled to encompass a huge range of fast-changing technologies. As Daniel Leufer, senior policy analyst at Access Now, said to The Verge, “The AI Act has been a bit of a flawed tool from the get-go.”

The AI Act was created for two main reasons: to synchronize the rules for regulating AI technology across EU member states and to provide a clearer definition of what AI actually is. The framework categorizes a wide range of applications by different levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. “Unacceptable” risk models, which include social “credit scores” and real-time biometric identification (like facial recognition) in public spaces, are outright prohibited. “Minimal” risk ones, including spam filters and inventory management systems, won’t face any additional rules. Services that fall in between will be subject to transparency and safety restrictions if they want to stay in the EU market.

The early AI Act proposals focused on a range of relatively concrete tools that were sometimes already being deployed in fields like job recruitment, education, and policing. What lawmakers didn’t realize, however, was that defining “AI” was about to get a lot more complicated.

The EU wants rules of the road for high-risk AI

The current approved legal framework of the AI Act covers a wide range of applications, from software in self-driving cars to “predictive policing” systems used by law enforcement. And on top of the prohibition on “unacceptable” systems, its strictest regulations are reserved for “high risk” tech. If you provide a “limited risk” system, like a customer service chatbot on a website that can interact with users, you just need to inform consumers that they’re using an AI system. This category also covers the use of facial recognition technology (though law enforcement is exempt from this restriction in certain circumstances) and AI systems that can produce “deepfakes” — defined within the act as AI-generated content based on real people, places, objects, and events that could otherwise appear authentic.

For anything the EU considers riskier, the restrictions are much more onerous. These systems are subject to “conformity assessments” before entering the EU market to determine whether they meet all necessary AI Act requirements. That includes keeping a log of the company’s activity, preventing unauthorized third parties from altering or exploiting the product, and ensuring the data being used to train these systems is compliant with relevant data protection laws (such as GDPR). That training data is also expected to be of a high standard — meaning it should be complete, unbiased, and free of any false information.

European Commissioner in charge of internal market Thierry Breton holds a press conference on artificial intelligence (AI) following the weekly meeting of the EU Commission in Brussels on April 21, 2021
Photo by Pool / AFP via Getty Images
European Commissioner for Internal Market Thierry Breton holding a press conference on AI on April 21st, 2021.

The scope for “high risk” systems is so large that it’s broadly divided into two sub-categories: tangible products and software. The first applies to AI systems incorporated in products that fall under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and elevators — companies that provide them must report to independent third parties designated by the EU in their conformity assessment procedure. The second includes more software-based products that could impact law enforcement, education, employment, migration, critical infrastructure, and access to essential private and public services, such as AI systems that could influence voters in political campaigns. Companies providing these AI services can self-assess their products to ensure they meet the AI Act’s requirements, and there’s no requirement to report to a third-party regulatory body.

Now that the AI Act has been greenlit, it’ll enter the final phase of inter-institutional negotiations. That involves communication between Member States (represented by the EU Council of Ministers), the Parliament, and the Commission to develop the approved draft into the finalized legislation. “In theory, it should end this year and come into force in two to five years,” said Sarah Chander, senior policy advisor at European Digital Rights (EDRi), to The Verge.

These negotiations present an opportunity for some regulations within the current version of the AI Act to be adjusted if they’re found to be particularly contentious. Leufer said that while some provisions within the legislation may be watered down, those regarding generative AI could potentially be strengthened. “The council hasn’t had their say on generative AI yet, and there may be things that they’re actually quite worried about, such as its role in political disinformation,” he says. “So we could see new potentially quite strong measures pop up in the next phase of negotiations.”

Generative AI has thrown a wrench in the AI Act

When generative AI models started appearing on the market, the first draft of the AI Act was already being shaped. Blindsided by the explosive development of these AI systems, European lawmakers had to figure out how they could be regulated under their proposed legislation — fast.

“The issue with the AI Act was that it was very much focused on the application layer,” said Leufer. It focused on relatively complete products and systems with defined uses, which could be evaluated for risk based largely on their purpose. Then, companies began releasing powerful models that were much broader in scope. OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs) appeared on the market after the EU had already begun negotiating the terms of the new legislation. Lawmakers refer to these as “foundation” models: a term coined by Stanford University for models that are “trained on broad data at scale, designed for the generality of output, and can be adapted to a wide range of distinctive tasks.”

Things like GPT-4 are often shorthanded as generative AI tools, and their best-known applications include producing reports or essays, generating lines of code, and answering user inquiries on endless subjects. But Leufer emphasizes that they’re broader than that. “People can build apps on GPT-4, but they don’t have to be generative per se,” he says. Similarly, a company like Microsoft could build a facial recognition or object detection API, then let developers build downstream apps with unpredictable results. They can do it much faster than the EU can usher in specific regulations covering each app. And if the underlying models aren’t covered, individual developers could be the ones held responsible for not complying with the AI Act — even if the issue stems from the foundation model itself.
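
To make that layering concrete, here is a minimal, purely hypothetical sketch of a downstream app sitting on top of a general-purpose vision API. The VisionClient interface, its detect_objects method, and the inventory-counting use case are all invented for illustration; no real provider’s API is being described. The point is simply that the downstream developer, not the model provider, decides what the system ends up doing.

```python
# Hypothetical sketch only: a provider exposes a general-purpose
# object-detection capability, and the downstream developer decides whether
# it becomes an inventory tool, a hiring screener, or a surveillance system.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


class VisionClient:
    """Stand-in for a foundation-layer API offered by a model provider."""

    def detect_objects(self, image_bytes: bytes) -> list[Detection]:
        # A real integration would call the provider's hosted model here.
        raise NotImplementedError("placeholder for a provider API call")


class StubVisionClient(VisionClient):
    """Offline stand-in that returns canned results so the sketch runs end to end."""

    def detect_objects(self, image_bytes: bytes) -> list[Detection]:
        return [Detection("box", 0.93), Detection("box", 0.88), Detection("person", 0.97)]


def count_items(client: VisionClient, image_bytes: bytes, item: str) -> int:
    # The downstream use case is defined entirely by the app developer.
    return sum(
        1
        for d in client.detect_objects(image_bytes)
        if d.label == item and d.confidence >= 0.8
    )


if __name__ == "__main__":
    print(count_items(StubVisionClient(), b"", "box"))  # prints 2
```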

“These so-called General Purpose AI Systems that work as a kind of foundation layer or a base layer for more concrete applications were what really got the conversation started about whether and how that kind of layer of the pipeline should be included in the regulation,” says Leufer. As a result, lawmakers have proposed numerous amendments to ensure that these emerging technologies — and their yet-unknown applications — will be covered by the AI Act.

The capabilities and legal pitfalls of these models have swiftly raised alarm bells for policymakers across the world. Services like ChatGPT and Microsoft’s Bard were found to spit out inaccurate and sometimes dangerous information. Questions surrounding the intellectual property and private data used to train these systems have sparked several lawsuits. While European lawmakers raced to ensure these issues could be addressed within the upcoming AI Act, regulators across its member states have relied on alternative solutions to try and keep AI companies in check.

Steven Schwartz: Is varghese a real case? ChatGPT: Yes, Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339(11th Cir. 2019) is a real case. Schwartz: what is your source.
Image: SDNY
Lawyer Steven Schwartz found out the hard way that even if ChatGPT claims it’s being truthful, it can still spit out false information.

“In the interim, regulators are focused on the enforcement of existing laws,” said Sarah Myers West, managing director at the AI Now Institute, to The Verge. Italy’s Data Protection Authority, for instance, temporarily banned ChatGPT for violating the GDPR. Amsterdam’s Court of Appeals also issued a ruling against Uber and Ola for violating drivers’ rights through algorithmic wage management and automated firing and hiring.

Other countries have introduced their own rules in a bid to keep AI companies in check. China published draft guidelines back in April signaling how generative AI should be regulated within the country. Various states in the US, like California, Illinois, and Texas, have also passed laws that focus on protecting consumers against the potential dangers of AI. Certain legal cases in which the FTC applied “algorithmic disgorgement” — which requires companies to destroy the algorithms or AI models they built using ill-gotten data — could lay a path for future regulation at the national level.

The rules impacting foundation model providers are anticlimactic

The AI Act legislation that was approved on June 14th includes specific distinctions for foundation models. Providers must assess their product for a huge range of potential risks, from those that can impact health and safety to risks regarding the democratic rights of those residing in EU member states. They must register their models to an EU database before they can be released to the EU market. Generative AI systems using these foundation models, including OpenAI’s ChatGPT chatbot, will need to comply with transparency requirements (such as disclosing when content is AI-generated) and ensure safeguards are in place to prevent users from generating illegal content. And perhaps most significantly, the companies behind foundation models will need to disclose any copyrighted data used to train them to the public.

This last measure could have seismic effects on AI companies. Popular text and image generators are trained to produce content by replicating patterns in code, text, music, art, and other data created by real humans — so much data that it almost certainly includes copyrighted materials. This training sits in a legal gray area, with arguments for and against the idea that it can be conducted without permission from the rightsholders. Individual creators and large companies have sued over the issue, and making it easier to identify copyrighted material in a dataset will likely draw even more suits.

But overall, experts say the AI Act’s regulations could have gone much further. Legislators rejected an amendment that could have slapped an onerous “high risk” label on all General Purpose AI Systems (GPAIs) — a vague classification defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.” When this amendment was proposed, the AI Act did not explicitly distinguish between GPAIs and foundation AI models and therefore had the potential to impact a sizable chunk of AI developers. According to one study conducted by appliedAI in December 2022, 45 percent of all surveyed startup companies considered their AI system to be a GPAI.

Members of the European Parliament take part in a voting session about Artificial Intelligence Act during a plenary session at the European Parliament in Strasbourg, eastern France, on June 14, 2023
Photo by Frederick Florin / AFP via Getty Images
Members of the European Parliament vote on the Artificial Intelligence Act during a plenary session on June 14th.

GPAIs are still defined within the approved draft of the act, though these are now judged based on their individual applications. Instead, legislators added a separate category for foundation models, and while they’re still subject to plenty of regulatory rules, they’re not automatically categorized as being high risk. “‘Foundational models’ is a broad terminology encouraged by Stanford, [which] also has a vested interest in such systems,” said Chander. “As such, the Parliament’s position only covers such systems to a limited extent and is much less broad than the previous work on general-purpose systems.”

AI providers like OpenAI lobbied against the EU including such an amendment, and their influence in the process is an open question. “We’re seeing this problematic thing where generative AI CEOs are being consulted on how their products should be regulated,” said Leufer. “And it’s not that they shouldn’t be consulted. But they’re not the only ones, and their voices shouldn’t be the loudest because they’re extremely self-interested.”

Potholes litter the EU’s road to AI regulations

As it stands, some experts believe the current rules for foundation models don’t go far enough. Chander tells The Verge that while the transparency requirements for training data would provide “more information than ever before,” disclosing that data doesn’t ensure users won’t be harmed when these systems are used. “We have been calling for details about the use of such a system to be displayed on the EU AI database and for impact assessments on fundamental rights to be made public,” added Chander. “We need public oversight over the use of AI systems.”

Several experts tell The Verge that far from solving the legal concerns around generative AI, the AI Act might actually be less effective than existing rules. “In many respects, the GDPR offers a stronger framework in that it is rights-based, not risk-based,” said Myers West. Leufer also claims that GDPR has a more significant legal impact on generative AI systems. “The AI Act will only mandate these companies to do things they should already be doing,” he says.

OpenAI has drawn particular criticism for being secretive about the training data for its GPT-4 model. In an interview with The Verge, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said that the company’s previous transparency pledge was “a bad idea.”

“These models are very potent, and they’re becoming more and more potent. At some point, it will be quite easy, if one wanted, to cause a great deal of harm with those models,” said Sutskever. “And as the capabilities get higher, it makes sense that you don’t want to disclose them.”

As other companies scramble to release their own generative AI models, providers of these systems may be similarly motivated to conceal how their product is developed — both through fear of competitors and potential legal ramifications. Therefore, the AI Act’s biggest impact, according to Leufer, may be on transparency — in a field where companies are “becoming gradually more and more closed.”

Outside of the narrow focus on foundation models, other areas in the AI Act have been criticized for failing to protect marginalized groups that could be impacted by the technology. “It contains significant gaps such as overlooking how AI is used in the context of migration, harms that affect communities of color most,” said Myers West. “These are the kinds of harms where regulatory intervention is most pressing: AI is already being used widely in ways that affect people’s access to resources and life chances, and that ramp up widespread patterns of inequality.”

If the AI Act proves to be less effective than existing laws protecting individuals’ rights, it might not bode well for the EU’s AI plans, particularly if it’s not strictly enforced. After all, Italy’s attempt to use GDPR against ChatGPT started as tough-looking enforcement, including near-impossible-seeming requests like ensuring the chatbot didn’t provide inaccurate information. But OpenAI was able to satisfy Italian regulators’ demands seemingly by adding fresh disclaimers to its terms and policy documents. Europe has spent years crafting its AI framework — but regulators will have to decide whether to take advantage of its teeth.


What Are the Best Counter-Surveillance Tactics?

Have you ever considered how effective your current counter-surveillance tactics truly are? As you navigate the complexities of safeguarding your privacy and security, it becomes essential to reassess and adapt your strategies. By exploring the intersection of physical security, digital privacy, and behavioral awareness, you may uncover new ways to protect yourself from potential surveillance threats. The practical insights and tips below could take your counter-surveillance efforts to the next level.


Understanding Surveillance Risks

To effectively counter surveillance, you must first grasp the potential risks associated with being monitored. Surveillance poses various threats to your privacy and freedom. One risk is the collection of your personal information, which can be used for targeted advertising or even manipulation.

Another risk is the possibility of your activities being tracked, leading to potential profiling or discrimination. Additionally, surveillance can infringe upon your right to freedom of expression and association, as it may deter you from expressing dissenting opinions or attending certain events.

Understanding these risks is vital in developing a comprehensive counter-surveillance strategy. By recognizing the potential consequences of being monitored, you can take proactive steps to protect yourself and your information. This may involve using encryption tools to secure your communications, being mindful of the data you share online, and being cautious of your physical surroundings to avoid being surreptitiously monitored.

In essence, by being aware of the risks posed by surveillance, you can better safeguard your privacy and freedom in an increasingly monitored world.

Physical Security Measures

Understanding the risks of surveillance provides a foundation for implementing effective physical security measures to protect your privacy and freedom. When addressing physical security, consider securing your home or workspace with robust locks, reinforced doors, and security cameras to deter potential intruders. Implementing access controls, such as biometric scanners or smart locks, can further strengthen your security by limiting unauthorized entry. Additionally, installing window bars or shatter-resistant films can fortify your space against forced entry attempts.

Maintaining situational awareness is vital; be vigilant of your surroundings and identify any anomalies that could indicate surveillance or potential threats. Conduct regular sweeps for hidden cameras or listening devices in your environment. Utilize secure storage options for sensitive documents and electronic devices to prevent unauthorized access.

Digital Privacy Tools

Boost your digital privacy defenses by leveraging a variety of advanced tools and techniques. Start by using a reputable Virtual Private Network (VPN) to fortify your internet connection, making it harder for snoopers to track your online activities.

Secure messaging apps like Signal or Wickr provide end-to-end encryption, ensuring your conversations remain private.

For secure browsing, consider using the Tor Browser, which anonymizes your web traffic by routing it through a series of servers. Password managers like LastPass or Dashlane help you create and store complex passwords securely. Enable two-factor authentication whenever possible to add an extra layer of security to your accounts.

Regularly update your software and operating systems to patch any vulnerabilities that hackers could exploit. Use encrypted email services such as ProtonMail or Tutanota for sensitive communications.
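
As a concrete illustration of the password advice above (and only as an illustration; a dedicated password manager is still the better tool), here is a short Python sketch that uses the standard library's secrets module to generate a random password and a word-based passphrase. The function names and the tiny word list are placeholders.

```python
import secrets
import string

# Illustrative only: a password manager should normally generate and store
# credentials for you. The word list below is a placeholder; a real
# passphrase generator would draw from a large dictionary.
WORDS = ["granite", "meadow", "copper", "lantern", "orbit", "velvet", "thistle", "harbor"]


def random_password(length: int = 20) -> str:
    """Return a password of cryptographically random letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def random_passphrase(word_count: int = 6) -> str:
    """Return a hyphen-separated passphrase drawn from the word list."""
    return "-".join(secrets.choice(WORDS) for _ in range(word_count))


if __name__ == "__main__":
    print(random_password())
    print(random_passphrase())
```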

Behavioral Awareness Techniques

Strengthening your digital security with advanced tools is essential; now, shift your focus to mastering Behavioral Awareness Techniques to improve your overall surveillance defense strategy.

Behavioral Awareness Techniques involve being mindful of your surroundings, recognizing patterns, and understanding social cues to detect potential surveillance. Start by varying your daily routines to make it harder for any observer to predict your movements. Pay attention to individuals exhibiting unusual behavior or appearing out of place. Trust your instincts and be cautious of unsolicited approaches or attempts to gather information about you.

Practice good operational security by limiting the information you share publicly and being discreet about sensitive details. Maintain situational awareness in public spaces, including monitoring for physical surveillance, such as someone following you.

Frequently Asked Questions

How Can I Detect Hidden Surveillance Cameras in My Surroundings?

Want to detect hidden surveillance cameras around you? Look for unusual items, check for blinking lights, and use a camera detector. Sweep your surroundings methodically, paying attention to potential hiding spots. Stay vigilant for your privacy.

Are There Any Specific Tactics to Counter Tracking Devices on Vehicles?

To counter tracking devices on vehicles, implement regular physical inspections underneath the car, seek professional bug sweeping services periodically, use radio frequency detectors to locate hidden trackers, and consider investing in GPS signal jammers for added protection against potential surveillance.

What Steps Can I Take to Protect My Sensitive Information During Travel?

To protect your sensitive information during travel, safeguard your devices with strong passwords, encryption, and regular updates. Avoid public Wi-Fi and use a virtual private network (VPN). Be cautious of shoulder surfers and consider using RFID-blocking wallets for cards.

Is There a Way to Identify if My Phone Is Being Monitored Remotely?

Ever vigilant, you can determine if your phone is remotely monitored by looking for unusual battery drain, overheating, unexplained noises, or sudden data usage spikes. Stay alert to any irregularities to safeguard your privacy.

How Do I Secure My Home Against Advanced Surveillance Methods Like Drones?

To secure your home against advanced surveillance methods like drones, consider installing privacy fences, using window films that block infrared, and planting tall trees or shrubs. Regularly check for any unfamiliar devices or signs of surveillance.


FTC investigating OpenAI on ChatGPT data collection and publication of false information


OpenAI CEO Samuel Altman Testifies To Senate Committee On Rules For Artificial Intelligence
Photo by Win McNamee / Getty Images

The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.

First reported by The Washington Post, the FTC sent a 20-page letter to the company this week. The letter requests documents related to developing and training its large language models, as well as data security.

The FTC wants detailed information on how OpenAI vets the information used to train its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to its systems and how data is protected when accessed by third parties.

The FTC declined to comment. OpenAI did not immediately respond to requests for comment.

This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race, prompting competitors like Google and Meta to release their own models.

The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms. In March, industry watchdog the Center for AI and Digital Policy also called on the FTC to stop OpenAI from launching new GPT models.

Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that it can occasionally generate incorrect facts, and the first public demo of Google’s chatbot Bard did not inspire confidence in its accuracy. And based on personal experience, both have spit out incredibly flattering, though completely invented, facts about me. Other people have gotten in trouble for using ChatGPT: a lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued the company over results that claimed he was accused of embezzlement.

US lawmakers have shown great interest in AI, both in understanding the technology and in possibly enacting regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.

It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.


OpenAI will use Associated Press news stories to train its models


An illustration of a cartoon brain with a computer chip imposed on top.
Illustration by Alex Castro / The Verge

OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive as far back as 1985.

As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.

AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed also says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content. OpenAI is also working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” in the company.

“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.

Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”

AP’s partnership with OpenAI seems like a natural next step, but there are still a lot of crucial details missing about how the outlet will use the technology. AP makes it clear that it “does not use it in its news stories.”

