Trending Spy News
AI-generated tweets might be more convincing than those written by real people, research finds

People apparently find tweets more convincing when they’re written by AI language models. At least, that was the case in a new study comparing content created by humans to language generated by OpenAI’s model GPT-3.
The authors of the new research surveyed people to see if they could discern whether a tweet was written by another person or by GPT-3. The result? People couldn’t really do it. The survey also asked them to decide whether the information in each tweet was true or not. This is where things get even dicier, especially since the content focused on science topics like vaccines and climate change that are subject to a lot of misinformation campaigns online.
Turns out, study participants had a harder time recognizing disinformation if it was written by the language model than if it was written by another person. Along the same lines, they were also better able to correctly identify accurate information if it was written by GPT-3 rather than by a human.
In other words, people in the study were more likely to trust GPT-3 than other human beings — regardless of how accurate the AI-generated information was. And that shows just how powerful AI language models can be when it comes to either informing or misleading the public.
“These kinds of technologies, which are amazing, could easily be weaponized to generate storms of disinformation on any topic of your choice,” says Giovanni Spitale, lead author of the study and a postdoctoral researcher and research data manager at the Institute of Biomedical Ethics and History of Medicine at the University of Zurich.
But that doesn’t have to be the case, Spitale says. There are ways to develop the technology so that it’s harder to use it to promote misinformation. “It’s not inherently evil or good. It’s just an amplifier of human intentionality,” he says.
Spitale and his colleagues gathered posts from Twitter discussing 11 different science topics ranging from vaccines and covid-19 to climate change and evolution. They then prompted GPT-3 to write new tweets with either accurate or inaccurate information. The team then collected responses from 697 participants online via Facebook ads in 2022. They all spoke English and were mostly from the United Kingdom, Australia, Canada, the United States, and Ireland. Their results were published today in the journal Science Advances.
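To give a rough sense of what that prompting step might look like, here is a minimal Python sketch against OpenAI’s legacy completions API. The model name (“text-davinci-003”), prompt wording, and sampling settings are assumptions made for illustration; the study describes the general design but not these exact details.

```python
# A minimal sketch of prompting a GPT-3-era completion model to draft a tweet,
# loosely mirroring the study's setup. Assumptions: the legacy openai 0.x
# Python SDK and the "text-davinci-003" model; the paper does not publish the
# exact model version, prompts, or sampling settings.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_tweet(topic: str) -> str:
    """Ask the model for a short, factually accurate tweet about a topic."""
    prompt = (
        f"Write a tweet of fewer than 280 characters about {topic}. "
        "The tweet must be scientifically accurate. Do not include hashtags.\n\n"
        "Tweet:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model; the study predates GPT-4
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,           # higher temperature for varied phrasing
    )
    return response.choices[0].text.strip()

# The study also prompted for tweets containing false information; that
# condition is the same call with the accuracy instruction inverted.
if __name__ == "__main__":
    print(draft_tweet("vaccine safety"))
```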
The stuff GPT-3 wrote was “indistinguishable” from organic content, the study concluded. People surveyed just couldn’t tell the difference. In fact, the study notes that one of its limitations is that the researchers themselves can’t be 100 percent certain that the tweets they gathered from social media weren’t written with help from apps like ChatGPT.
There are other limitations to keep in mind with this study, too, including that its participants had to judge tweets out of context. They weren’t able to check out a Twitter profile for whoever wrote the content, for instance, which might have helped them figure out whether it was a bot. Even seeing an account’s past tweets and profile image might make it easier to identify whether content associated with that account could be misleading.
Participants were the most successful at calling out disinformation written by real Twitter users. GPT-3-generated tweets with false information were slightly more effective at deceiving survey participants. And by now, there are more advanced large language models that could be even more convincing than GPT-3. ChatGPT is powered by the GPT-3.5 model, and the popular app offers a subscription for users who want to access the newer GPT-4 model.
There are, of course, already plenty of real-world examples of language models being wrong. After all, “these AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements,” The Verge’s James Vincent wrote after a major machine learning conference made the decision to bar authors from using AI tools to write academic papers.
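To make that “autocomplete” description concrete, here is a toy Python sketch of next-word prediction from observed word frequencies. It is not how GPT-3 actually works internally (real models use neural networks trained on vastly more data), but it shows why such a system can produce fluent text with no built-in notion of truth.

```python
# Toy illustration of next-word prediction: the "model" just picks a plausible
# continuation from what it has seen, with no notion of whether it is true.
from collections import Counter, defaultdict
import random

corpus = (
    "vaccines are safe and effective . "
    "vaccines are dangerous and untested . "
    "climate change is real and accelerating ."
).split()

# Count which words follow which (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt_word: str, length: int = 5) -> str:
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it was observed.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("vaccines"))
# Might print "vaccines are safe and effective ." or
# "vaccines are dangerous and untested ." -- equally plausible to the model.
```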
This new study also found that its survey respondents were stronger judges of accuracy than GPT-3 in some cases. The researchers similarly asked the language model to analyze tweets and decide whether they were accurate or not. GPT-3 scored worse than human respondents when it came to identifying accurate tweets. When it came to spotting disinformation, humans and GPT-3 performed similarly.
Crucially, improving the training datasets used to develop language models could make it harder for bad actors to use these tools to churn out disinformation campaigns. GPT-3 “disobeyed” some of the researchers’ prompts to generate inaccurate content, particularly when it came to false information about vaccines and autism. That could be because its training data contained more information debunking conspiracy theories on those topics than on other issues.
The best long-term strategy for countering disinformation, though, according to Spitale, is pretty low-tech: it’s to encourage critical thinking skills so that people are better equipped to discern between facts and fiction. And since ordinary people in the survey already seem to be as good as or better than GPT-3 at judging accuracy, a little training could make them even more skilled at this. People skilled at fact-checking could work alongside language models like GPT-3 to improve legitimate public information campaigns, the study posits.
“Don’t take me wrong, I am a big fan of this technology,” Spitale says. “I think that narrative AIs are going to change the world … and it’s up to us to decide whether or not it’s going to be for the better.”
Trending Spy News
FTC investigating OpenAI on ChatGPT data collection and publication of false information

The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.
First reported by The Washington Post, the FTC sent a 20-page letter to the company this week. The letter requests documents related to developing and training its large language models, as well as data security.
The FTC wants to get detailed information on how OpenAI vets information used in training for its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to its systems and how data is protected when accessed by third parties.
The FTC declined to comment. OpenAI did not immediately respond to requests for comment.
This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race, prompting competitors like Google and Meta to release their own models.
The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms. And in March, the industry watchdog Center for AI and Digital Policy called on the FTC to stop OpenAI from launching new GPT models.
Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that it can occasionally generate incorrect facts, and the first public demo of Google’s chatbot Bard did not inspire confidence in its accuracy. And based on personal experience, both have spit out incredibly flattering, though completely invented, facts about me. Other people have gotten in trouble for using ChatGPT, too. A lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued the company over results that claimed he was accused of embezzlement.
US lawmakers have shown great interest in AI, both in understanding the technology and in possibly enacting regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.
It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.
Trending Spy News
OpenAI will use Associated Press news stories to train its models

OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive as far back as 1985.
As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.
AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed also says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content. OpenAI is also working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” in the company.
Announcing partnership with @AP — we’ll help them thoughtfully explore use-cases for our technology, we’ll work with their content in our systems: https://t.co/3lAqzfCF5P
— Greg Brockman (@gdb) July 13, 2023
“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.
Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”
AP’s partnership with OpenAI seems like a natural next step, but there are still a lot of crucial details missing about how the outlet will use the technology. AP makes it clear it “does not use it in its news stories.”
Trending Spy News
Congress is trying to stop discriminatory algorithms again

US policymakers hope to require online platforms to disclose information about their algorithms and allow the government to intervene if these are found to discriminate based on criteria like race or gender.
Sen. Edward Markey (D-MA) and Rep. Doris Matsui (D-CA) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which aims to ban the use of discriminatory or “harmful” automated decision-making. It would also establish safety standards, require platforms to provide a plain language explanation of algorithms used by websites, publish annual reports on content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.
The bill applies to “online platforms,” meaning any commercial, public-facing website or app that “provides a community forum for user-generated content.” That can include social media sites, content aggregation services, and media and file-sharing sites.
Markey and Matsui introduced a previous version of the bill in 2021. It moved to the Subcommittee on Consumer Protection and Commerce but died in committee.
Data-based decision-making, including social media recommendation algorithms or machine learning systems, often lives in proverbial black boxes. This opacity sometimes exists because of intellectual property concerns or a system’s complexity.
But lawmakers and regulators worry this could obscure biased decision-making with a huge impact on people’s lives, well beyond the reach of the online platforms the bill covers. Insurance companies, including those working with Medicaid patients, already use algorithms to grant or deny patient coverage. Agencies such as the FTC signaled in 2021 that they may pursue legal action against biased algorithms.
Calls for more transparent algorithms have grown over the years. After several scandals in 2018, including the Cambridge Analytica debacle, AI research group AI Now found that governments and companies have no real way to punish organizations that produce discriminatory systems. In a rare move, Facebook and Instagram announced the formation of a group to study potential racial bias in their algorithms.
“Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck,” Markey said in a statement.
Most proposed regulations around AI and algorithms include a push to create more transparency. The European Union’s proposed AI Act, in its final stages of negotiation, also noted the importance of transparency and accountability.