Meta explains how AI influences what we see on Facebook and Instagram

Meta’s President of Global Affairs Nick Clegg affirms the company’s commitment to transparency. | Kristen Radtke / The Verge

Meta has published a deep dive into the company’s social media algorithms in a bid to demystify how content is recommended for Instagram and Facebook users. In a blog post published on Thursday, Meta’s President of Global Affairs Nick Clegg said that the info dump on the AI systems behind its algorithms is part of the company’s “wider ethos of openness, transparency, and accountability,” and outlined what Facebook and Instagram users can do to better control what content they see on the platforms.

“With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg said in the blog. “We believe that the best way to respond to those concerns is with openness.”

Most of the information is contained within 22 “system cards” that cover the Feed, Stories, Reels, and other ways people discover and consume content on Meta’s social media platforms. Each card provides detailed yet approachable information about how the AI systems behind these features rank and recommend content. For example, the overview of Instagram Explore — a feature that shows users photos and reels from accounts they don’t follow — explains the three-step process behind the automated recommendation engine (approximated in the code sketch after the list below).

  1. Gather Inventory: the system gathers public Instagram content, such as photos and reels, that abides by the company’s quality and integrity rules.
  2. Leverage Signals: the system then considers how the user has engaged with similar content or interests, engagement data known as “input signals.”
  3. Rank Content: finally, the system ranks the content from the previous step, placing content it predicts will be of greater interest to the user higher in the Explore tab.
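
In rough outline, that gather/signal/rank pipeline looks something like the following. This is a minimal, hypothetical sketch: the Post class, the single topic-affinity signal, and the integrity flag are illustrative stand-ins, not Meta’s actual models or data.

```python
# Toy sketch of the three-step Explore-style flow described above.
# All names here are hypothetical illustrations, not Meta's implementation.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    passes_integrity_rules: bool  # result of quality/integrity screening

def gather_inventory(public_posts: list[Post]) -> list[Post]:
    """Step 1: keep only public content that abides by quality/integrity rules."""
    return [p for p in public_posts if p.passes_integrity_rules]

def predicted_interest(post: Post, topic_affinity: dict[str, float]) -> float:
    """Step 2: score a post from "input signals" -- here, one toy signal:
    how strongly the user has engaged with the post's topic before."""
    return topic_affinity.get(post.topic, 0.0)

def rank_content(inventory: list[Post],
                 topic_affinity: dict[str, float],
                 not_interested: set[str]) -> list[Post]:
    """Step 3: drop topics the user marked "not interested", then rank the
    rest so content with higher predicted interest appears first."""
    candidates = [p for p in inventory if p.topic not in not_interested]
    return sorted(candidates,
                  key=lambda p: predicted_interest(p, topic_affinity),
                  reverse=True)

posts = [
    Post("a", "travel", True),
    Post("b", "cooking", True),
    Post("c", "spam", False),  # fails integrity screening; dropped in step 1
]
affinity = {"cooking": 0.9, "travel": 0.4}  # stand-in engagement signals
for post in rank_content(gather_inventory(posts), affinity, not_interested=set()):
    print(post.post_id, post.topic)  # prints "b cooking" then "a travel"
```

The “not interested” filter in step 3 corresponds to the user controls described next: user feedback feeds back into what the ranking system is allowed to surface.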

The card says that Instagram users can influence this process by saving content (indicating that the system should show them similar content) or by marking it as “not interested” to encourage the system to filter out similar content in the future. Users can also see reels and photos that haven’t been specifically selected for them by the algorithm by choosing “Not personalized” in the Explore filter. More information about Meta’s predictive AI models, the input signals used to direct them, and how frequently they’re used to rank content is available via Meta’s Transparency Center.

Alongside the system cards, the blog post mentions a few other Instagram and Facebook features that can inform users why they’re seeing certain content, and how they can tailor their recommendations. Meta is expanding the “Why Am I Seeing This?” feature to Facebook Reels, Instagram Reels, and Instagram’s Explore tab in “the coming weeks.” This will allow users to click on an individual reel to find out how their previous activity may have influenced the system to show it to them. Instagram is also testing a new Reels feature that will allow users to mark recommended reels as “Interested” to see similar content in the future. The ability to mark content as “Not Interested” has been available since 2021.

Meta also announced that it will begin rolling out its Content Library and API, a new suite of tools for researchers, in the coming weeks. The library will contain a large collection of public data from Instagram and Facebook that can be searched, explored, and filtered, and researchers will be able to apply for access to the tools through approved partners, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research. Meta claims these tools will provide “the most comprehensive access to publicly-available content across Facebook and Instagram of any research tool we have built to date” while also helping the company meet its data-sharing and transparency compliance obligations.

Those transparency obligations are potentially the largest factor driving Meta’s decision to better explain how it uses AI to shape the content we see and interact with. The explosive development of AI technology and its surging popularity in recent months have drawn attention from regulators around the world, who have expressed concern about how these systems collect, manage, and use our personal data. Meta’s algorithms aren’t new, but the way the company mismanaged user data during the Cambridge Analytica scandal is likely a motivating reminder to overcommunicate.


FTC investigating OpenAI on ChatGPT data collection and publication of false information

OpenAI CEO Sam Altman testifies to a Senate committee on rules for artificial intelligence. | Photo by Win McNamee / Getty Images

The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.

As first reported by The Washington Post, the FTC sent the company a 20-page letter this week requesting documents related to the development and training of its large language models, as well as its data security practices.

The FTC wants detailed information on how OpenAI vets the information used to train its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to OpenAI’s systems and how data is protected when accessed by third parties.

The FTC declined to comment. OpenAI did not immediately respond to requests for comment.

This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race, prompting competitors like Google and Meta to release their own models.

The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms, and in March, industry watchdog the Center for AI and Digital Policy called on the FTC to stop OpenAI from launching new GPT models.

Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that the chatbot can occasionally generate incorrect facts, and the first public demo of Google’s chatbot Bard did not inspire confidence in its accuracy. (Based on personal experience, both have spit out incredibly flattering, though completely invented, facts about me.) Others have gotten in trouble for using ChatGPT: a lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued the company over results that falsely claimed he had been accused of embezzlement.

US lawmakers have shown great interest in AI, both in understanding the technology and in possibly enacting regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.

It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.


OpenAI will use Associated Press news stories to train its models

Illustration by Alex Castro / The Verge

OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive, going back as far as 1985.

As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.

AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed also says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content. OpenAI is also working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” in the company.

“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.

Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”

AP’s partnership with OpenAI seems like a natural next step, but a lot of crucial details are still missing about how the outlet will use the technology. For now, AP makes it clear that it “does not use it in its news stories.”



Congress is trying to stop discriminatory algorithms again

A person hovers over the Like button on Facebook. | Photo by Amelia Holowaty Krales / The Verge

US policymakers hope to require online platforms to disclose information about their algorithms and to allow the government to intervene if those algorithms are found to discriminate based on criteria like race or gender.

Sen. Edward Markey (D-MA) and Rep. Doris Matsui (D-CA) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which aims to ban the use of discriminatory or “harmful” automated decision-making. It would also establish safety standards, require platforms to provide plain-language explanations of the algorithms they use and to publish annual reports on their content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.

The bill applies to “online platforms,” meaning any commercial, public-facing website or app that “provides a community forum for user-generated content.” That can include social media sites, content aggregation services, and media and file-sharing sites.

Markey and Matsui introduced a previous version of the bill in 2021. It moved to the Subcommittee on Consumer Protection and Commerce but died in committee.

Data-based decision-making, including social media recommendation algorithms or machine learning systems, often lives in proverbial black boxes. This opacity sometimes exists because of intellectual property concerns or a system’s complexity.

But lawmakers and regulators worry this opacity could obscure biased decision-making with a huge impact on people’s lives, extending well beyond the online platforms the bill covers. Insurance companies, including those working with Medicaid patients, already use algorithms to grant or deny patient coverage. Agencies such as the FTC signaled in 2021 that they might pursue legal action against biased algorithms.

Calls to make algorithms more transparent have grown over the years. After several scandals in 2018 — which included the Cambridge Analytica debacle — AI research group AI Now found that governments and companies lack ways to punish organizations that produce discriminatory systems. In a rare move, Facebook and Instagram announced the formation of a group to study potential racial bias in their algorithms.

“Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck,” Markey said in a statement.

Most proposed regulations around AI and algorithms include a push for more transparency. The European Union’s proposed AI Act, now in its final stages of negotiation, also notes the importance of transparency and accountability.
