Meta’s Nick Clegg on how AI is reshaping the feed
This is Platformer, a newsletter on the intersection of Silicon Valley and democracy from Casey Newton and Zoë Schiffer. Sign up here.
Last July, as the Instagram feed began to fill up with recommended posts, the company was thrown briefly into crisis. The once-familiar landscape of friends, family, and influencers you had chosen to follow was being displaced by algorithmic guesses. “Make Instagram Instagram again,” Kylie Jenner posted. Many viral tweets followed in the same vein.
“When you discover something in your feed that you didn’t follow before, there should be a high bar — it should just be great,” Instagram chief Adam Mosseri told me at the time. “You should be delighted to see it. And I don’t think that’s happening enough right now. So I think we need to take a step back, in terms of the percentage of feed that are recommendations, get better at ranking and recommendations, and then — if and when we do — we can start to grow again.”
Mosseri told me he was confident Instagram would get there. And indeed, as I scroll through the app today, what the company calls “unconnected content” — posts from people you don’t follow — has once again roared to the forefront. After I watched a few Reels from one popular comedian that a friend had sent me, my Instagram feed quickly filled up with the Reels of his I hadn’t watched yet.
As a longtime Instagram user, I still find all this somewhat jarring. But while recommendations are more prevalent than ever in the app, there’s no hint of the uproar that consumed Instagram last summer. In part that’s because the recommendations really are better than they were a year ago; in part that’s because the trend that precipitated all this — increasing consumer demand for short-form video — continues to accelerate.
And in part, of course, it’s that changes like these eventually wear us down. What once felt weird and bad now feels, through sheer force of repetition, mostly normal.
But while the transition from Facebook’s old friends-and-family-dominated feeds to Meta’s algorithmic wonderland seems to be proceeding mostly without incident, the move has given the company a new policy and communications challenge. If you’re going to recommend posts for people to look at, you have to know why you’re making those recommendations.
Without a thorough understanding of how the company’s many interconnected systems are promoting content, you can wind up promoting all sorts of harms. And even if you don’t, an app’s users will have a lot of questions about what they’re seeing. What exactly do you know about them — or think you know about them? Why are they seeing this instead of that?
To some extent, of course, that’s not a new problem. Facebook and Twitter have long faced questions over why they promoted posts from some users and not others. But in a world where users were choosing what to follow, the questions essentially boiled down to what order the company’s ranking systems placed posts in. Now that the posts in your feed can come from anywhere, it all gets much more confusing.
“One of the biggest problems we have is because that interaction is invisible to the naked eye, it’s pretty difficult to explain to the layperson,” Nick Clegg, Meta’s president of global affairs, told me in an interview. “Of course, what fills that vacuum is the worst fears and the worst suspicions.”
That leads us to Meta’s move this week to publish 22 “system cards” outlining why you’re seeing what you’re seeing in the company’s feeds. Written to be accessible to a layperson, the cards explain how Meta sources photos and videos to show you, name some of the signals it uses to make predictions, and describe how it ranks posts in the feed from there.
In addition to publishing the cards, which most users probably won’t see, the company is bringing its “Why am I seeing this?” feature to Reels on Facebook and Instagram’s explore page. The idea is to give individual users the sense that they are the ones shaping their experiences on these apps, creating them indirectly through what they like, share, and comment on. If it works, it could reduce the anxiety people have about Meta’s role in shaping their feeds.
“I think if we could dispel some of the mythology around that, it would be a very significant step forward,” Clegg said.
Of course, that depends in part on how the information in these system cards is received. While little in them seems likely to surprise anyone who has spent much time on social media, seeing it all in black and white could fuel new critiques of Meta, particularly if you’re the sort of person who worries that social apps are engineered to be addictive.
The card for Instagram’s feed, for example, says the signals Meta takes into account when deciding what to show you include “How likely you are to spend more than 15 seconds in this session,” “How long you are predicted to spend viewing the next two posts that appear after the one you are currently viewing,” and “How long you are predicted to spend viewing content in your feed below what is displayed in the top position.”
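To make that concrete, here is a minimal sketch of how predicted-engagement signals like these could feed a ranking score. It is purely illustrative: the signal names, weights, and simple linear scoring are my assumptions based on the card’s plain-language descriptions, not Meta’s actual system, which relies on far larger learned models.

```python
# Illustrative sketch only; signal names and weights are hypothetical
# stand-ins for the descriptions in Instagram's system card.
from dataclasses import dataclass

@dataclass
class PostPredictions:
    p_stay_15s: float           # predicted probability the session lasts >15 seconds
    next_two_posts_secs: float  # predicted seconds spent on the next two posts
    below_fold_secs: float      # predicted seconds spent on content below this position

# Hand-picked weights for illustration; a production system would learn these.
WEIGHTS = {"p_stay_15s": 2.0, "next_two_posts_secs": 0.1, "below_fold_secs": 0.05}

def score(post: PostPredictions) -> float:
    """Collapse several engagement predictions into one ranking score."""
    return (WEIGHTS["p_stay_15s"] * post.p_stay_15s
            + WEIGHTS["next_two_posts_secs"] * post.next_two_posts_secs
            + WEIGHTS["below_fold_secs"] * post.below_fold_secs)

candidates = [PostPredictions(0.9, 20.0, 45.0), PostPredictions(0.6, 35.0, 30.0)]
feed = sorted(candidates, key=score, reverse=True)  # highest predicted engagement first
```

Even in a toy like this, the point the card is making is visible: every input is a prediction about how long you will keep scrolling.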
The system cards, in other words, lay out how Meta works to get you to use its apps for long periods of time. To the extent that this dispels any mythology about the company, I wonder how useful it is to Meta.
Clegg told me that ranking content based on likely engagement isn’t much different from newspapers or book authors choosing stories that readers will likely enjoy. “I know that for some people ‘engagement’ is a dirty word,” he said. “I think it’s actually a lot more nuanced than that.”
Meta also uses “slower time signals,” he said, measuring people’s satisfaction with the app overall rather than just individual posts, and it regularly surveys users about their feelings. That all gets fed back into the product design too, he said.
“I don’t think it’s fair to say that all we’re trying to do is just to keep people doomscrolling forever,” he said. “We have no incentive — you’re just simply not going to retain people over time if that’s what you’re trying to solve for. And these system cards, by the way, would look quite different if that’s what we were trying to solve for.”
Potentially even more useful is another new feature the company is testing, which will let users mark that they are “interested” in a Reel the company showed them, essentially giving an explicit endorsement to a recommended video. As the rare person who feels like the TikTok feed has never quite figured out what I really want to see there, I’m interested to see whether asking people more directly for feedback like this will lead to better feeds.
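As a rough illustration of why explicit feedback appeals, here is a hedged sketch of one way an “interested” mark could be folded into recommendation scoring. The topic labels and boost factor are hypothetical; Meta has not said how it weights the signal.

```python
# Hypothetical sketch; Meta has not disclosed how "interested" marks are weighted.

def adjusted_score(base_score: float, topic: str,
                   interested_topics: set[str], boost: float = 1.5) -> float:
    """Boost candidates on topics the user explicitly marked "interested"."""
    # A deliberate tap is a stronger signal than inferred engagement,
    # so it scales the base score rather than merely nudging it.
    return base_score * boost if topic in interested_topics else base_score

interested = {"standup-comedy"}
print(adjusted_score(0.42, "standup-comedy", interested))  # boosted (0.42 * 1.5)
print(adjusted_score(0.42, "cooking", interested))         # unchanged (0.42)
```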
Speaking of TikTok, that company took its own crack at transparency by opening its algorithmic transparency centers, which are designed to offer visitors an in-person look at systems that are in many ways quite similar to the ones Meta is describing with its new system cards. And given the difficult position TikTok is in with the US government, it’s fair to ask how much goodwill companies can actually generate with efforts like these.
One possibility is that publishing detailed explanations of ranking systems does buy goodwill but ultimately can’t address questions about potential interference from the Chinese government. For all its own issues, that’s one problem that Meta, as an American company, doesn’t have.
The other possibility, though, is that transparency represents an effort to solve the wrong problem. In this view, it’s not that we don’t understand the contents of our feeds — it’s that we mostly know how these systems work, and we don’t like it.
On balance, though, I’ll take transparency every time, if only because it’s difficult to build a better future when you barely understand the present. And on that front, I was heartened to see that Meta is expanding the work it’s doing with academic researchers. The company also announced this week that it’s making a library of public posts, pages, groups, and events on Facebook available to qualified research institutions through an application process. The company says doing this will help it meet its obligations under Europe’s new Digital Services Act — one of the first concrete benefits we can expect to see from that law.
“Generally speaking, we believe that as these technologies are developed, companies should be more open about how their systems work and collaborate openly across industry, government and civil society to help ensure they are developed responsibly,” Clegg wrote in his blog post today. And for once, Meta has adopted a position that almost no one could disagree with.
FTC investigating OpenAI on ChatGPT data collection and publication of false information
The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.
The FTC sent a 20-page letter to the company this week, as first reported by The Washington Post. The letter requests documents on how the company develops and trains its large language models, as well as on data security.
The FTC wants detailed information on how OpenAI vets the information used to train its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to OpenAI’s systems and how data is protected when accessed by third parties.
The FTC declined to comment. OpenAI did not immediately respond to requests for comment.
This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race, prompting competitors like Google and Meta to release their own models.
The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms. And in March, the industry watchdog Center for AI and Digital Policy called on the FTC to stop OpenAI from launching new GPT models.
Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that the chatbot can occasionally generate incorrect facts, and Google’s chatbot Bard did not inspire confidence in its accuracy during its first public demo. Based on personal experience, both have spit out incredibly flattering, though completely invented, facts about me. Others have gotten into real trouble over such output: a lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued OpenAI over ChatGPT results that claimed he was accused of embezzlement.
US lawmakers have shown great interest in AI, both in understanding the technology and in possibly enacting regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.
It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.
OpenAI will use Associated Press news stories to train its models
OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive as far back as 1985.
As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.
AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed also says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content. OpenAI is also working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” into the company.
Announcing partnership with @AP — we’ll help them thoughtfully explore use-cases for our technology, we’ll work with their content in our systems: https://t.co/3lAqzfCF5P
— Greg Brockman (@gdb) July 13, 2023
“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.
Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”
AP’s partnership with OpenAI seems like a natural next step, but there are still a lot of crucial details missing about how the outlet will use the technology. AP makes it clear that it “does not use [generative AI] in its news stories.”
Congress is trying to stop discriminatory algorithms again
US policymakers hope to require online platforms to disclose information about their algorithms and allow the government to intervene if these are found to discriminate based on criteria like race or gender.
Sen. Edward Markey (D-MA) and Rep. Doris Matsui (D-CA) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which aims to ban the use of discriminatory or “harmful” automated decision-making. It would also establish safety standards, require platforms to provide plain-language explanations of the algorithms they use and to publish annual reports on their content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.
The bill applies to “online platforms,” meaning any commercial, public-facing website or app that “provides a community forum for user-generated content.” That can include social media sites, content aggregation services, or media and file-sharing sites.
Markey and Matsui introduced a previous version of the bill in 2021. It moved to the Subcommittee on Consumer Protection and Commerce but died in committee.
Data-based decision-making, including social media recommendation algorithms or machine learning systems, often lives in proverbial black boxes. This opacity sometimes exists because of intellectual property concerns or a system’s complexity.
But lawmakers and regulators worry this opacity could obscure biased decision-making with a huge impact on people’s lives, a problem that extends well beyond the online platforms the bill covers. Insurance companies, including those working with Medicaid patients, already use algorithms to grant or deny patient coverage. Agencies such as the FTC signaled in 2021 that they may pursue legal action against biased algorithms.
Calls to make algorithms more transparent have grown over the years. After several scandals in 2018, including the Cambridge Analytica debacle, AI research group AI Now found that governments and companies lack a way to punish organizations that produce discriminatory systems. In a rare move, Facebook and Instagram announced the formation of a group to study potential racial bias in their algorithms.
“Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck,” Markey said in a statement.
Most proposed regulations around AI and algorithms include a push for more transparency. The European Union’s proposed AI Act, now in its final stages of negotiation, likewise stresses the importance of transparency and accountability.