
Trending Spy News

Elon Musk blames data scraping by AI startups for his new paywalls on reading tweets


[Image: Elon Musk shrugging on a background with the Twitter logo. Illustration by Kristen Radtke / The Verge; Getty Images]

Elon Musk continues to blame Twitter’s new limitations on AI companies scraping “vast amounts of data” as he announced new “temporary” limits on how many posts people can read.

Now unverified accounts will only be able to read 600 posts per day, and “new” unverified accounts just 300. Verified accounts (presumably whether verification was bought as part of the Twitter Blue subscription, granted through an organization, or forced on people like Stephen King, LeBron James, and anyone else with more than a million followers) can read a maximum of 6,000 posts per day.

Shortly after that, Musk tweeted that the rate limits would “soon” increase to 8,000 tweets for verified users, 800 for unverified, and 400 for new unverified accounts.
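Twitter has not published how these limits are enforced, but the tiered caps described above amount to a simple per-account daily counter. As a minimal sketch, assuming the updated 8,000/800/400 figures and invented names (`TIER_LIMITS`, `ReadLimiter`), the policy might look like:

```python
from dataclasses import dataclass

# Hypothetical illustration of the tiered read limits described above;
# Twitter's actual enforcement mechanism is not public.
TIER_LIMITS = {
    "verified": 8000,
    "unverified": 800,
    "new_unverified": 400,
}

@dataclass
class ReadLimiter:
    tier: str
    reads_today: int = 0  # would reset once per day in a real system

    def can_read(self) -> bool:
        """True while the account is under its daily cap."""
        return self.reads_today < TIER_LIMITS[self.tier]

    def record_read(self) -> None:
        """Count one post view, refusing once the cap is hit."""
        if not self.can_read():
            raise RuntimeError(f"Daily read limit reached for tier {self.tier!r}")
        self.reads_today += 1
```

Under this sketch, a new unverified account would be cut off after its 400th post of the day, while a verified account could keep reading up to 8,000.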


The limitations arrived one day after Twitter suddenly started blocking access for anyone who isn’t logged in, which Musk claimed was necessary because “Several hundred organizations (maybe more) were scraping Twitter data extremely aggressively, to the point where it was affecting the real user experience.”

The change is just one of several ways Musk has tried to monetize Twitter in the last several months. The company announced a three-tier API change in March that would begin charging for the use of its API, just three months after finally rolling out the revamped $8 per month Twitter Blue pay-for-verification scheme. Musk has also replaced himself with a new CEO, Linda Yaccarino. The former ad exec from NBC Universal has been hired to restore relationships with advertisers that had slashed their spending on Twitter.

As a private company, Twitter discloses less about its financial situation than it did before Musk’s purchase, but the hiring of Yaccarino reflected how important advertising revenue is to the business. Limiting access to the site cuts directly against the goal of serving the ad impressions companies are paying for, but Musk’s monopoly-minded view of Twitter may be obscuring that.

Musk is blaming companies that ingest data to train large language models (LLMs) like the ones behind ChatGPT, Microsoft Bing, and Google Bard.

But he didn’t mention his decision to lay off more than half of Twitter’s staff since taking over the company last fall, including people critical to maintaining its infrastructure. The haphazard layoffs meant the company even had to rehire some engineers who had been let go, and people have repeatedly warned that firing so many people would affect Twitter’s stability.

A significant outage in March was the result of a change by a single engineer. Platformer reported Twitter’s Google Cloud bill went unpaid for months until very recently, reflecting a “Deep Cuts Plan” Reuters had previously reported that sought to cut millions of dollars per day in spending on infrastructure costs.

Last November, an unnamed Twitter engineer interviewed by MIT Technology Review said that after the staff reductions, “Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways… They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.” In the same article, site reliability engineer Ben Kreuger said, “I would expect to start seeing significant public-facing problems with the technology within six months.” It has been seven.

Correction July 1st, 2023 4:55PM ET: A previous version of this story mentioned a response to Mr. Beast as being from Elon Musk himself. In fact, it was from an Elon Musk parody account. It has been removed. We regret the error.



FTC investigating OpenAI on ChatGPT data collection and publication of false information


[Photo: OpenAI CEO Samuel Altman testifies to a Senate committee on rules for artificial intelligence. Win McNamee / Getty Images]

The Federal Trade Commission (FTC) is investigating ChatGPT creator OpenAI over possible consumer harm through its data collection and the publication of false information.

First reported by The Washington Post, the FTC sent a 20-page letter to the company this week. The letter requests documents related to developing and training its large language models, as well as data security.

The FTC wants to get detailed information on how OpenAI vets information used in training for its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to its systems and how data is protected when accessed by third parties.

The FTC declined to comment. OpenAI did not immediately respond to requests for comment.

This is the first major US investigation into OpenAI, which burst into the public consciousness over the past year with the release of ChatGPT. The popularity of ChatGPT and the large language models that power it kicked off an AI arms race prompting competitors like Google and Meta to release their own models.

The FTC has signaled increased regulatory oversight of AI before. In 2021, the agency warned companies against using biased algorithms. Industry watchdog Center for AI and Digital Policy also called on the FTC to stop OpenAI from launching new GPT models in March.

Large language models can put out factually inaccurate information. OpenAI warns ChatGPT users that it can occasionally generate incorrect facts, and the first public demo of Google’s chatbot Bard did not inspire confidence in its accuracy. In my own experience, both have spit out incredibly flattering, though completely invented, facts about me. Others have gotten into real trouble for using ChatGPT: a lawyer was sanctioned for submitting fake cases created by ChatGPT, and a Georgia radio host sued the company over results claiming he had been accused of embezzlement.

US lawmakers have shown great interest in AI, both in understanding the technology and in weighing possible regulations around it. The Biden administration released a plan to provide a responsible framework for AI development, including a $140 million investment to launch research centers. Supreme Court Justice Neil Gorsuch also discussed chatbots’ potential legal liability earlier this year.

It is in this environment that AI leaders like OpenAI CEO Sam Altman have made the rounds in Washington. Altman lobbied Congress to create regulations around AI.



OpenAI will use Associated Press news stories to train its models


[Illustration: A cartoon brain with a computer chip imposed on top. Alex Castro / The Verge]

OpenAI will train its AI models on The Associated Press’ news stories for the next two years, thanks to an agreement first reported by Axios. The deal between the two companies will give OpenAI access to some of the content in AP’s archive as far back as 1985.

As part of the agreement, AP will gain access to OpenAI’s “technology and product expertise,” although it’s not clear exactly what that entails. AP has long been exploring AI features and began generating reports about company earnings in 2014. It later leveraged the technology to automate stories about Minor League Baseball and college sports.

AP joins OpenAI’s growing list of partners. On Tuesday, the AI company announced a six-year deal with Shutterstock that will let OpenAI license images, videos, music, and metadata to train its text-to-image model, DALL-E. BuzzFeed also says it will use AI tools provided by OpenAI to “enhance” and “personalize” its content. OpenAI is also working with Microsoft on a number of AI-powered products as part of Microsoft’s partnership and “multibillion dollar investment” into the company.

“The AP continues to be an industry leader in the use of AI; their feedback — along with access to their high-quality, factual text archive — will help to improve the capabilities and usefulness of OpenAI’s systems,” Brad Lightcap, OpenAI’s chief operating officer, says in a statement.

Earlier this year, AP announced AI-powered projects that will publish Spanish-language news alerts and document public safety incidents in a Minnesota newspaper. The outlet also launched an AI search tool that’s supposed to make it easier for news partners to find photos and videos in its library based on “descriptive language.”

AP’s partnership with OpenAI seems like a natural next step, but there are still a lot of crucial details missing about how the outlet will use the technology. AP makes it clear it “does not use it in its news stories.”




Congress is trying to stop discriminatory algorithms again


[Photo: A person with their hand hovering over the Like button on Facebook. Amelia Holowaty Krales / The Verge]

US policymakers hope to require online platforms to disclose information about their algorithms and allow the government to intervene if these are found to discriminate based on criteria like race or gender.

Sen. Edward Markey (D-MA) and Rep. Doris Matsui (D-CA) reintroduced the Algorithmic Justice and Online Platform Transparency Act, which aims to ban the use of discriminatory or “harmful” automated decision-making. It would also establish safety standards, require platforms to provide a plain language explanation of algorithms used by websites, publish annual reports on content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.

The bill applies to “online platforms” or any commercial, public-facing website or app that “provides a community forum for user-generated content.” This can include social media sites, content aggregation services, or media and file-sharing sites.

Markey and Matsui introduced a previous version of the bill in 2021. It moved to the Subcommittee on Consumer Protection and Commerce but died in committee.

Data-based decision-making, including social media recommendation algorithms or machine learning systems, often lives in proverbial black boxes. This opacity sometimes exists because of intellectual property concerns or a system’s complexity.

But lawmakers and regulators worry this could obscure biased decision-making with a huge impact on people’s lives, well beyond the reach of the online platforms the bill covers. Insurance companies, including those working with Medicaid patients, already use algorithms to grant or deny patient coverage. Agencies such as the FTC signaled in 2021 that they may pursue legal action against biased algorithms.

Calls for more transparent algorithms have grown over the years. After several scandals in 2018, including the Cambridge Analytica debacle, AI research group AI Now found that governments and companies lack a way to punish organizations that produce discriminatory systems. In a rare move, Facebook and Instagram announced the formation of a group to study potential racial bias in their algorithms.

“Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck,” Markey said in a statement.

Most proposed regulations around AI and algorithms include a push to create more transparency. The European Union’s proposed AI Act, in its final stages of negotiation, also noted the importance of transparency and accountability.

