Facebook unveils new controls for kids using its platforms

Facebook, in the aftermath of damning testimony that its platforms harm children, will be introducing several features, including prompting teens to take a break from its photo-sharing app Instagram, and “nudging” teens who repeatedly look at the same content that is not conducive to their well-being.
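Facebook has not said how the “nudge” decides when to fire. As a purely hypothetical sketch, one way such a feature could work is to track the topics of recently viewed posts and trigger once a single topic dominates the session; the window size, threshold, and topic labels below are invented for illustration, not Facebook’s actual logic.

```python
from collections import Counter, deque

# Hypothetical sketch: none of these thresholds or labels come from
# Facebook; they only illustrate how a "repeat content" nudge might work.
WINDOW = 20        # how many recently viewed posts to consider
THRESHOLD = 0.7    # fraction of the window on one topic that triggers a nudge

recent_topics = deque(maxlen=WINDOW)  # rolling window of topic labels

def record_view_and_maybe_nudge(topic: str) -> bool:
    """Record the topic of a viewed post; return True if a nudge should fire."""
    recent_topics.append(topic)
    if len(recent_topics) < WINDOW:
        return False  # not enough history yet
    top_topic, count = Counter(recent_topics).most_common(1)[0]
    return count / WINDOW >= THRESHOLD

# Example: a session dominated by one topic eventually triggers the nudge.
for i in range(25):
    if record_view_and_maybe_nudge("appearance-comparison"):
        print(f"nudge after view {i + 1}: suggest browsing something else")
        break
```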

The Menlo Park, California-based Facebook is also planning to introduce optional controls that let parents or guardians supervise what their teens are doing online. These initiatives come after Facebook announced late last month that it was pausing work on its Instagram for Kids project. But critics say the plan lacks detail, and they are skeptical that the new features would be effective.

The new controls were outlined on Sunday by Nick Clegg, Facebook’s vice president for global affairs, who made the rounds on various Sunday news shows including CNN’s “State of the Union” and ABC’s “This Week with George Stephanopoulos,” where he was grilled about Facebook’s use of algorithms as well as its role in spreading harmful misinformation ahead of the Jan. 6 Capitol riot.

“We are constantly iterating in order to improve our products,” Clegg told Dana Bash on “State of the Union” Sunday. “We cannot, with a wave of the wand, make everyone’s life perfect. What we can do is improve our products, so that our products are as safe and as enjoyable to use.”

Clegg said that Facebook has invested $13 billion over the past few years to keep the platform safe and that the company has 40,000 people working on these issues. And while Clegg said Facebook has done its best to keep harmful content off its platforms, he said he was open to more regulation and oversight.

“We need greater transparency,” he told CNN’s Bash. He noted that the systems that Facebook has in place should be held to account, if necessary, by regulation so that “people can match what our systems say they’re supposed to do from what actually happens.”

The flurry of interviews came after whistleblower Frances Haugen, a former data scientist with Facebook, went before Congress last week to accuse the social media platform of failing to make changes to Instagram after internal research showed apparent harm to some teens and of being dishonest in its public fight against hate and misinformation. Haugen’s accusations were supported by tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit.

Josh Golin, executive director of Fairplay, a watchdog group focused on children and media marketing, said that he doesn’t think introducing controls to help parents supervise teens would be effective, since many teens set up secret accounts anyway. He was also dubious about how effective nudging teens to take a break or move away from harmful content would be. He noted that Facebook needs to show exactly how it would implement these tools and offer research demonstrating that they are effective.

“There is tremendous reason to be skeptical,” he said. He added that regulators need to restrict what Facebook does with its algorithms.

He said he also believes that Facebook should cancel its Instagram for Kids project.

When Clegg was grilled by both Bash and Stephanopoulos in separate interviews about the use of algorithms in amplifying misinformation ahead of the Jan. 6 riot, he responded that if Facebook removed the algorithms, people would see more, not less, hate speech and more, not less, misinformation.

Clegg told both hosts that the algorithms serve as “giant spam filters.”

Democratic Sen. Amy Klobuchar of Minnesota, who chairs the Senate Commerce Subcommittee on Competition Policy, Antitrust, and Consumer Rights, told Bash in a separate interview Sunday that it’s time to update children’s privacy laws and offer more transparency in the use of algorithms.

“I appreciate that he is willing to talk about things, but I believe the time for conversation is done,” said Klobuchar, referring to Clegg’s plan. “The time for action is now.”

Twitter bans sharing of photos without consent

Twitter launched new rules Tuesday blocking users from sharing private images of other people without their consent, in a tightening of the network’s policy just a day after it changed CEOs.

Under the new rules, people who are not public figures can ask Twitter to take down pictures or video of them that they report were posted without permission.

Twitter said this policy does not apply to “public figures or individuals when media and accompanying tweet text are shared in the public interest or add value to public discourse.”

“We will always try to assess the context in which the content is shared and, in such cases, we may allow the images or videos to remain on the service,” the company added.

The right of Internet users to appeal to platforms when images or data about them are posted by third parties, especially for malicious purposes, has been debated for years.

Twitter already prohibited the publication of private information such as a person’s phone number or address, but there are “growing concerns” about the use of content to “harass, intimidate and reveal the identities of individuals,” Twitter said.

The company noted a “disproportionate effect on women, activists, dissidents, and members of minority communities.”

High-profile examples of online harassment include the barrages of racist, sexist and homophobic abuse on Twitch, the world’s biggest video game streaming site.

But instances of harassment abound, and victims must often wage lengthy fights to see hurtful, insulting or illegally produced images of themselves removed from the online platforms.

Some Twitter users pushed the company to clarify exactly how the tightened policy would work.

“Does this mean that if I take a picture of, say, a concert in Central Park, I need the permission of everyone in it? We diminish the sense of the public to the detriment of the public,” tweeted Jeff Jarvis, a journalism professor at the City University of New York.

The change came the day after Twitter co-founder Jack Dorsey announced he was leaving the company and handed CEO duties to its chief technology officer, Parag Agrawal.

The platform, like other social media networks, has struggled against bullying, misinformation and hate-fueled content.

Russia says Twitter mobile slowdown to remain until all banned content is removed, fines Google

Russia will continue slowing down the speed of Twitter on mobile devices until all content deemed illegal is deleted, state communications regulator Roskomnadzor told Reuters, as Moscow continues to make demands of Big Tech.

Russian authorities have recently taken steps to regulate technology giants more closely, imposing small fines for content violations while also seeking to force foreign companies to maintain official representation in Russia and store Russians’ personal data on Russian territory.

Twitter has been subjected to a punitive slowdown in Russia since March for posts containing child pornography, drug abuse information or calls for minors to commit suicide, Roskomnadzor has said.

Twitter, which did not immediately comment on Monday, denies allowing its platform to be used to promote illegal behavior. It says it has a zero-tolerance policy for child sexual exploitation and prohibits the promotion of suicide or self-harm.

Videos and photos are noticeably slower to load on mobile devices, but Roskomnadzor eased speed restrictions on fixed networks in May.

Roskomnadzor said Twitter, which it has fined a total of 38.4 million roubles this year, has systematically ignored requests to remove banned material since 2014, though it has taken down more than 90 percent of illegal posts.

“As of now, 761 undeleted posts remain,” Roskomnadzor said in response to Reuters questions. “The condition for lifting the access restriction on mobile devices is that Twitter completely removes banned materials detected by Roskomnadzor.”

The regulator has said it will seek fines on the annual turnover of Alphabet’s Google and Facebook in Russia for repeated legal violations, threats the two companies did not comment on at the time.

“We also reiterate that the social network Twitter has been repeatedly found guilty by a Russian court of committing administrative offenses,” Roskomnadzor said.

Separately, Russia on Monday fined Alphabet Inc.’s Google 3 million roubles for not deleting content that it deemed illegal, part of a wider dispute between Russia and the US tech giant.

Russia in October threatened to fine Google a percentage of its annual Russian turnover for repeatedly failing to delete banned content on its search engine and YouTube, in Moscow’s strongest move yet to rein in foreign tech firms.

Google, which last month said it had paid more than 32 million roubles in fines, did not immediately respond to a request for comment.

TikTok takes steps to make platform safer for teens

Short-form video app TikTok has released the findings of a report specially commissioned to help better understand young people’s engagement with potentially harmful challenges and hoaxes — pranks or scams created to frighten someone — in a bid to strengthen safety on the platform.

In a statement, the company said that its social networking service had been designed to “advance joy, connection, and inspiration,” but added that fostering an environment where creative expression thrived required that it also prioritized safety for the online community, especially its younger members.

With this in mind, TikTok hired independent safeguarding agency Praesidio Safeguarding to carry out a global survey of more than 10,000 people.

The firm also convened a panel of 12 youth safety experts from around the world to review and provide input into the report, and partnered with Dr. Richard Graham, a clinical child psychiatrist specializing in healthy adolescent development, and Dr. Gretchen Brion-Meisels, a behavioral scientist focused on risk prevention in adolescence, to advise it and contribute to the study.

The report found that there was a high level of exposure to online challenges, and teenagers were quite likely to come across all kinds of online challenges in their day-to-day lives.

Social media was seen to play the biggest role in generating awareness of these challenges, but the influence of traditional media was also significant.

When teens were asked to describe a recent online challenge, 48 percent of the challenges described were considered safe, 32 percent included some risk but were still regarded as safe, 14 percent were viewed as risky and dangerous, and 3 percent were described as very dangerous. Only 0.3 percent of the teenagers quizzed said they had taken part in a challenge they thought was really dangerous.

Meanwhile, 46 percent said they wanted “good information on risks more widely” along with “information on what is too far.” Receiving good information on risks was also ranked as a top preventative strategy by parents (43 percent) and teachers (42 percent).

Earlier this year, AFP reported that a Pakistani teenager died while pretending to kill himself as his friends recorded a TikTok video. In January, another Pakistani teenager was killed after being hit by a train, and last year a security guard died playing with his rifle while making a clip.

Such videos were categorized in the report as “suicide and self-harm hoaxes” where the intention had been to show something fake and trick people into believing that it was true.

Not only could challenges go horribly wrong, as evidenced by the Pakistan cases, but they could also spread fear and panic among viewers. Internet hoaxes were shown to have had a negative impact on 31 percent of teens, and of those, 63 percent said it was their mental health that had been affected.

Based on the findings of the report, TikTok said it was strengthening protections on the platform by removing alarmist hoax warning videos. The research indicated that warnings about self-harm hoaxes could themselves affect the well-being of young people, as viewers often treated the hoax as real. As a result, the company planned to remove alarmist warnings while allowing conversation that dispels panic and promotes accurate information.

Despite already having safety policies in place, the firm was now working to expand enforcement measures. The platform has created technology that alerts safety teams to sudden increases in violating content linked to hashtags, and it has now expanded that system to capture potentially dangerous behavior.
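TikTok has not published how that alerting technology works. As a purely illustrative sketch, one simple approach would be to compare a hashtag’s latest hourly count of violation reports against its recent baseline and flag sudden spikes; the counts, the 3-sigma threshold, and the function below are assumptions for the example, not TikTok’s actual system.

```python
import statistics

# Hypothetical sketch of spike detection on per-hashtag report volumes.
# The data and the 3-sigma rule are assumptions for illustration only.
def is_spiking(hourly_report_counts: list[int], sigmas: float = 3.0) -> bool:
    """Flag a hashtag if the latest hour is far above its recent baseline."""
    *history, latest = hourly_report_counts
    if len(history) < 2:
        return False  # not enough baseline to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against flat baselines
    return (latest - mean) / stdev >= sigmas

# Example: a hashtag whose violation reports jump from ~5/hour to 40.
counts = [4, 6, 5, 5, 7, 4, 40]
if is_spiking(counts):
    print("alert safety team: sudden increase in violating content")
```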

TikTok also intends to build on its Safety Center by providing new resources, such as a section dedicated to online challenges and hoaxes, and by improving its warning labels to redirect users to appropriate resources when they search for content related to harmful challenges or hoaxes.

The company said the report was the first step in making “a thoughtful contribution to the safety and safeguarding of families online,” adding that it would “continue to explore and implement additional measures on behalf of the community.”
