Microsoft shutting down LinkedIn in China

Microsoft is shutting down its LinkedIn service in China later this year after Beijing tightened Internet rules, making the company the latest American tech giant to scale back its ties to the country.

The company said in a blog post Thursday it has faced a “significantly more challenging operating environment and greater compliance requirements in China.”

LinkedIn will replace its localized platform in China with a new app called InJobs that has some of LinkedIn’s career-networking features but “will not include a social feed or the ability to share posts or articles.”

LinkedIn said in March that it would pause new member sign-ups in China because of unspecified regulatory issues. In May, China’s Internet watchdog said it had found that LinkedIn, Microsoft’s Bing search engine and about 100 other apps were improperly collecting and using data, and it ordered them to fix the problem.

Several scholars this year also reported getting warning letters from LinkedIn that they were sharing “prohibited content” that would not be made viewable in China but could still be seen by LinkedIn users elsewhere.

In 2014, LinkedIn launched a site in simplified Chinese, the written characters used on the mainland, to expand its reach in the country. It said at the time that expanding in China raised “difficult questions” because it would be required to censor content, but pledged to be clear about how it conducted business in China and to undertake “extensive measures” to protect members’ rights and data.

Microsoft bought LinkedIn in 2016.

Google pulled its search engine out of mainland China in 2010 after the government began censoring search results and videos on YouTube.

Twitter bans sharing of photos without consent

Twitter launched new rules Tuesday blocking users from sharing private images of other people without their consent, in a tightening of the network’s policy just a day after it changed CEOs.

Under the new rules, people who are not public figures can ask Twitter to take down pictures or video of them that they report were posted without permission.

Twitter said this policy does not apply to “public figures or individuals when media and accompanying tweet text are shared in the public interest or add value to public discourse.”

“We will always try to assess the context in which the content is shared and, in such cases, we may allow the images or videos to remain on the service,” the company added.

The right of Internet users to appeal to platforms when images or data about them are posted by third parties, especially for malicious purposes, has been debated for years.

Twitter already prohibited the publication of private information such as a person’s phone number or address, but there are “growing concerns” about the use of content to “harass, intimidate and reveal the identities of individuals,” Twitter said.

The company noted a “disproportionate effect on women, activists, dissidents, and members of minority communities.”

High-profile examples of online harassment include the barrages of racist, sexist and homophobic abuse on Twitch, the world’s biggest video game streaming site.

But instances of harassment abound, and victims must often wage lengthy fights to see hurtful, insulting or illegally produced images of themselves removed from the online platforms.

Some Twitter users pushed the company to clarify exactly how the tightened policy would work.

“Does this mean that if I take a picture of, say, a concert in Central Park, I need the permission of everyone in it? We diminish the sense of the public to the detriment of the public,” tweeted Jeff Jarvis, a journalism professor at the City University of New York.

The change came the day after Twitter co-founder Jack Dorsey announced he was leaving the company and handing CEO duties to company executive Parag Agrawal.

The platform, like other social media networks, has struggled against bullying, misinformation and hate-fueled content.

Russia says Twitter mobile slowdown to remain until all banned content is removed, fines Google

Russia will continue slowing down the speed of Twitter on mobile devices until all content deemed illegal is deleted, state communications regulator Roskomnadzor told Reuters, as Moscow continues to make demands of Big Tech.

Russian authorities have taken steps recently to regulate technology giants more closely by imposing small fines for content violations, while also seeking to force foreign companies to have official representation in Russia and store Russians’ personal data on its territory.

Twitter has been subjected to a punitive slowdown in Russia since March for posts containing child pornography, drug abuse information or calls for minors to commit suicide, Roskomnadzor has said.

Twitter, which did not immediately comment on Monday, denies allowing its platform to be used to promote illegal behavior. It says it has a zero-tolerance policy for child sexual exploitation and prohibits the promotion of suicide or self-harm.

Videos and photos are noticeably slower to load on mobile devices, but Roskomnadzor eased speed restrictions on fixed networks in May.

Roskomnadzor said Twitter, which it has fined a total of 38.4 million roubles this year, has systematically ignored requests to remove banned material since 2014, though the platform has taken down more than 90 percent of the posts flagged as illegal.

“As of now, 761 undeleted posts remain,” Roskomnadzor said in response to Reuters questions. “The condition for lifting the access restriction on mobile devices is that Twitter completely removes banned materials detected by Roskomnadzor.”

The regulator has said it will seek fines on the annual turnover of Alphabet’s Google and Facebook in Russia for repeated legal violations, threats the two companies did not comment on at the time.

“We also reiterate that the social network Twitter has been repeatedly found guilty by a Russian court of committing administrative offenses,” Roskomnadzor said.

Russia also fined Alphabet Inc.’s Google 3 million roubles on Monday for not deleting content that it deemed illegal, part of a wider dispute between Russia and the US tech giant.

Russia in October threatened to fine Google a percentage of its annual Russian turnover for repeatedly failing to delete banned content on its search engine and YouTube, in Moscow’s strongest move yet to rein in foreign tech firms.

Google, which last month said it had paid more than 32 million roubles in fines, did not immediately respond to a request for comment.

TikTok takes steps to make platform safer for teens

Short-form video app TikTok has released the findings of a report specially commissioned to help better understand young people’s engagement with potentially harmful challenges and hoaxes — pranks or scams created to frighten someone — in a bid to strengthen safety on the platform.

In a statement, the company said that its social networking service had been designed to “advance joy, connection, and inspiration,” but added that fostering an environment where creative expression thrived required that it also prioritize safety for the online community, especially its younger members.

With this in mind, TikTok hired independent safeguarding agency Praesidio Safeguarding to carry out a global survey of more than 10,000 people.

The firm also convened a panel of 12 youth safety experts from around the world to review and provide input into the report, and partnered with Dr. Richard Graham, a clinical child psychiatrist specializing in healthy adolescent development, and Dr. Gretchen Brion-Meisels, a behavioral scientist focused on risk prevention in adolescence, to advise it and contribute to the study.

The report found a high level of exposure to online challenges, with teenagers likely to come across all kinds of them in their day-to-day lives.

Social media was seen to play the biggest role in generating awareness of these challenges, but the influence of traditional media was also significant.

When teens were asked to describe a recent online challenge, 48 percent of the challenges described were considered safe, 32 percent carried some risk but were still regarded as safe, 14 percent were viewed as risky and dangerous, and 3 percent were described as very dangerous. Only 0.3 percent of the teenagers surveyed said they had taken part in a challenge they considered really dangerous.

Meanwhile, 46 percent of teens said they wanted “good information on risks more widely” along with “information on what is too far.” Receiving good information on risks was also ranked as a top preventative strategy by parents (43 percent) and teachers (42 percent).

Earlier this year, AFP reported that a Pakistani teenager died while pretending to kill himself as his friends recorded a TikTok video. In January, another Pakistani teenager was killed after being hit by a train, and last year a security guard died playing with his rifle while filming a clip.

Such videos were categorized in the report as “suicide and self-harm hoaxes” where the intention had been to show something fake and trick people into believing that it was true.

Not only could challenges go horribly wrong, as evidenced by the Pakistan cases, but they could also spread fear and panic among viewers. Internet hoaxes were shown to have had a negative impact on 31 percent of teens, and of those, 63 percent said it was their mental health that had been affected.

Based on the findings of the report, TikTok said it was strengthening protection efforts on the platform, starting with how it handles warning videos. The research indicated that warnings about self-harm hoaxes could themselves harm the well-being of young people, because viewers often treated the hoaxes as real. As a result, the company planned to remove alarmist warning videos while continuing to allow conversation that dispelled panic and promoted accurate information.

Despite already having safety policies in place, the firm was working to expand its enforcement measures. The platform has created technology that alerts safety teams to sudden increases in violating content linked to hashtags, and it has now expanded that system to also capture potentially dangerous behavior.

TikTok also intends to build on its Safety Center by adding new resources, including some dedicated to online challenges and hoaxes, and by improving its warning labels so that users searching for content related to harmful challenges or hoaxes are redirected to the right resources.

The company said the report was the first step in making “a thoughtful contribution to the safety and safeguarding of families online,” adding that it would “continue to explore and implement additional measures on behalf of the community.”
