
Social Media Roundup: Facebook Community Standards Metrics, New WhatsApp Features, Tinder Places

Author: Amit Chowdhry / Source: Forbes

A mobile phone screen displaying the icons for various social networking apps (Photo credit: OLI SCARFF/AFP/Getty Images)

“Social Media Roundup” is a weekly roundup of news pertaining to all of your favorite websites and applications used for social networking.

Published on Sundays, “Social Media Roundup” will help you stay up-to-date on all the important social media news you need to know.

Facebook

Suspension Of 200 Apps

This past week, Facebook said it reviewed thousands of apps to determine whether user data had been misused. Following the investigation, Facebook suspended approximately 200 apps. Facebook took this action as part of a commitment that CEO Mark Zuckerberg made in late March to conduct an app review process following the Cambridge Analytica scandal. And Facebook told TechCrunch that it would provide more details about apps it decides to ban after concluding each review on a case-by-case basis.

“The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests for information (RFI) — which ask a series of detailed questions about the app and the data it has access to — and perform audits that may include on-site inspections,” said Facebook VP of Product Partnerships Ime Archibong in a blog post. “We have large teams of internal and external experts working hard to investigate these apps as quickly as possible.

“To date thousands of apps have been investigated and around 200 have been suspended — pending a thorough investigation into whether they did in fact misuse any data.”

Community Standards Enforcement Preliminary Report

Forbes contributor Larry Magid recently reported on Facebook’s release of its Community Standards Enforcement Preliminary Report. The report was separated into the following categories: Graphic Violence, Adult Nudity and Sexual Activity, Terrorist Propaganda (ISIS, al-Qaeda and affiliates), Hate Speech, Spam and Fake Accounts.

In a blog post, Facebook’s Vice President of Analytics Alex Schultz said that the report is intended to give a detailed description of internal processes and data methodology. “It’s an attempt to open up about how Facebook is doing at removing bad content from our site, so you can be the judge. And it’s designed to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time,” wrote Schultz. “We can’t change the fact that people will always try to post bad things on Facebook – whether it is hate speech, terrorist propaganda or images that exploit children. But we can try to control how many times content that violates our Community Standards is seen.”

Here is a breakdown of the numbers:

Graphic Violence – Facebook said it takes action on content that glorifies violence or celebrates the humiliation or suffering of others. Facebook estimated that 0.22% to 0.27% of views were of content that violated its graphic violence standards in Q1 2018, up from a range of 0.16% to 0.19% in Q4 2017. The increase was likely due to a higher volume of graphic violence content shared on Facebook. More specifically, Facebook took action on a total of 3.4 million pieces of content in Q1 2018, up from 1.2 million pieces of content in Q4 2017.

“This increase is mostly due to improvements in our detection technology, including using photo-matching to cover with warnings photos that matched ones we previously marked as disturbing. These actions were responsible for around 70% of the increase in Q1,” said Facebook in its report. “We also fixed a prior technical issue that caused us to not always cover photos with warnings when we should have. Fixing this issue created a spike in actions as we worked through the backlog of disturbing graphic violence content. These actions were responsible for around 13% of the increase. The remaining increase was due to an increase in content containing graphic violence on Facebook.”

Adult Nudity And Sexual Activity – Facebook takes action on photos and videos of nude adults, both real and digitally created. Facebook estimated that 0.07% to 0.09% of views were of content that violated its adult nudity and sexual activity standards in Q1 2018, slightly higher than the 0.06% to 0.08% of views in Q4 2017. And in Q1 2018, Facebook took action on 21 million pieces of adult nudity and sexual activity content, roughly the same amount as in Q4 2017.

Terrorist Propaganda – Facebook said it does not tolerate any content that endorses or represents terrorist organizations, and the company takes action on content that shows support for those organizations and their acts. Facebook pointed out that the number of views of terrorist propaganda content related to ISIS, al-Qaeda and their affiliates is low because the majority of it is removed before people see it. Facebook took action on 1.9 million pieces of content in Q1 2018, up from 1.1 million in Q4 2017. The increase was due to improvements in Facebook’s ability to find violating content using photo-detection technology as uploads are attempted.

Hate Speech – Hate speech is considered a direct attack on people based on characteristics such as race, ethnicity, origin, religious affiliation, gender, sexual orientation, gender identity, or serious disability or disease. Facebook said it took action on around 2.5 million pieces of hate speech content in Q1 2018, up from 1.6 million in Q4 2017. The increase was due to improvements in detection technology.

Spam – Facebook defines spam as inauthentic activity that is automated (often involving bots or scripts) or coordinated (multiple accounts to spread and promote deceptive content). This includes false advertising, fraud, malicious content and the promotion of counterfeit goods. In Q1 2018, Facebook took action on 837 million pieces of content, which is up from 727 million in Q4 2017.

Fake Accounts – The number of fake accounts on Facebook represented about 3%-4% of the monthly active users (MAUs) between Q4 2017 and Q1 2018. In Q1 2018, Facebook disabled 583 million accounts, which is down from 694 million in Q4 2017.

Youth Portal

Facebook has recently launched a Youth Portal website, which serves as a guide for teenagers to better understand how the platform works. The Youth Portal website details security, safety and privacy measures.

A section of the Youth Portal that especially stands out is the Guiding Principles. In this section, users are encouraged to think for five seconds before posting content publicly, and to consider whether they would be comfortable reading their content out loud to their parents and grandparents. It also suggests that users stay aware of who they have allowed into their space by checking their friends list regularly, and that they accept friend requests only from people they know.

Facebook also encourages users to say something if they see something, such as mean comments on a friend’s posts. And Facebook suggested that if using the social network is making you…

