KNOW Identity Forum San Francisco

Note: This event was conducted under the Chatham House Rule, so the conversation is reported without attribution to individual speakers.

On Wednesday, October 16, the One World Identity (OWI) team returned to the Bay Area to host a special KNOW Identity Event with Genpact, a digital transformation company, to discuss how to build trust and safety at scale, with a focus on content moderation.

If you’re not familiar with trust and safety, it is a set of business values that drives engagement and participation in a digital ecosystem by reducing the risk of harm, fraud, or other criminal or illicit behavior. If harm is inflicted or fraud does occur, proper recourse mechanisms are in place to re-establish trust and safety.

For content-driven platforms, like Twitter or YouTube, trust and safety operators focus on maintaining the integrity of their platform by monitoring and reporting breaches of their terms and conditions, protecting their users from hate speech, and preventing abuse across their channels.

For most companies, trust and safety is a difficult challenge. It requires striking a balance between free speech, platform integrity, and user experience. The KNOW Identity Special Event dove straight into these tough and controversial topics with a panel of industry experts: David Wellner, Head of Strategic Partnerships at Evident ID; Cameron D’Ambrosi, Principal at One World Identity; Jeff Piper, Product Manager at Google; and moderator Mark Childs, SVP Technology Lead at Genpact. Here is what we learned.

“Building Trust & Safety into your products is a competitive advantage.”

The panel kicked off with a discussion of new data regulations such as GDPR and CCPA, and their potential impact on the collection and use of personal data to inform trust and safety solutions. How should companies approach building trust and safety at scale when each country has its own specific set of data privacy requirements?

The panel largely agreed that although regulatory compliance is important, companies with successful trust and safety practices integrate leading data privacy techniques into a seamless user experience. The panel’s suggestion was to focus on big wins that fulfill the spirit of the new mandates while ultimately satisfying users’ expectations.

Additionally, the panel reminded the audience of trust and safety’s long-term value. Companies that conduct themselves to the highest global standards of data privacy and security, and that make platform integrity a priority, will have fewer hurdles to overcome when entering new markets and geographic territories.

Non-Authoritative Data is a Slippery Slope

The conversation transitioned to the question of non-authoritative data sources and how they should be ethically leveraged to inform the decision-making process. Authoritative data sources are typically records from trusted sources such as government agencies and financial institutions. Non-authoritative data sources are typically social media activity or unverified data held by other third-party providers.

The fear is that if non-authoritative data sources are permitted into trust and safety operations, a user’s action on one platform could be used to affect their standing on a separate platform. Should Seamless be allowed to prevent a user from signing up, or remove them from its platform, based on an incident report it received from Uber or TaskRabbit?

The panel drew a clear line in the sand. Non-authoritative data sources are useful as supporting signals; however, companies should refrain from taking action based solely on these types of data. The industry ought to establish and maintain strict guidelines for what constitutes authoritative data, or it will inadvertently create a slippery slope of what data types and sources can be used in the future.
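To make the panel’s guidance concrete, here is a minimal Python sketch of one way such a rule could look. The signal names and actions are hypothetical rather than anything a panelist prescribed; the point is simply that non-authoritative signals can flag an account for review but never trigger action on their own.

```python
from typing import List
from dataclasses import dataclass

# Hypothetical signal categories; real taxonomies would be far richer.
AUTHORITATIVE = {"government_id_check", "bank_verification"}
NON_AUTHORITATIVE = {"social_media_flag", "third_party_incident_report"}

@dataclass
class Signal:
    source: str   # e.g. "government_id_check" or "social_media_flag"
    risky: bool   # True if the signal suggests risk

def decide(signals: List[Signal]) -> str:
    """Act only when an authoritative signal indicates risk; non-authoritative
    signals alone can do no more than queue the account for human review."""
    authoritative_risk = any(s.risky and s.source in AUTHORITATIVE for s in signals)
    non_authoritative_risk = any(s.risky and s.source in NON_AUTHORITATIVE for s in signals)

    if authoritative_risk:
        return "take_action"       # e.g. block sign-up or suspend the account
    if non_authoritative_risk:
        return "queue_for_review"  # supporting signal only, never the sole basis
    return "allow"

# A third-party incident report by itself only triggers review, not removal.
print(decide([Signal("third_party_incident_report", True)]))  # -> queue_for_review
```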

The Future of Federated Trust Networks

Another popular topic in the industry today is Federated Trust Networks (FTNs). An FTN is a collaborative network of service providers who agree to share data among themselves to proactively fight fraud and create a more secure ecosystem. The panel explored the pros and cons of platforms collaborating on their trust and safety operations.
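To make the concept concrete, here is a minimal Python sketch of one possible design, assuming members share hashed fingerprints of identifiers tied to confirmed fraud rather than raw personal data. The class and member names are hypothetical, and a production network would need salted or keyed hashing plus the governance rules discussed below.

```python
import hashlib

def fingerprint(identifier: str) -> str:
    """Hash a raw identifier (e.g. an email address) so members exchange
    fingerprints rather than raw personal data. Illustrative only: a real
    network would use salted or keyed hashing."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

class FederatedTrustNetwork:
    """Toy registry of fraud fingerprints contributed by member platforms."""

    def __init__(self) -> None:
        self._reports = {}  # fingerprint -> set of members that reported it

    def report(self, member: str, identifier: str) -> None:
        self._reports.setdefault(fingerprint(identifier), set()).add(member)

    def check(self, identifier: str) -> set:
        """Return which members, if any, have reported this identifier."""
        return self._reports.get(fingerprint(identifier), set())

# One member's report becomes a signal the whole network can check against.
ftn = FederatedTrustNetwork()
ftn.report("marketplace_a", "fraudster@example.com")
print(ftn.check("fraudster@example.com"))  # -> {'marketplace_a'}
```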

On the pro side, working together would spread the love. Large enterprises have the resources and infrastructure to deploy solutions that are out of reach for small- and medium-sized businesses. Creating an infrastructure to share insights and flag potentially fraudulent activity across a network would lift all boats.

On the other side, trust and safety is still considered a differentiator. Today, only a select cohort of companies have made the active decision to invest in building a dedicated trust and safety operation. It would be difficult to convince these companies that opening their doors to competitors who opted not to make these initiatives a priority would be a good thing.

Furthermore, companies remain hesitant as questions regarding liability remain unanswered. If the industry begins to make decisions informed by a collaborative data set, who is liable in the event an incorrect decision is made? Should the entire network share liability? Is it the responsibility of the original data provider? Until these glaring questions are answered, the panel agreed it would be challenging to create a federated trust network.

Handling Volume is the Everest of Content Moderation 

The most pressing challenge facing trust and safety operators focused on content moderation is volume. Twitter receives roughly 6,000 tweets a second, 500 million tweets a day, and 200 billion tweets per year. On Facebook, there are 24.1 billion posts on average per month, and over 250 billion photos have been uploaded.

Trust and safety professionals are tasked with moderating these platforms to prevent violations of their company’s terms and conditions, identify illegal activity, and make sure their users feel safe engaging with their brand.

To handle the challenge of scale, trust and safety professionals lean on artificial intelligence (AI) and machine learning (ML) technologies for assistance. These solutions can help identify malicious behavior patterns, prevent fraudulent attempts, and even save lives. However, these technologies are relatively new and still lack strong, proven use cases.

The panel discussed how to balance these technologies with human intervention and, overall, emphasized caution.

These solutions are incredibly powerful but have fundamental flaws. AI and ML solutions require extensive data inputs to mature and learn specific patterns before they are effective. These data sets, however, are manually assembled and selected, and thus suffer from the same cognitive biases as their human administrators. Relying on machines to make decisions without mechanisms or controls to keep their decision-making process from being skewed is problematic.

Correction and reasoning, however, are exactly the areas where human intervention excels. Unfortunately, manual operators do not have the capacity to monitor all global activity. Additionally, exposure to disturbing content has a wide range of negative cognitive repercussions for human operators.

The panel agreed there will always be a coexisting balance between technology and human review. The industry should continue to build innovative technologies that help reduce the number of manual review cases and help achieve trust and safety at scale.
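To illustrate the kind of human/machine balance the panel described, here is a minimal Python sketch of a triage rule. The keyword “classifier” is a toy stand-in for a real ML model, and the thresholds are illustrative values that would, in practice, be tuned against review capacity and error rates.

```python
# Toy policy keywords standing in for a trained model's learned signals.
BLOCKLIST = {"scam", "threat"}

def score_content(text: str) -> float:
    """Stand-in for an ML model: fraction of words that hit the blocklist."""
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def triage(text: str, remove_at: float = 0.5, review_at: float = 0.1) -> str:
    """Route clear-cut cases automatically; send borderline cases to humans."""
    score = score_content(text)
    if score >= remove_at:
        return "auto_remove"    # machines absorb the high-confidence volume
    if score >= review_at:
        return "human_review"   # humans handle correction and judgment calls
    return "allow"

print(triage("limited time scam threat"))                  # -> auto_remove
print(triage("is this offer a scam or a legit question"))  # -> human_review
```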

What the Future of Trust and Safety Looks Like

A standing tradition at our forums is to end the evening with a look ahead. The panel foresaw several interesting trends.

On the technology front, the panel addressed blockchain and other distributed ledger technologies. On one hand, the panel was excited by blockchain’s ability to facilitate the shift away from centralized honeypots of data toward decentralized data pools, which reduces the impact of any single data breach. On the other hand, blockchain technology has yet to be seamlessly integrated into the user experience. Until blockchain can provide value without impeding a seamless user journey, there is little hope for the technology to gain adoption at scale.

The panel also addressed the question of increased cooperation between trust and safety operators and law enforcement. Private industry has the insights, analytics, and data sets to provide visibility into activity across networks that law enforcement does not have access to, and law enforcement officials are tasked with keeping the public safe from serious threats. The panel agreed that tech platforms and law enforcement ought to develop their relationship, but that it must not come at the cost of violating users’ right to personal privacy.

The KNOW Identity Roadshow Continues 

Many thanks to the speakers and attendees who participated in the San Francisco KNOW Identity Forum! And a special thanks to our partner Genpact for their continued work evolving future-focused business functions. At the front lines of trust and safety operations, they continue to push the conversation forward, innovate on techniques, and provide the industry with best-in-class solutions. We’re looking forward to continuing conversations like these at our upcoming Forums in Boston, Atlanta, Seattle, and beyond. The KNOW roadshow culminates in the annual KNOW Identity Conference in April 2020. We hope to see you there!