Meeting of Minds at Apple’s Fraud Engineering Team
A few years ago, engineers Karine Mellata and Michael Lin met while working on Apple’s fraud engineering and algorithmic risk team. Their work focused on combating forms of online abuse including spam, bot automation, compromised accounts, and developer fraud, all in service of safeguarding Apple’s expanding user base.
The Challenge of Evolving Online Abuse Patterns
Despite persistent efforts to build new models that kept pace with the changing landscape of online abuse, Mellata and Lin found themselves constantly revisiting and rebuilding the foundations of their trust and safety infrastructure, a Sisyphean task that kept them from staying ahead of the perpetrators.
A Vision for a Dynamic, Modernized Internet Safety System
With growing regulatory pressure on companies to consolidate and streamline disparate trust and safety operations, Mellata saw an opening for change. She imagined a dynamic system capable of adapting as quickly as the abuse it was designed to counter, an ambition she described in a conversation with TechCrunch.
Founding Intrinsic to Empower Safety Teams
To turn this vision into reality, Mellata and Lin founded Intrinsic. Their startup provides tools that let safety teams prevent abusive activity on their platforms. Intrinsic has secured $3.1 million in seed funding from investors including the Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.
Intrinsic’s Multi-faceted Content Moderation Platform
Intrinsic’s platform is designed to moderate content created by both users and AI. It provides infrastructure that lets clients, notably social media and e-commerce companies, detect and act on policy-violating content, and it automates moderation tasks such as user bans and content reviews.
The Customizable Nature of Intrinsic’s AI Tool
Mellata highlights Intrinsic’s adaptability, noting the platform can be tuned to specific issues, such as preventing inadvertent legal advice in marketing content or flagging items prohibited in particular regions on marketplaces. She argues that this customization goes beyond what generalized classifiers offer, and that even well-resourced teams would need significant development time to build comparable solutions in-house.
Distinct Advantages Over Competing Platforms
When asked about competitors such as Spectrum Labs, Azure, and Cinder, Mellata points out Intrinsic’s unique features, like its explainability in content moderation decisions and extensive manual review tools. These allow customers to interrogate the system about errors and refine their moderation models using their own data.
The Demand for Dynamic Trust and Safety Solutions
According to Mellata, traditional trust and safety systems lack the flexibility to evolve alongside the nature of online abuse. As a result, teams with limited resources are increasingly seeking external solutions that reduce costs while maintaining rigorous safety standards.
Evaluation of Intrinsic’s Moderation Accuracy
Without independent third-party audits, it’s challenging to ascertain the accuracy and unbiased nature of any vendor’s content moderation models. However, Intrinsic is reportedly making headway, securing major contracts with “large, established” customers.
Expanding Operations and Technological Reach
Looking forward, Intrinsic plans to grow its team and broaden its technology to include oversight of not just text and images, but also video and audio content.
Intrinsic Navigates Economic Challenges with Promising Automation
As the tech industry experiences a broader slowdown, interest in automating trust and safety work is growing. Mellata believes this trend puts Intrinsic in a prime position: by providing cost-effective, efficient, and thorough abuse detection, Intrinsic appeals to executives looking to trim budgets and mitigate risk.