Gcore Unveils Advanced AI Solution for Online Content Moderation
Gcore AI Content Moderation automates the review of video content streams, flagging inappropriate content and alerting human moderators where necessary.
Gcore, the global edge AI, cloud, network, and security solutions provider, has launched Gcore AI Content Moderation, a real-time AI-powered content moderation solution.
The solution enables online service providers to automate the moderation of audio, text, and user-generated video content without needing prior artificial intelligence (AI) or machine learning (ML) experience.
“Human-only content moderation has become an unmanageable task — it is simply impossible to assign enough human resources and keep costs in check,” commented Alexey Petrovskikh, Head of Video Streaming at Gcore. “Gcore AI Content Moderation gives organisations the power to moderate content at a global scale, while keeping humans in the loop to review flagged content. This new service plays an essential role in enabling companies to protect their users, communities, and their own reputation — all while ensuring regulatory compliance.”
Additionally, organisations can improve user safety and ensure compliance with regulations such as the EU’s Digital Services Act (DSA) and the UK’s Online Safety Bill (OSB).
The solution integrates cutting-edge technologies to begin reviewing videos or live streams within seconds of their publication:
- State-of-the-art computer vision: Advanced computer vision models are used for object detection, segmentation, and classification to identify and flag inappropriate visual content accurately.
- Optical character recognition (OCR): Text visible in videos is converted into machine-readable format, enabling the moderation of inappropriate or sensitive textual information displayed within video content.
- Speech recognition: Sophisticated algorithms analyse audio tracks to detect and flag foul language or hateful speech, thereby moderating audio as effectively as visual content.
- Multiple-model output aggregation: Complex decisions, such as identifying content involving child exploitation, require the integration of multiple data points and outputs from different models. Gcore AI Content Moderation aggregates these outputs to make precise and reliable moderation decisions.
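To illustrate the aggregation idea described above, the following is a minimal sketch of how per-model confidence scores might be combined into a single moderation decision. All names, scores, and thresholds here are hypothetical for illustration; this is not Gcore's actual API or decision logic.

```python
from dataclasses import dataclass

# Hypothetical per-model outputs: each analyser returns a confidence score
# in [0, 1] that its modality contains policy-violating content.
@dataclass
class ModerationScores:
    vision: float    # computer-vision object/scene classifier
    ocr_text: float  # classifier over OCR-extracted on-screen text
    speech: float    # classifier over the transcribed audio track

def aggregate(scores: ModerationScores,
              block_threshold: float = 0.9,
              review_threshold: float = 0.6) -> str:
    """Combine the individual model outputs into one moderation decision."""
    # A single highly confident model is enough to block outright.
    top = max(scores.vision, scores.ocr_text, scores.speech)
    if top >= block_threshold:
        return "block"
    # A moderately confident signal escalates to a human moderator,
    # keeping people in the loop for borderline cases.
    if top >= review_threshold:
        return "flag_for_review"
    return "allow"

print(aggregate(ModerationScores(vision=0.95, ocr_text=0.10, speech=0.20)))  # block
print(aggregate(ModerationScores(vision=0.30, ocr_text=0.65, speech=0.40)))  # flag_for_review
print(aggregate(ModerationScores(vision=0.05, ocr_text=0.10, speech=0.10)))  # allow
```

The design choice of a two-threshold policy reflects the article's point: automation handles the clear-cut cases at scale, while humans review only the content the models flag as uncertain.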