
Let's combat online abuse together

We’re able to detect and prevent harmful content, including cyberbullying, hate speech, sexual harassment and suicidal behaviour, and can extract topics, entities, sentiment and more, in multiple languages.

We have a range of cost-effective pricing options to suit your needs.

Already using Hubly? Sign in.


We're on a mission to combat online abuse

Hubly provides Hub community owners with a range of powerful AI tools and reporting to help detect and isolate abusive content should it arise within their community. We call this Hubshield.


Detect abuse instantly

We tag all instances of abuse, hate speech, personal attacks and sexual harassment.


Natural language

Our NLP AI is constantly evolving and supports multiple languages.


Super
Affordable

We offer multiple subscriptions to fit your requirements, from free to enterprise.


Easy
to use

You can configure Hubshield to moderate content the way you want


Abusive
copy

Stop harmful natural language copy before it is published


Harmful
imagery

We scan for and detect potentially unsuitable images


Offensive
video

Video uploads from all media sources are scanned for offensive imagery


Emojis and
Gifs

We detect potentially harmful and abusive emojis and GIFs used in hate speech and bullying

Hubshield

Hubshield proactively detects, learns from, isolates and quarantines inappropriate, unwanted or offensive social content, including harmful, racist, trolling and abusive material, before it is published.


Stop online abuse

Detect and isolate abusive content before it reaches your community


Developer friendly

Deploy Hubshield on existing communities & communications platforms


Compliance
ready

Hubshield helps ensure your communities comply with applicable government regulations


Ready to
go

As soon as you create a Hub, our AI abuse-detection software is working 24/7.

How Hubshield works...

Hubshield flags and isolates suspect content as it appears; Hub owners, administrators and moderators can then review, suspend and ban members where necessary. The machine-learning algorithms constantly adapt and learn, and the data is used to improve the system and produce useful insights, analytics and reporting.
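The flag-then-review workflow described above can be sketched as follows. This is an illustrative model only, not Hubshield's actual implementation; all class and field names are hypothetical.

```python
# Hypothetical sketch of the moderation workflow: the AI flags content,
# then a moderator reviews it and takes action. Names are illustrative.
from dataclasses import dataclass


@dataclass
class FlaggedPost:
    author: str
    text: str
    label: str           # category assigned by the AI, e.g. "personal_attack"
    status: str = "pending"


class ReviewQueue:
    def __init__(self):
        self.posts = []
        self.banned = set()

    def flag(self, post):
        # Quarantine the post for human review.
        self.posts.append(post)

    def review(self, post, action):
        # Moderator decision: "approve", "suspend" or "ban".
        post.status = action
        if action == "ban":
            self.banned.add(post.author)


queue = ReviewQueue()
post = FlaggedPost("troll42", "abusive message", "personal_attack")
queue.flag(post)
queue.review(post, "ban")
```

Keeping flagged items in a quarantine queue, rather than deleting them outright, is what lets moderators audit the AI's decisions and feed corrections back into the learning loop.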


Strategy tip: you can use Hubshield as a content filter to review online content before it is published to other social media platforms. Your Hub becomes the first point of entry for your organisation, business and team for all corporate narrative.

CONTENT FILTERING
QUALITY CONTROL OF CORPORATE NARRATIVE
REPUTATION MANAGEMENT

Detecting problematic content

The main purpose of the AI is to detect problematic content. Under the bonnet, however, our system is designed to function as a complete NLU (Natural Language Understanding) system. Currently we're able to detect problematic content such as:
Personal attacks and cyberbullying
Hate speech
Profanity
Sexual advances
Criminal activity (selling or procuring restricted items such as drugs, firearms, etc.)
Suicidal thoughts
Beyond detection, the NLU system can also:
Extract topics (formats: IAB, IPTC, Wikidata, native)
Extract named entities
Detect aspect-based sentiment for consumer goods and services
Split sentences
Tokenize Chinese, Japanese and Thai text
Decompound German, Dutch and Norwegian words
Provide parse trees
Extract noun phrases, verb phrases and prepositional phrases
Locate a URL of an image best representing a fragment of text
Compare components of a name and validate that the name is real
All functions are available in all the supported languages.
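To make the detection categories concrete, here is a sketch of how a text-analysis result covering them might be consumed. The response shape, field names and scores are invented for illustration; they are not Hubshield's real API.

```python
# Illustrative (hypothetical) analysis response: one score per detection
# category listed above, plus some of the NLU extras (entities, topics).
response = {
    "text": "example user post",
    "language": "en",
    "categories": {
        "cyberbullying": 0.91,
        "hate_speech": 0.12,
        "profanity": 0.05,
        "sexual_advances": 0.02,
        "criminal_activity": 0.01,
        "suicidal_thoughts": 0.00,
    },
    "entities": ["example"],   # named entities
    "topics": ["social"],      # e.g. IAB topic codes
}


def flagged_categories(resp, threshold=0.5):
    """Return, sorted, the categories whose score meets the threshold."""
    return sorted(k for k, v in resp["categories"].items() if v >= threshold)


print(flagged_categories(response))  # ['cyberbullying']
```

A per-category score rather than a single pass/fail verdict is what lets each community tune its own tolerance: a gaming Hub might allow mild profanity while still blocking cyberbullying outright.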


The Language Model API allows browsing the language data, down to the level of word senses, in all the languages supported by the system.

 


Image and Video moderation

We can proactively detect inappropriate, unwanted, or offensive content containing nudity, suggestiveness, violence, and other such categories.

We can detect explicit adult or suggestive content, violence, weapons, drugs, tobacco, alcohol, hate symbols, gambling, disturbing content and rude gestures in both images and videos, and return a confidence score for each detected label.

For videos, we can also return the timestamps for each detection. Moderation labels are organised in a hierarchical taxonomy that provides both top-level categories, such as ‘Suggestive’, and nuanced second-level categories that identify the specific type of content, such as female swimwear or partial nudity.


Using this information, we can create granular business rules for different geographies, target audiences, time of day, and so on.
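Such per-region business rules over hierarchical moderation labels might look like the following sketch. The label names, regions and thresholds are invented for illustration and are not Hubshield's actual taxonomy or configuration format.

```python
# Hypothetical rule table: for each region, the minimum confidence at which
# a top-level moderation category should be blocked.
RULES = {
    "default": {"Suggestive": 0.90, "Violence": 0.60},
    "strict":  {"Suggestive": 0.50, "Violence": 0.40},
}


def should_block(labels, region="default"):
    """labels: list of (top_category, sub_category, confidence) tuples.

    Block if any label in a rule-covered top-level category meets that
    region's confidence threshold.
    """
    rules = RULES.get(region, RULES["default"])
    return any(
        conf >= rules[top]
        for top, sub, conf in labels
        if top in rules
    )


labels = [("Suggestive", "Female Swimwear", 0.72)]
should_block(labels, "default")  # False: below the 0.90 threshold
should_block(labels, "strict")   # True: meets the stricter 0.50 threshold
```

Because thresholds apply at the top level of the taxonomy while labels carry second-level detail, the same detection output can drive both a lenient default policy and a stricter one for sensitive audiences.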
