
Let's combat online abuse together

We detect and prevent cyberbullying, hate speech, sexual harassment and suicidal behaviour, and extract topics, entities, sentiment and more, in nearly 30 languages.

Hateful

Content analysed as hateful is isolated before it is published.

Racism

Text depicting a belief that race, skin colour, language, nationality, or national or ethnic origin justifies contempt for a person or group of people.

Hatred

Text demonstrating an extreme intolerance towards a person or group of people.

Trolling

Text deliberately posted to provoke, upset or antagonise other people, or to derail a discussion.

Spam

Text that is spam: unwanted, unsolicited digital communication, often sent out in bulk.

Insult

Text that offends or hurts someone's dignity.

Body Shaming

Text that subjects someone to criticism, ridicule or mockery for supposed bodily faults or imperfections.

Sexual Harassment

Text depicting harassment of a sexual nature.

Misogyny

Text expressing hatred of, or prejudice against, women or girls. It relates to sexism, promoting an inferior status for women and rewarding those who accept it.

Link

Text containing external URLs or links that lead to offensive or scam websites.

Threat

Text indicating an intention to harm someone or to make them do something against their will.

Homophobia

Text describing prejudice-based intolerance towards homosexuality and gay, lesbian or bisexual people.

Moral Harassment

Text meant to annoy, intimidate or humiliate someone, with the purpose of degrading their life by targeting their physical or mental health.

Scam

Text indicating a scam, or a fraudulent or deceptive act or activity.

Noise

Content analysed is "unnecessary noise" (spam, ads, scams, etc.).

We're on a mission to combat online abuse

Hubly provides Hub community owners with a range of powerful AI tools and reporting to help detect and isolate abusive content should it arise within their community. We call this Hubshield.


Hubshield

Hubshield proactively detects, isolates and quarantines inappropriate, unwanted or offensive content, including harmful, racist, trolling and abusive social content, before it is published.


Spot abuse instantly

We tag instances of hate speech, personal attacks, sexual harassment and more.


Developer friendly

Send a text request; receive annotations in a JSON response.
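As a sketch of what that round trip might look like: the endpoint, field names and scores below are illustrative assumptions, not the documented Hubshield schema.

```python
import json

# Hypothetical request payload. The real endpoint and field names are
# not documented here, so treat these as illustrative only.
request_payload = {
    "text": "You are worthless and nobody wants you here.",
    "language": "en",
}

# An illustrative JSON response: each annotation carries a label and a
# confidence score (again, field names are assumptions, not the real schema).
response_body = """
{
  "annotations": [
    {"label": "insult", "confidence": 0.94},
    {"label": "cyberbullying", "confidence": 0.88}
  ]
}
"""

response = json.loads(response_body)

# Keep only high-confidence labels for moderation decisions.
flagged = [a["label"] for a in response["annotations"] if a["confidence"] >= 0.8]
print(flagged)  # ['insult', 'cyberbullying']
```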


Natural language

With nearly 30 languages, our AI supports more languages than any other vendor.


Enterprise & law enforcement ready

On-premise & embedded deployment. Compliance. Packet signing. 


Very Affordable

We're seriously affordable, and we have a generous free plan.


Image Detection

We scan images and detect potentially unsuitable content.


Easy to Use

Configurable to return only what you need. It couldn't be easier.


Ready to Go

As soon as you create a Hub, our AI abuse detection software is working 24/7.

How Hubshield works

Hub owners, administrators and moderators can then review, suspend and ban members where necessary. The machine-learning algorithms constantly adapt and learn, and the data is used to improve the system and produce useful insights, analytics and reporting.


Strategy tip: you can use Hubshield as a content filter to review online content before it is published to other social media. Your Hub becomes the first point of entry for all corporate narrative across your organisation, business and team.

CONTENT FILTERING
QUALITY CONTROL OF CORPORATE NARRATIVE
REPUTATION MANAGEMENT

What functionality does Hubly's AI provide?

Detecting problematic content

The main purpose of the AI is to detect problematic content. Under the bonnet, however, our system is designed to function as a complete NLU (Natural Language Understanding) system. Currently we're able to detect problematic content such as:
Personal attacks and cyberbullying
Hate speech
Profanity
Sexual advances
Criminal activity (selling or procuring restricted items like drugs, firearms, etc.)
Suicidal thoughts
Beyond detection, the system can also:
Extract topics (formats: IAB, IPTC, Wikidata, native)
Extract named entities
Detect aspect-based sentiment for consumer goods and services
Split sentences
Tokenize Chinese, Japanese and Thai text
Decompound German, Dutch and Norwegian words
Provide parse trees
Extract noun phrases, verb phrases and prepositional phrases
Locate the URL of an image best representing a fragment of text
Compare the components of a name and validate that the name is real
All functions are available in all the supported languages.


The Language Model API allows browsing the language data, down to the level of word senses, in all the languages supported by the system.


Image and Video moderation

We can proactively detect inappropriate, unwanted, or offensive content containing nudity, suggestiveness, violence, and other such categories.

We can detect explicit adult or suggestive content, violence, weapons, drugs, tobacco, alcohol, hate symbols, gambling, disturbing content, and rude gestures in both images and videos, and get back a confidence score for each detected label.

For videos, we can also return the timestamps for each detection. Moderation labels are organised in a hierarchical taxonomy that provides both top-level categories, such as ‘Suggestive’, and nuanced second-level categories that identify the specific type of content, such as female swimwear or partial nudity.
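To make the shape of such a result concrete, here is a minimal sketch of filtering video moderation detections by confidence and grouping them under their top-level taxonomy category. The field names, labels and scores are illustrative assumptions, not the actual response format.

```python
# Illustrative detections only: field names, labels and scores below are
# assumptions showing the shape of a hierarchical moderation result.
detections = [
    {"timestamp_ms": 1200, "top_category": "Suggestive",
     "label": "Female Swimwear", "confidence": 0.91},
    {"timestamp_ms": 5400, "top_category": "Violence",
     "label": "Weapons", "confidence": 0.57},
    {"timestamp_ms": 9800, "top_category": "Suggestive",
     "label": "Partial Nudity", "confidence": 0.83},
]

CONFIDENCE_THRESHOLD = 0.80

# Keep confident detections and group their timestamps by top-level category.
by_category = {}
for d in detections:
    if d["confidence"] >= CONFIDENCE_THRESHOLD:
        by_category.setdefault(d["top_category"], []).append(d["timestamp_ms"])

print(by_category)  # {'Suggestive': [1200, 9800]}
```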


Using this information, you can create granular business rules for different geographies, target audiences, time of day, and so on.
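One way such rules could be expressed: a small lookup table keyed by geography and moderation label. The rule names, geographies and time window here are hypothetical, purely to illustrate the idea of per-region, per-audience, time-aware policies.

```python
from datetime import time

# Hypothetical policy table: a label can be blocked outright, or allowed
# only for adult audiences outside a daytime window. All names are assumptions.
RULES = {
    "DE": {"Gambling": "block", "Alcohol": "adults_only"},
    "US": {"Gambling": "adults_only"},
}

def decide(label, geography, audience_is_adult, local_time):
    """Return 'allow' or 'block' for a moderation label in a given context."""
    rule = RULES.get(geography, {}).get(label, "allow")
    if rule == "block":
        return "block"
    if rule == "adults_only":
        # Hide from minors, and from everyone during daytime hours.
        if not audience_is_adult or time(6, 0) <= local_time <= time(20, 0):
            return "block"
    return "allow"

print(decide("Gambling", "DE", True, time(22, 0)))  # block
print(decide("Gambling", "US", True, time(22, 0)))  # allow
print(decide("Alcohol", "DE", False, time(22, 0)))  # block
```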
