
Let's combat online abuse together

We’re able to detect and prevent cyberbullying, hate speech, sexual harassment and suicidal behaviour, and to extract topics, entities, sentiment and more, in nearly 30 languages.


Content analysed as hateful is isolated before it is published


Racism

Text depicting a belief that race, skin colour, language, nationality, or national or ethnic origin justifies contempt for a person or group of people.


Hate Speech

Text demonstrating extreme intolerance towards a person or group of people.


Misogyny

Text expressing hatred of, or prejudice against, women or girls. It relates to sexism, promoting an inferior status for women and rewarding those who accept it.


Spam

Text that is spam: unwanted, unsolicited digital communication, often sent out in bulk.


Insult

Text that offends someone or hurts their dignity.

Body Shaming

Text that subjects someone to criticism, ridicule or mockery for supposed bodily faults or imperfections.

Sexual Harassment

Text depicting harassment of a sexual nature.




Malicious Links

Text containing external URLs or links that lead to offensive or scam websites.


Threat

Text indicating an intention to harm someone or to make them do something against their will.


Homophobia

Text describing a prejudice-based intolerance towards homosexuality or gay, lesbian and bisexual people.

Moral Harassment

Text meant to annoy, intimidate or humiliate someone, with the purpose of degrading their life by targeting their physical or mental health.


Scam

Text indicating a scam or a fraudulent or deceptive act or activity.


Content analysed as "unnecessary noise" (spam, ads, scams, etc.) is isolated.

We're on a mission to combat online abuse

Hubly provides Hub community owners with a range of powerful AI tools and reporting to help detect and isolate abusive content should it arise within their community. We call this Hubshield.



Hubshield proactively detects, isolates and quarantines inappropriate, unwanted or offensive social content, including harmful, racist, trolling and abusive material, before it is published.


Spot abuse instantly

We tag instances of hate speech, personal attacks, sexual harassment, and more.


Developer friendly

Send text request, receive annotations in a JSON response.
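As a minimal sketch of that flow, assuming a hypothetical endpoint and illustrative field names (the real Hubshield schema may differ), a request body could be built and the annotations read back like this:

```python
import json

# NOTE: the endpoint and field names below are assumptions for
# illustration only, not the documented Hubshield API.
API_URL = "https://api.example.com/v1/analyse"  # hypothetical

def build_request(text, languages):
    """Serialise a plain-text analysis request as a JSON body."""
    return json.dumps({"text": text, "languages": languages}).encode("utf-8")

def parse_annotations(response_body):
    """Pull the annotation list out of a JSON response body."""
    return json.loads(response_body).get("annotations", [])

# A sample response of the assumed shape:
sample = '{"annotations": [{"label": "hate_speech", "offset": 0, "length": 12}]}'
labels = [a["label"] for a in parse_annotations(sample)]
print(labels)  # ['hate_speech']
```

The request is plain text in, and every detection comes back as a labelled annotation with its position, so it can be mapped straight onto the original message.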


Natural language

Our AI supports nearly 30 languages, more than any other vendor.


Enterprise & law enforcement ready

On-premise & embedded deployment. Compliance. Packet signing. 



We’re seriously cheap. We have a generous free plan.



We scan for and detect potentially unsuitable images


Easy to use

Configurable to return exactly what you need. It couldn't be easier than that.


Ready to go

As soon as you create a Hub, our AI abuse detection software is working 24/7.

How Hubshield works

Hub owners, administrators and moderators can then review, suspend and ban members where necessary. The machine learning algorithms constantly adapt and learn, and the data is used to improve the models and produce useful insights, analytics and reporting.


Strategy Tip: You can use Hubshield as a content filter to review online content before it is published to other social media platforms. Your Hub becomes the first point of entry for all of your organisation's, business's and team's corporate narrative.


What functionality does Hubly's AI provide?

Detecting problematic content

The main purpose of the AI is to detect problematic content. However, under the bonnet our system is designed to function as a complete NLU (Natural Language Understanding) system. Currently we're able to detect problematic content such as:
Personal attacks and cyberbullying
Hate speech
Sexual advances
Criminal activity (selling or procuring restricted items like drugs, firearms, etc.)
Suicidal thoughts
Beyond detection, the system can also:
Extract topics (formats: IAB, IPTC, Wikidata, native)
Extract named entities
Detect aspect-based sentiment for consumer goods and services
Split sentences
Tokenize Chinese, Japanese and Thai text
Decompound German, Dutch and Norwegian words
Provide parse trees
Extract noun phrases, verb phrases and prepositional phrases
Locate a URL of an image best representing a fragment of text
Compare components of a name and validate that the name is real
All functions are available in all supported languages.
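As a sketch of how these NLU functions might be consumed, assuming a JSON response with one key per requested function (the field names and function identifiers below are illustrative, not the documented schema):

```python
import json

# Hypothetical request: ask only for the functions you need.
request = {
    "text": "Berlin ist die Hauptstadt von Deutschland.",
    "functions": ["entities", "topics", "sentences"],  # assumed identifiers
    "topic_format": "IAB",
}

# A response of the assumed shape: one key per requested function.
response_body = json.dumps({
    "entities": [{"text": "Berlin", "type": "LOCATION"},
                 {"text": "Deutschland", "type": "LOCATION"}],
    "topics": [{"id": "IAB20", "label": "Travel"}],
    "sentences": [[0, 42]],
})

result = json.loads(response_body)
# Filter the named entities down to just the locations.
locations = [e["text"] for e in result["entities"] if e["type"] == "LOCATION"]
print(locations)  # ['Berlin', 'Deutschland']
```

Requesting only the functions you need keeps responses small, and each function's results arrive under their own key so they can be processed independently.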

The Language Model API allows browsing the language data, up to the level of word senses, in all the languages supported by the system.



Image and Video moderation

We can proactively detect inappropriate, unwanted, or offensive content containing nudity, suggestiveness, violence, and other such categories.

We can detect explicit adult or suggestive content, violence, weapons, drugs, tobacco, alcohol, hate symbols, gambling, disturbing content, and rude gestures in both images and videos, and get back a confidence score for each detected label.

For videos, we can also return the timestamps for each detection. Moderation labels are organised in a hierarchical taxonomy that provides both top-level categories, such as ‘Suggestive’, and nuanced second-level categories that identify the specific type of content, such as female swimwear or partial nudity.
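A minimal sketch of thresholding such results, assuming a detection record with a hierarchical label, a confidence score and (for video) a millisecond timestamp; the field names here are illustrative, not the documented schema:

```python
# Sample moderation detections of an assumed shape. The label encodes
# the hierarchical taxonomy as "TopLevel/SecondLevel".
detections = [
    {"label": "Suggestive/Female Swimwear", "confidence": 0.91, "timestamp_ms": 1200},
    {"label": "Violence/Weapons", "confidence": 0.42, "timestamp_ms": 8300},
    {"label": "Alcohol", "confidence": 0.77, "timestamp_ms": 15000},
]

def flag(detections, min_confidence=0.6, blocked_top_levels=("Suggestive", "Violence")):
    """Return detections that exceed the confidence threshold AND fall
    under a blocked top-level category -- one possible business rule
    for a given geography or target audience."""
    flagged = []
    for d in detections:
        top_level = d["label"].split("/")[0]
        if d["confidence"] >= min_confidence and top_level in blocked_top_levels:
            flagged.append(d)
    return flagged

print([d["label"] for d in flag(detections)])  # ['Suggestive/Female Swimwear']
```

Different audiences simply get different `min_confidence` and `blocked_top_levels` values, so one set of detections can serve many rule sets.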

Using this information, we can create granular business rules for different geographies, target audiences, times of day, and so on.
