Elon Musk and Stephen Hawking have both suggested that AI is highly likely to become a threat to humanity, but Securly, the student safety company, is intent on using it in a far more positive way.
Securly’s AI technology continuously scans emails, web searches, social media posts and more, flagging mentions or suggestions of suicide, self-harm, bullying or violence. As it detects and categorises issues, it alerts school leadership and safeguarding staff in real time.
Its quarterly social impact report describes how its AI-based filtering monitors 55 million daily online interactions from 10 million students. Over three months, it identified nearly 25,000 emails, searches and social media posts mentioning self-harm, depression, violence or suicide. Further analysis shows that the technology, working in partnership with school leadership, helped save more than 100 lives.
So how does this AI technology work? Take a look at our white paper, which describes how Securly’s Natural Language Processing (NLP) engines operate.
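The white paper covers the real system; as a rough illustration of the general idea only, the sketch below shows a toy phrase-matching flagger. This is not Securly’s method (a production NLP engine is far more sophisticated), and every category name and phrase here is invented for the example.

```python
# Toy sketch of a content flagger: scan text, match it against
# per-category phrase lists, and emit alerts. Categories and phrases
# below are illustrative assumptions, not Securly's actual data.
from dataclasses import dataclass

CATEGORIES = {
    "self_harm": ["hurt myself", "cut myself"],
    "suicide": ["end my life", "want to die"],
    "bullying": ["nobody likes you", "everyone hates you"],
    "violence": ["hurt them", "bring a weapon"],
}

@dataclass
class Alert:
    category: str  # which category of concern was matched
    phrase: str    # the phrase that triggered the alert
    text: str      # the original message, for reviewer context

def flag(text: str) -> list[Alert]:
    """Return an Alert for every category phrase found in the text."""
    lowered = text.lower()
    return [
        Alert(category, phrase, text)
        for category, phrases in CATEGORIES.items()
        for phrase in phrases
        if phrase in lowered
    ]

# Example: a concerning message produces an alert; a benign one does not.
alerts = flag("I just want to end my life")
print([(a.category, a.phrase) for a in alerts])
print(flag("see you at lunch tomorrow"))
```

A real engine would use trained language models rather than fixed phrase lists, to catch paraphrases and context that simple matching misses.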
For more information on the Securly cloud filtering solution, please get in touch.