Trust & Safety
At Regalust, nothing is more important than the safety of our community and the trust of our users. Our core values, such as inclusivity and freedom of expression, are only possible when our platform maintains safety, integrity, and trust among our adult users, as well as our business and content partners. We are deeply committed to eliminating illegal content, including non-consensual intimate material and child sexual abuse material (CSAM). This is a responsibility shared by every online platform, and it requires collective action, cooperation, and constant vigilance.
Over the years, we have implemented numerous measures to protect our platform from illegal and abusive content. Our Trust & Safety policies continue to evolve so that we can better identify, review, remove, and report illegal material, both before it is made available on our platform and after it is reported to us. While leading non-profit and advocacy groups recognize our efforts, we remain aware that this must be an ongoing process that demands continuous innovation.
Further Protections for Our Community
To enhance the safety of our community, we have implemented additional measures, such as banning content downloads and restricting uploads to verified users only. These steps give us greater control over the content being shared on our platform. As new technology and tools become available, we continue to improve our verification and moderation systems through collaboration and partnerships.
Trusted Flaggers and Reporting Features
Members of our Trusted Flagger Program, which now includes dozens of non-profit organizations, can alert us to potential CSAM or non-consensual content. In addition, users across the Regalust Network of Sites can report potentially abusive material using the flagging features available on content, comments, and profiles. This system allows users to easily alert us to content that may be illegal or violate our Terms of Service. All flags and reports are confidential, and our team reviews flagged content promptly.
Content Moderation
We use a combination of automated tools, artificial intelligence, and human moderation to protect our community from harmful content. All content is reviewed before going live on our platform, and our monitoring does not stop there: our team proactively sweeps uploaded content for potential violations and audits the site for any gaps in our moderation process that might allow harmful material to remain on the platform.