Last Updated: March 19, 2026
At Naviya, we are committed to providing a safe, respectful, and creative environment for all our users. Given the sensitive nature of AI-generated content, we have established robust content monitoring and moderation policies to prevent the dissemination of illegal, harmful, or inappropriate content.
Users are strictly prohibited from creating, sharing, or promoting content that is illegal, harmful, or otherwise inappropriate.
To maintain community safety, we employ a multi-layered approach to content review:
Our systems use AI-driven automated filtering to scan text and images in real time.
Content flagged by our automated systems or reported by users is reviewed by our dedicated moderation team. This ensures that nuanced cases are handled with human judgment and care.
We empower our community to help keep Naviya safe. Users can report any content, characters, or interactions that they believe violate our policies through the "Report" button available within the application. All reports are investigated promptly.
When a violation of our content policy is identified, we take appropriate enforcement action against the content and, where warranted, the account responsible.
As AI technology evolves, so do our monitoring techniques. We continuously update our filtering models and moderation guidelines to address emerging risks and ensure the highest standards of safety for our community.
If you have any questions regarding our Content Monitoring Policy, please contact us at support@naviya.chat.