Child Safety Standards
Last Updated: November 27, 2024
At Glide2, the safety and protection of all users—especially children and young people—are our highest priority. We are committed to maintaining a platform that is free from child sexual abuse material (CSAM), exploitation, harassment, and other harmful content.
This page outlines our comprehensive safety standards, technical measures, and enforcement policies that make Glide2 a safe space for content creation and consumption.
1. Zero Tolerance for CSAM and Child Exploitation
ZERO TOLERANCE POLICY
Glide2 maintains an absolute zero-tolerance policy for Child Sexual Abuse Material (CSAM) and Child Sexual Abuse and Exploitation (CSAE). Any content that sexualizes, exploits, or endangers minors is strictly prohibited and will result in immediate account termination and reporting to law enforcement authorities.
We define prohibited content as including, but not limited to:
- Any visual depiction of sexually explicit conduct involving minors
- Content that sexualizes, grooms, or solicits minors
- Content facilitating or promoting child exploitation
- Sharing, requesting, or trading CSAM in any form
- Adult sexual content featuring individuals who appear to be minors
- Drawn, animated, or digitally created content depicting minors in sexual situations
Immediate Actions Taken:
- Instant removal of violating content
- Permanent account suspension
- IP address ban and device fingerprinting
- Preservation of evidence for law enforcement
- Reporting to the National Center for Missing & Exploited Children (NCMEC)
- Cooperation with local and international law enforcement agencies
2. AI-Powered Content Moderation System
Glide2 employs a sophisticated multi-layered content moderation system powered by artificial intelligence and human oversight to detect and prevent harmful content before it reaches our platform.
2.1 Ollama-Based AI Moderation
Our content moderation pipeline is built on Ollama, a framework for running specialized vision and language models locally on infrastructure we control. This approach provides:
- Privacy-First Analysis: Content is analyzed on our secure servers without third-party data sharing
- Real-Time Scanning: All uploaded videos are scanned before publication
- Multi-Modal Detection: Simultaneous analysis of video frames, audio, captions, and metadata
- Contextual Understanding: AI models trained to understand context and nuance
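As an illustrative sketch only—the model name, endpoint, prompt, and thresholds below are hypothetical and not Glide2's production configuration—a pipeline like the one described above might score each video frame with a locally hosted Ollama vision model and then route the result to automatic removal, human review, or publication:

```python
import json
import urllib.request

# Hypothetical values for illustration; not Glide2's actual setup.
OLLAMA_URL = "http://localhost:11434/api/generate"
VISION_MODEL = "moderation-vision"  # hypothetical fine-tuned model
BLOCK_THRESHOLD = 0.9    # auto-remove at or above this confidence
REVIEW_THRESHOLD = 0.5   # queue for human review at or above this

def score_frame(frame_b64: str) -> float:
    """Ask a locally hosted Ollama vision model for a harm score (0-1)."""
    payload = {
        "model": VISION_MODEL,
        "prompt": ("Rate this frame for policy-violating content "
                   "from 0 to 1. Reply with only the number."),
        "images": [frame_b64],  # Ollama accepts base64-encoded images
        "stream": False,
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return float(json.load(resp)["response"].strip())

def route(score: float) -> str:
    """Pure decision step: map a model confidence score to an action."""
    if score >= BLOCK_THRESHOLD:
        return "block"          # removed before publication
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # medium confidence: moderator queue
    return "allow"
```

Keeping the routing logic separate from the model call mirrors the "AI first line of defense, human review for medium-to-high confidence" split described in section 2.3.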
2.2 What Our AI Detects
Visual Content Analysis:
- Nudity and sexually explicit content
- Violence and graphic imagery
- Self-harm and dangerous activities
- Hate symbols and extremist content
- Age-inappropriate content featuring minors
Text and Audio Analysis:
- Grooming language and predatory behavior
- Solicitation of minors
- Threats and harassment
- Sharing of personal information (doxxing)
- Coordinated harmful campaigns
2.3 Human Review and Oversight
While our AI systems provide the first line of defense, human moderators review:
- Content flagged by AI with medium-to-high confidence scores
- User reports of harmful content
- Appeals of content removal decisions
- Edge cases requiring contextual judgment
Our moderation team is trained in child safety protocols and receives ongoing education on emerging threats and exploitation tactics.
3. In-App Reporting Mechanisms
Every user has the power to report content that violates our Community Guidelines or endangers children. Our reporting system is designed to be fast, accessible, and confidential.
3.1 How to Report Content
On Videos:
- Tap the three-dot menu (⋯) on any video
- Select "Report"
- Choose the reason (e.g., "Child Safety," "Sexual Content," "Harassment")
- Provide additional details (optional but helpful)
- Submit report—your identity remains confidential
On Comments:
- Long-press any comment
- Select "Report Comment"
- Choose the violation type
- Submit
On User Profiles:
- Visit the user's profile
- Tap the three-dot menu
- Select "Report User"
- Specify concerns (e.g., "Predatory Behavior," "Impersonation of Minor")
3.2 Priority Handling for Child Safety Reports
Reports related to child safety receive the highest priority:
- Immediate Review: CSAM reports are reviewed within 1 hour
- Expedited Action: Suspected CSAM is removed within 2 hours of confirmation
- 24/7 Monitoring: Dedicated safety team operates around the clock
- Confidential Reporting: Reporter identity is protected from reported users
3.3 Anonymous Reporting Options
Users can report content without fear of retaliation:
- All in-app reports are anonymous to the reported party
- Users can report content without being logged in (web version)
- External reporting via email: safety@glide2.app
4. User Blocking and Restriction Features
Glide2 provides robust tools to help users—especially young people and parents—control their experience and protect themselves from unwanted interactions.
4.1 Blocking Users
When you block another user:
- They cannot view your profile or content
- They cannot send you messages or comments
- They cannot follow you or see your activity
- Existing follows are removed
- They are not notified of the block
4.2 Privacy Settings for Minors (Ages 13-17)
Accounts for users aged 13-17 have enhanced default privacy settings:
- Private by Default: Teen accounts are set to private upon creation
- Comment Restrictions: Only approved followers can comment
- Message Restrictions: Only friends can send direct messages
- Discoverability Limits: Reduced visibility in search and recommendations
- Download Restrictions: Others cannot download videos from teen accounts
- Location Disabled: Location sharing is disabled by default
4.3 Restricted Words Filter
Users can enable automatic filtering of comments containing:
- Profanity and offensive language
- Sexual solicitation
- Spam and scam attempts
- Custom keyword lists (user-defined)
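A minimal sketch of how such a filter could combine a built-in list with user-defined keywords (the terms shown are placeholders, not our actual lists):

```python
import re

def build_filter(blocked_terms):
    """Compile a case-insensitive, whole-word pattern from a keyword list."""
    pattern = r"\b(" + "|".join(re.escape(t) for t in blocked_terms) + r")\b"
    return re.compile(pattern, re.IGNORECASE)

def hide_comment(comment: str, user_terms=()) -> bool:
    """Return True if the comment should be hidden from the viewer."""
    base_terms = ["spamword"]  # placeholder for the platform's built-in lists
    flt = build_filter(list(base_terms) + list(user_terms))
    return flt.search(comment) is not None
```

Whole-word matching (`\b` boundaries) and `re.escape` keep the filter from over-matching inside longer words or breaking on punctuation in user-supplied keywords.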
4.4 Mute and Restrict Options
Less severe than blocking, these features allow users to:
- Mute: Hide content from specific users without blocking them
- Restrict: Limit interactions while keeping the relationship intact
5. Age Verification and Parental Consent Systems
5.1 Age Verification at Registration
All new users must provide their date of birth during account creation:
- Minimum Age: Users must be at least 13 years old
- Age-Gating: Users under 13 are prevented from creating accounts
- AI-Assisted Verification: Profile photos are analyzed to detect age misrepresentation
- Behavioral Signals: System monitors for patterns indicating false age declaration
5.2 Parental Consent (Ages 13-15 in Certain Jurisdictions)
In jurisdictions requiring parental consent for users under 16 (e.g., GDPR territories), we implement:
- Parental email verification required for accounts aged 13-15
- Parents receive notification of account creation
- Parents can request account deletion or content restrictions
- Enhanced privacy settings cannot be disabled without parental approval
5.3 Ongoing Age Monitoring
We continuously monitor for signs of age falsification:
- Users suspected of being underage may be asked to provide ID verification
- Accounts confirmed to belong to users under 13 are immediately deleted
- Appeals process available for false positives
6. Cooperation with Law Enforcement
Glide2 works proactively with law enforcement agencies worldwide to combat child exploitation and bring offenders to justice.
6.1 Mandatory Reporting
When we detect CSAM, we immediately:
- File a report with the National Center for Missing & Exploited Children (NCMEC)
- Preserve all evidence including content, metadata, and user information
- Coordinate with local law enforcement jurisdictions
- Provide technical assistance for ongoing investigations
6.2 Proactive Partnerships
We maintain relationships with:
- National Center for Missing & Exploited Children (NCMEC)
- Internet Watch Foundation (IWF)
- Federal Bureau of Investigation (FBI)
- INTERPOL and international law enforcement agencies
- Technology Coalition and other industry safety groups
6.3 Hash Database Integration
Glide2 uses industry-standard hash databases to prevent re-upload of known CSAM:
- PhotoDNA technology integration (Microsoft)
- Comparison against NCMEC and IWF hash databases
- Automatic blocking of known CSAM before upload completes
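In simplified form, the matching step works as follows. PhotoDNA is a proprietary perceptual hash, so this sketch substitutes an exact-match SHA-256 check purely to illustrate the upload-time blocklist lookup; the function names and flow are illustrative, not our implementation:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Stand-in for a perceptual hash such as PhotoDNA (proprietary)."""
    return hashlib.sha256(data).hexdigest()

def check_upload(chunks, bad_hashes) -> bool:
    """Hash each upload chunk against a known-bad set.

    Returning False on the first hit lets the platform reject the
    upload before it completes, as described above.
    """
    for chunk in chunks:
        if sha256_hex(chunk) in bad_hashes:
            return False  # blocklist hit: abort the upload
    return True
```

In production, `bad_hashes` would be populated from the NCMEC and IWF hash databases, and a perceptual hash would also catch re-encoded or lightly altered copies that an exact hash would miss.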
7. How to Contact Us
If you have concerns about child safety on our platform, need to report CSAM, or have questions about our safety practices, please contact us immediately:
Child Safety Contact Information
General Child Safety Concerns:
safety@glide2.app
CSAM Reporting (Urgent):
csam@glide2.app
Monitored 24/7 - Response within 1 hour
General Support:
support@glide2.app
Parent/Guardian Inquiries:
parents@glide2.app
If you believe a child is in immediate danger:
Do not wait. Contact local emergency services (911 in the US) or your country's emergency hotline immediately. You can also report to the National Center for Missing & Exploited Children's CyberTipline at www.cybertipline.org.
8. GDPR Considerations for Minors' Data
Glide2 is fully compliant with the General Data Protection Regulation (GDPR) and provides enhanced protections for the personal data of children and young people.
8.1 Specific Protection for Children's Data
Under the GDPR, children's personal data merits specific protection, and we apply additional safeguards:
- Age-appropriate privacy notices written in clear, plain language
- Parental consent mechanisms for users under 16 (or lower age as per local law)
- Enhanced data minimization—we collect only what is strictly necessary
- Shorter data retention periods for minor accounts
8.2 Parental Rights Under GDPR
Parents and guardians have the right to:
- Access: Request a copy of their child's personal data
- Rectification: Correct inaccurate information
- Erasure: Request deletion of their child's account and data
- Object: Object to certain processing activities
- Portability: Receive data in a machine-readable format
8.3 Data Processing Transparency
For minor users, we provide:
- Clear explanations of what data we collect and why
- Simplified privacy dashboards showing data usage
- Easy-to-use data export and deletion tools
- Regular reminders about privacy settings and controls
8.4 Profiling and Automated Decision-Making
For users under 18:
- Limited algorithmic profiling for content recommendations
- No targeted advertising based on behavioral data
- No sharing of data with third-party advertisers
- Transparency about how content recommendations work
8.5 Right to Be Forgotten
Minors have an enhanced right to erasure:
- Content posted as a minor can be deleted even after turning 18
- Complete account deletion available at any time
- Data is permanently deleted within 30 days (except where legally required to retain)
- No "shadow profiles" or data retention for marketing purposes
To exercise GDPR rights for minor accounts, contact: gdpr@glide2.app
9. Continuous Improvement and Transparency
Child safety is not a one-time effort—it requires constant vigilance, adaptation, and transparency.
9.1 Regular Safety Audits
- Quarterly reviews of content moderation effectiveness
- Independent third-party safety assessments (annually)
- Red-team testing for potential safety vulnerabilities
- User feedback integration into safety protocols
9.2 Transparency Reporting
We publish transparency reports that include:
- Number of CSAM reports filed with NCMEC
- Content removal statistics by category
- Account suspension data
- Average response times for safety reports
- Law enforcement requests and responses
9.3 Industry Collaboration
We actively participate in:
- Technology Coalition working groups
- GIFCT (Global Internet Forum to Counter Terrorism)
- Sharing of best practices with other platforms
- Research partnerships with child safety organizations
10. Updates to This Policy
We may update these Child Safety Standards as we enhance our protections or as regulations evolve. Material changes will be communicated through:
- In-app notifications to all users
- Email notifications to registered users
- Updates to this page with a revised "Last Updated" date
- Announcements on our blog and social media channels
Our Commitment
At Glide2, we believe that creating a safe platform for young people is not just a legal obligation—it is a moral imperative. We are committed to continuous investment in technology, training, and partnerships to ensure that Glide2 remains a place where creativity thrives and safety is never compromised.
Together, we can build a safer internet for everyone.
Version: 1.0
Last Updated: November 27, 2024
Next Review: February 27, 2025