Building a Privacy-First Video Platform: 6 Months, 365 Pages of Docs, and What I Learned
The complete technical story of building Glide2 from scratch—starting with GDPR compliance, not features. A solo developer's journey through moderation systems, mobile architecture, and privacy by design.
Why I Built This
Let me start with the truth: I'm not trying to "kill TikTok." That's not realistic, and honestly, it's not the point.
I built Glide2 because I was frustrated. Every time I opened TikTok, I wondered: Where is my data going? Who's seeing it? What's being collected? And more importantly—why can't there be a short-form video platform that's transparent about these things?
So I started building. Six months later, I have a full-stack video platform that's about 85% complete on the backend and 70% done on mobile. It's running in production on Railway, I've uploaded over 20 videos to test it, and it's working on my phone right now through Expo.
But here's what makes this different from another "I built a TikTok clone" post: I didn't start with the video features. I started with compliance. GDPR compliance. Content moderation systems. Parental consent workflows. Audit logs. Data export tools.
Most startups treat privacy and compliance like homework due at midnight—something you cram in before launch because the lawyers said you have to. I built it first. It's the foundation everything else sits on.
This post is the technical story of how I did it. No marketing fluff. Just architecture decisions, trade-offs, and honest reflections on what worked and what I'd do differently.
The Problem I'm Actually Solving
Short-form video platforms have a trust problem. Users don't know:
- What data is being collected
- How content moderation decisions are made
- Where their videos end up
- Who can see their information
This isn't about being "anti-TikTok." It's about creating an alternative for people who want transparency. People in the EU who care about GDPR. Parents who want safe platforms for their kids. Creators who want to understand the rules before their content gets removed.
The technical challenge was: How do you build TikTok's features under GDPR's constraints?
That's what I've spent six months figuring out.
The Stack (And Why I Chose It)
I'm a solo developer. That shaped every decision.
Backend:
- Node.js + Express + TypeScript
- PostgreSQL with Prisma ORM
- Deployed on Railway
Mobile:
- React Native with Expo (SDK 54)
- TypeScript strict mode
Infrastructure:
- Cloudinary for video storage and CDN
- Redis (Upstash) for caching
- Socket.io for real-time notifications
- Gmail SMTP for emails
Why these choices?
Railway over AWS: I needed to ship fast without drowning in DevOps. Railway gives me managed PostgreSQL, automatic deployments from GitHub, and costs $5-20/month for beta scale. If I need more compute power (especially for AI workloads), I have a clear migration path to Azure. But I'm not optimizing for problems I don't have yet.
Prisma over raw SQL: Type safety was non-negotiable. Prisma gives me TypeScript models that match my database exactly, automatic migrations, and Prisma Studio for debugging. The performance overhead is minimal for my scale.
Expo over bare React Native: I chose developer velocity over control. Expo handles OTA updates, push notifications, and camera/video APIs out of the box. When I need to eject, I can. But right now, speed matters more than customization.
Cloudinary over self-hosted storage: Video storage and CDN delivery are hard problems. Cloudinary costs more than S3, but it includes automatic video processing, thumbnail generation, and—crucially—AI-powered content moderation through their API. That one feature justified the cost.
Redis as optional: I implemented Redis caching for the video feed, but the app gracefully degrades if Redis is unavailable. Early on, I'm optimizing for resilience over raw performance.
The Database: 32 Tables of Complexity
This isn't a simple app. The database schema reflects that.
32 tables. 16 enums for type safety.
Here are the ones that tell the story:
Core Tables
- Users (70+ fields): Everything from basic profile info to privacy settings to moderation status
- Videos: Metadata, processing status, moderation results, view counts, visibility settings
- Comments: Nested structure with moderation flags
- Likes, Follows, Shares, VideoViews: The social graph
Moderation Tables
- ModerationQueue: Videos awaiting manual review
- ModerationActions: History of every moderation decision
- UserStrikes: WARNING → MINOR → MAJOR → SEVERE escalation
- AppealRequests: Users can appeal moderation decisions
- Reports: User-submitted reports for videos/comments/users
Compliance Tables
- ConsentRecords: GDPR consent tracking per user
- ParentalConsentRequests: Minors under 13 need parent approval
- AuditLogs: Every sensitive action logged for DSA/GDPR compliance
- Sessions: Refresh token management with device tracking
- EmailVerificationTokens: Email verification flow
Privacy Tables
- PrivacySettings: Account visibility, comment permissions, interaction controls
- BlockedUsers: Who's blocking whom
- RestrictedUsers: Limited visibility (softer than blocking)
- MutedCreators: Hide content without unfollowing
Design decisions I'm proud of:
Soft deletion everywhere. No hard deletes until a GDPR deletion request is processed. Every table has a deletedAt timestamp. This gives users a grace period and makes debugging easier.
Separate strike tracking. User strikes are in their own table with expiration dates. A background job expires old strikes automatically. Users can see their strike history. This makes moderation transparent.
Audit logs for accountability. Every time we delete user data, update privacy settings, or take moderation actions, it's logged. If a user asks "why was my video removed?" we have receipts.
Consent as first-class data. GDPR consent isn't a checkbox—it's a ConsentRecord with timestamps, IP addresses, and consent type. We can prove compliance.
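To make the first of those concrete, here's roughly what soft deletion looks like at the query layer. This is a simplified sketch; the model and field names are illustrative rather than the exact schema:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// "Deleting" a video stamps deletedAt instead of removing the row.
export async function softDeleteVideo(videoId: string) {
  return prisma.video.update({
    where: { id: videoId },
    data: { deletedAt: new Date() },
  });
}

// Every read path filters out soft-deleted rows.
export async function getActiveVideos(userId: string) {
  return prisma.video.findMany({
    where: { userId, deletedAt: null },
    orderBy: { createdAt: 'desc' },
  });
}
```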
The Moderation Pipeline (The Hardest Part)
Content moderation is where most platforms fail users. The rules are opaque. Decisions feel arbitrary. Appeals go into a black hole.
I wanted to do better.
Here's how it works in Glide2:
1. AI Moderation (Automatic)
When a video is uploaded to Cloudinary, their moderation API scans it for:
- Violence
- Nudity/sexual content
- Offensive language
- Graphic content
This happens asynchronously after upload. The user's video starts uploading immediately—they're not waiting for AI results. But if the AI flags something, the video goes into the moderation queue before going live.
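The hand-off from Cloudinary back to my backend is a webhook. Here's a simplified sketch of the shape that takes; the payload fields and helper functions are illustrative, and the exact notification format and signature verification are in Cloudinary's docs:

```typescript
import express from 'express';

const router = express.Router();

// Cloudinary calls this endpoint once async moderation has finished.
// Assumes JSON body parsing is applied upstream; verify the signature before trusting the payload.
router.post('/webhooks/cloudinary/moderation', async (req, res) => {
  const { public_id: publicId, moderation_status: status } = req.body; // simplified payload shape

  if (status === 'approved') {
    await publishVideo(publicId);            // flip the video to live
  } else {
    await enqueueForManualReview(publicId);  // push into the ModerationQueue
  }

  res.sendStatus(200);
});

// Hypothetical helpers backed by Prisma; shown only to illustrate the flow.
async function publishVideo(publicId: string): Promise<void> {}
async function enqueueForManualReview(publicId: string): Promise<void> {}

export default router;
```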
2. Manual Moderation Queue
Flagged videos land in a queue that moderators (right now, just me) review. The moderator sees:
- The video
- AI confidence scores
- Why it was flagged
- User's history
They can:
- Approve (video goes live)
- Reject (video stays down + user gets a strike)
- Escalate (needs more review)
3. Strike System
Strikes escalate:
- WARNING: Informal notice, no penalty
- MINOR: Video removed, 1 strike
- MAJOR: Temporary suspension
- SEVERE: Permanent ban
Strikes expire after 90 days (configurable). A background cron job handles expiration automatically.
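The expiration job itself is small. A sketch of the idea using node-cron and Prisma, with illustrative model and field names:

```typescript
import cron from 'node-cron';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Once a day, deactivate any strike that has passed its expiry date.
cron.schedule('0 3 * * *', async () => {
  const result = await prisma.userStrike.updateMany({
    where: { isActive: true, expiresAt: { lte: new Date() } },
    data: { isActive: false },
  });
  console.log(`Expired ${result.count} strikes`);
});
```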
4. Appeals
Users can appeal any moderation decision. Appeals go into a separate queue with:
- Original decision
- User's explanation
- Moderation history
If the appeal is approved, the strike is removed and the video can be restored.
Why this matters:
Most platforms make moderation a black box. I'm making it transparent. Users know:
- Why their content was flagged (AI or manual)
- What rule they broke
- How to appeal
- When strikes expire
It's not perfect—moderation never is—but it's fair.
Privacy & Compliance (The Foundation)
This is where I spent the three months that most startups skip.
GDPR Compliance
Data Export Tool:
Users can download everything we have about them:
- Profile data
- Videos (links to Cloudinary)
- Comments
- Likes, follows, shares
- Privacy settings
- Moderation history
- Audit logs
It's delivered as a JSON file. No questions asked.
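Under the hood it's mostly one endpoint that fans out to the relevant tables and returns JSON. A simplified sketch; the relation names are illustrative and the real export covers more tables:

```typescript
import { Router } from 'express';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();
const router = Router();

// GET /me/export — bundle everything we hold on the requesting user.
router.get('/me/export', async (req, res) => {
  const userId = (req as any).userId as string; // set by the auth middleware

  const [profile, videos, comments, likes, consents] = await Promise.all([
    prisma.user.findUnique({ where: { id: userId } }),
    prisma.video.findMany({ where: { userId } }),
    prisma.comment.findMany({ where: { userId } }),
    prisma.like.findMany({ where: { userId } }),
    prisma.consentRecord.findMany({ where: { userId } }),
  ]);

  res.setHeader('Content-Disposition', 'attachment; filename="glide2-export.json"');
  res.json({ exportedAt: new Date().toISOString(), profile, videos, comments, likes, consents });
});

export default router;
```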
Data Deletion Requests:
Users can request deletion. We:
- Soft delete immediately (account disappears from the app)
- Keep data for 30 days (in case they change their mind)
- Hard delete after 30 days (except audit logs required by law)
- Log every step in AuditLogs
Consent Tracking:
Every user has a ConsentRecord:
- When they consented
- What they consented to (data processing, marketing emails, etc.)
- IP address at time of consent
- Consent version (in case terms change)
Parental Consent for Minors
Users under 13 need parental approval. Here's the flow:
- User enters date of birth during registration
- If under 13, we ask for parent's email
- Parent receives verification email with:
  - What Glide2 is
  - What data we collect
  - How to approve or deny
- Parent clicks link and approves/denies
- If approved, minor can use the app
- If denied or no response in 7 days, account is locked
Parents can revoke consent anytime. If they do, the minor's account is immediately locked.
This isn't just COPPA compliance—it's the right thing to do.
Age Verification with Risk Scoring
We don't just trust users when they enter their age. We track:
- IP address
- Device fingerprint
- Behavioral patterns
If something seems off (e.g., claimed age doesn't match behavior), we flag it for review. This helps catch kids lying about their age.
Privacy Settings (Granular Control)
Users control:
- Account visibility (Public, Private)
- Who can comment (Everyone, Followers, None)
- Who can send messages
- Who can mention them
- Who can duet/stitch their videos
- Whether their liked videos are visible
These aren't buried in settings—they're prominent and explained clearly.
Authentication & Security
JWT with Refresh Tokens
I use a dual-token system:
- Access token: Short-lived (15 minutes), used for API requests
- Refresh token: Long-lived (7 days), stored in database with device info
When the access token expires, the mobile app automatically uses the refresh token to get a new one—without the user noticing.
Why this matters for mobile: Users expect apps to "just work." They don't want to log in every 15 minutes. The refresh token system makes auth seamless.
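On the backend, issuing the pair looks roughly like this. It's a sketch under the assumption that refresh tokens are opaque random strings persisted in the Sessions table; names are illustrative:

```typescript
import jwt from 'jsonwebtoken';
import crypto from 'crypto';

const ACCESS_SECRET = process.env.JWT_ACCESS_SECRET!; // illustrative env var name

export function issueTokens(userId: string) {
  // Short-lived access token, verified on every API request.
  const accessToken = jwt.sign({ sub: userId }, ACCESS_SECRET, { expiresIn: '15m' });

  // Long-lived opaque refresh token, stored in the Sessions table with device info.
  const refreshToken = crypto.randomBytes(48).toString('hex');
  const refreshExpiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000); // 7 days

  return { accessToken, refreshToken, refreshExpiresAt };
}
```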
Token Refresh Interceptor (Mobile)
On the mobile side, I built an Axios interceptor that:
- Detects 401 (Unauthorized) responses
- Queues the failed request
- Uses refresh token to get new access token
- Replays all queued requests with new token
- If refresh fails, logs user out
This was surprisingly complex to get right, but it's critical for UX.
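In simplified form, the interceptor looks something like this. The refreshAccessToken and logout helpers are placeholders for the real implementations, and the base URL is a stand-in:

```typescript
import axios, { AxiosError } from 'axios';

const api = axios.create({ baseURL: 'https://api.example.com' }); // placeholder URL

let isRefreshing = false;
let waiters: Array<(token: string | null) => void> = [];

api.interceptors.response.use(
  (response) => response,
  async (error: AxiosError) => {
    const original = error.config as any;

    // Only handle 401s, and only once per request.
    if (error.response?.status !== 401 || original._retried) {
      return Promise.reject(error);
    }
    original._retried = true;

    if (isRefreshing) {
      // Another request is already refreshing: wait for it, then replay.
      return new Promise((resolve, reject) => {
        waiters.push((token) => {
          if (!token) return reject(error);
          original.headers.Authorization = `Bearer ${token}`;
          resolve(api(original));
        });
      });
    }

    isRefreshing = true;
    try {
      const token = await refreshAccessToken();    // placeholder: POST the refresh token
      waiters.forEach((notify) => notify(token));  // replay everything that queued up
      original.headers.Authorization = `Bearer ${token}`;
      return api(original);
    } catch (refreshError) {
      waiters.forEach((notify) => notify(null));
      await logout();                              // placeholder: clear storage, go to login
      return Promise.reject(refreshError);
    } finally {
      waiters = [];
      isRefreshing = false;
    }
  }
);

// Placeholders, not the real implementations.
async function refreshAccessToken(): Promise<string> { return 'new-access-token'; }
async function logout(): Promise<void> {}
```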
Rate Limiting
Different endpoints have different limits:
- Auth endpoints (login, register): 5 requests per 15 minutes
- Content creation (upload video, post comment): 10 per hour
- Read operations (view feed, search): 100 per 15 minutes
This protects against abuse without hurting legitimate users.
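With express-rate-limit this is only a few lines per tier. The route paths below are illustrative:

```typescript
import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();

// Auth endpoints: 5 requests per 15 minutes per IP.
const authLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 5 });

// Content creation: 10 per hour.
const createLimiter = rateLimit({ windowMs: 60 * 60 * 1000, max: 10 });

// Read operations: 100 per 15 minutes.
const readLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 100 });

app.use('/api/auth', authLimiter);
app.use('/api/videos/upload', createLimiter);
app.use('/api/feed', readLimiter);
```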
Session Management
Every login creates a Session record with:
- Refresh token
- Device info (OS, app version)
- IP address
- Last used timestamp
Users can see all active sessions and revoke them (like "Log out everywhere" on Netflix).
Real-Time Notifications
Users expect instant notifications when someone likes their video or comments. I built this with Socket.io.
How It Works
- When user logs in, mobile app opens WebSocket connection
- Connection is authenticated with JWT
- Backend sends events to user's specific room (based on user ID)
- Mobile app receives event, shows notification, invalidates React Query cache
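Server-side, the core of it is a JWT handshake plus per-user rooms. A sketch with illustrative event and secret names:

```typescript
import http from 'http';
import { Server } from 'socket.io';
import jwt from 'jsonwebtoken';

const httpServer = http.createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });

// Authenticate the WebSocket handshake with the same JWT used by the REST API.
io.use((socket, next) => {
  try {
    const token = socket.handshake.auth.token as string;
    const payload = jwt.verify(token, process.env.JWT_ACCESS_SECRET!) as { sub: string };
    socket.data.userId = payload.sub;
    next();
  } catch {
    next(new Error('unauthorized'));
  }
});

// Each user joins a room keyed by their ID so events can be targeted.
io.on('connection', (socket) => {
  socket.join(`user:${socket.data.userId}`);
});

// Called from anywhere in the backend to push an event to one user's devices.
export function notifyUser(userId: string, payload: unknown) {
  io.to(`user:${userId}`).emit('notification:new', payload);
}
```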
Event Types
- notification:new: Someone liked, commented, followed, or replied
- moderation:action: Video was removed or strike issued
- appeal:resolved: Appeal was approved/denied
Connection Management
The mobile app handles:
- Auto-reconnect on network loss
- Token expiration (reconnect with new token)
- App backgrounding (close connection to save battery)
- App foregrounding (reconnect)
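On the client, the backgrounding and foregrounding piece is driven by React Native's AppState. A stripped-down sketch, with the server URL as a placeholder:

```typescript
import { AppState } from 'react-native';
import { io, Socket } from 'socket.io-client';

let socket: Socket | null = null;

export function connectSocket(accessToken: string) {
  socket = io('https://api.example.com', {
    auth: { token: accessToken }, // verified by the server's JWT middleware
    reconnection: true,           // auto-reconnect on network loss
  });
}

// Drop the connection in the background to save battery; reconnect in the foreground.
AppState.addEventListener('change', (state) => {
  if (!socket) return;
  if (state === 'active') socket.connect();
  else socket.disconnect();
});
```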
Real-time is expected but adds significant complexity. The upside? Users feel like the app is alive.
Video Upload & Processing
The Upload Flow
- User records video (or picks from gallery)
- Video stays local while they add caption, hashtags, category
- They hit "Post"
- Video uploads to backend with metadata
- Backend uploads to Cloudinary
- Cloudinary processes: compression, thumbnails, multiple qualities
- AI moderation runs (async)
- If approved, video goes live in feed
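The backend-to-Cloudinary step is a single SDK call. A sketch with illustrative options; the moderation add-on and eager transformations are configured separately in Cloudinary:

```typescript
import { v2 as cloudinary } from 'cloudinary';

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});

// Upload the video file the mobile app sent us and let Cloudinary process it asynchronously.
export async function uploadVideoToCloudinary(localPath: string, userId: string) {
  const result = await cloudinary.uploader.upload(localPath, {
    resource_type: 'video',
    folder: `videos/${userId}`,
    eager_async: true, // thumbnails and renditions are generated in the background
  });
  return { publicId: result.public_id, url: result.secure_url };
}
```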
Progress Tracking
The mobile app shows upload progress from 0-100%. This requires:
- Axios onUploadProgress callback
- Backend progress relay (tricky with multi-part uploads)
- State management in React
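The mobile side of progress tracking is mostly Axios's onUploadProgress callback feeding a state setter. A minimal sketch, with a placeholder URL:

```typescript
import axios from 'axios';

// Post the video as multipart form data and report percentage progress to the UI.
export async function postVideo(form: FormData, onProgress: (pct: number) => void) {
  await axios.post('https://api.example.com/videos', form, {
    headers: { 'Content-Type': 'multipart/form-data' },
    onUploadProgress: (event) => {
      if (event.total) onProgress(Math.round((event.loaded / event.total) * 100));
    },
  });
}
```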
Transcription System (Work in Progress)
I built an in-house transcription system using OpenAI Whisper for auto-generated captions. This aligns with the privacy-first philosophy—I'm not sending videos to third-party captioning services.
Current challenge: Transcription slows down uploads. I'm investigating whether to:
- Make it async (captions appear later)
- Optimize the Whisper model
- Queue transcription jobs
This is active work for the next few weeks.
Why Cloudinary?
I could have used S3 + CloudFront + FFmpeg for video processing. It would be cheaper.
But Cloudinary gives me:
- Automatic adaptive bitrate streaming
- Thumbnail generation
- Video optimization
- AI moderation API
- Global CDN
For a solo dev, the time saved is worth the cost. I'm paying for infrastructure I don't have to build.
Mobile App Architecture
The TikTok-Style Feed
The core UX is a vertical scrolling feed with auto-play. Technically, this is:
- FlatList with pagingEnabled
- onViewableItemsChanged to detect active video
- Active video plays, all others pause
- Pause on app background (battery management)
- Pause on navigation away
This sounds simple but has edge cases:
- What if user swipes too fast?
- What if video fails to load?
- What about memory management with many videos in the list?
I spent a week tuning this to feel smooth.
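The skeleton of the feed, stripped of styling and error handling, looks something like this. The VideoCard stub stands in for the real player component (which wraps expo-av):

```tsx
import React, { useCallback, useRef, useState } from 'react';
import { Dimensions, FlatList, View, ViewToken } from 'react-native';

const { height } = Dimensions.get('window');
const viewabilityConfig = { itemVisiblePercentThreshold: 80 };

type FeedVideo = { id: string; url: string };

// Stand-in for the real player; in Glide2 this wraps expo-av's <Video>.
function VideoCard({ uri, isActive }: { uri: string; isActive: boolean }) {
  return <View style={{ height }} />;
}

export function Feed({ videos }: { videos: FeedVideo[] }) {
  const [activeId, setActiveId] = useState<string | null>(null);

  // Must stay stable across renders, hence useRef.
  const onViewableItemsChanged = useRef(
    ({ viewableItems }: { viewableItems: ViewToken[] }) => {
      const visible = viewableItems.find((v) => v.isViewable);
      if (visible) setActiveId((visible.item as FeedVideo).id);
    }
  ).current;

  const renderItem = useCallback(
    ({ item }: { item: FeedVideo }) => (
      <VideoCard uri={item.url} isActive={item.id === activeId} />
    ),
    [activeId]
  );

  return (
    <FlatList
      data={videos}
      keyExtractor={(item) => item.id}
      renderItem={renderItem}
      pagingEnabled
      showsVerticalScrollIndicator={false}
      onViewableItemsChanged={onViewableItemsChanged}
      viewabilityConfig={viewabilityConfig}
      windowSize={5}        // limit how many screens of items stay mounted
      removeClippedSubviews // free memory for off-screen items
    />
  );
}
```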
State Management Strategy
I use different tools for different state:
Server state (videos, users, comments): React Query (TanStack Query)
- Automatic caching
- Automatic refetching
- Optimistic updates for likes
- Infinite scroll pagination
Auth state: Context API + AsyncStorage
- User data
- Tokens
- Login/logout functions
- Persists across app restarts
UI state: Component state (useState, useReducer)
- Modal visibility
- Form inputs
- Loading states
Why no Redux? React Query handles most of what Redux would do—caching, async state, updates. For auth, Context is simpler. I'm avoiding complexity I don't need.
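As one example, the optimistic like update with React Query looks roughly like this. The query keys and the Video shape are illustrative:

```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query';
import axios from 'axios';

// Optimistic like: flip the UI immediately, roll back if the request fails.
type Video = { id: string; likeCount: number; likedByMe: boolean };

export function useLikeVideo(videoId: string) {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: () => axios.post(`/videos/${videoId}/like`),
    onMutate: async () => {
      await queryClient.cancelQueries({ queryKey: ['video', videoId] });
      const previous = queryClient.getQueryData<Video>(['video', videoId]);
      if (previous) {
        queryClient.setQueryData<Video>(['video', videoId], {
          ...previous,
          likedByMe: true,
          likeCount: previous.likeCount + 1,
        });
      }
      return { previous };
    },
    onError: (_err, _vars, context) => {
      if (context?.previous) {
        queryClient.setQueryData(['video', videoId], context.previous);
      }
    },
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ['video', videoId] });
    },
  });
}
```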
Parental Consent Interceptor
When the backend returns 403 with PARENTAL_CONSENT_REQUIRED, the mobile app:
- Catches it in Axios interceptor
- Navigates to "Pending Approval" screen
- Explains situation to minor
- Prevents access to main app
This was tricky to build—interceptors and navigation don't naturally play well together.
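The trick that made it work is a navigation ref that lives outside the component tree, so the interceptor can navigate directly. A sketch, with the screen name and error payload shape as assumptions:

```typescript
import axios from 'axios';
import { createNavigationContainerRef } from '@react-navigation/native';

// A navigation ref lets non-component code (like an interceptor) trigger navigation.
export const navigationRef = createNavigationContainerRef();

axios.interceptors.response.use(
  (response) => response,
  (error) => {
    const code = error.response?.data?.code; // assumed error payload shape
    if (error.response?.status === 403 && code === 'PARENTAL_CONSENT_REQUIRED') {
      if (navigationRef.isReady()) {
        navigationRef.navigate('PendingApproval' as never); // assumed screen name
      }
    }
    return Promise.reject(error);
  }
);
```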
What I'd Do Differently
Let me be honest about mistakes and tech debt.
TypeScript Config
I started with strict: false in tsconfig to move fast. This was a mistake. Now I'm living with type issues I would have caught early. If I started over: strict mode from day one.
Prisma Client Instantiation
I create a new Prisma client in every service file. This works but it's not ideal. Should have centralized it. It's on my refactor list.
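For reference, the refactor is small: a single module that exports one shared client, imported everywhere else. Roughly:

```typescript
// lib/prisma.ts — one shared Prisma client for the whole backend.
import { PrismaClient } from '@prisma/client';

export const prisma = new PrismaClient();
```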
Testing Coverage
I started writing tests late. Should have done it earlier. I have basic test files but coverage is maybe 20%. For production, I need 80%+.
API URL Management
The mobile app has the Railway URL hardcoded temporarily. Should be environment-based. Easy fix but it's tech debt.
Redis Dependency
I built Redis as optional, which was smart. But I'm not sure I'm using it optimally. Feed caching helps but I haven't load-tested it properly.
These aren't deal-breakers. Every project has debt. The key is knowing where it is so you can prioritize paying it down.
Costs & Scale Considerations
Let's talk money.
Current monthly costs (beta):
- Railway (database + hosting): $15-25
- Cloudinary (video storage): $20-40 (depends on uploads)
- Upstash Redis: $0-10
- Email (Gmail SMTP): $0
- Total: ~$50/month
At what scale does this break?
Railway can handle:
- ~500 concurrent users
- Moderate database load
- Light compute (no heavy AI)
If transcription becomes compute-heavy, I'll need to migrate AI workloads to Azure or use queued jobs on Railway's paid tier.
Cloudinary pricing scales with storage and transformations. At 1,000 videos/month, I'm looking at $100-150/month. At 10,000 videos, it could be $500+.
Why I'm not over-optimizing:
Beta traffic will likely be modest. I'm targeting 5-30 engaged testers initially. Real usage will show me where bottlenecks are. Premature optimization is waste.
When I hit scale problems, I'll have data to guide decisions.
Current State of the Project
Backend: ~85% complete
- All core features working
- Deployed to Railway
- Processing 20+ test videos
- Real-time notifications working
Mobile: ~70% complete
- Full auth flow
- Video recording and upload
- Feed with social features
- Privacy settings
- Notifications
What's left:
- Caption performance optimization
- Google Play Console compliance checklist (14 sections)
- iOS testing (need to get an iPhone)
- Polish based on alpha feedback
- Final security audit
What's Next
I'm targeting a private alpha in early January with 10-15 testers. Developers, privacy-conscious users, people who will give brutal feedback.
Public beta in late January targeting EU users through:
- Reddit communities (r/privacy, r/degoogle, r/europrivacy)
- BetaList
- Privacy-focused platforms
- Building-in-public content (like this post)
I'm not chasing viral growth. I'm looking for the right 100-500 users who actually care about privacy and transparency.
Why I'm Sharing This
Two reasons:
- Accountability. Building in public keeps me honest. If I say I'm launching in January, I'm launching in January.
- Feedback. If you've built something similar, I want to learn from you. If you spot architectural issues, tell me now—not after 1,000 users find them.
I've documented this entire journey—365 pages across 65+ files. Database schemas, API specs, deployment guides, compliance frameworks. If there are specific implementation details you want to see, let me know.
Let's Talk
Questions I'd love your input on:
- Have you built content moderation systems? What did you learn?
- How would you handle video transcription at scale without sacrificing privacy?
- What's the biggest technical risk you see in this architecture?
- If you were building this, what would you do differently?
You can follow along as I finish the build and launch beta at glide2.app.
Thanks for reading. Now back to fixing that caption performance issue.
Questions? Reach out at hello@glide2.app