
Trust & Safety

Last Updated: April 30, 2026

Socialync is a social media management platform. People use us to schedule and publish content to Facebook, Instagram, TikTok, YouTube, X, LinkedIn, Threads, Bluesky, and other connected services. That means we sit between creators and the platforms they post to — and we take that position seriously.

This page explains how we keep that pipeline trustworthy: how to report abuse, how fast we respond, the safety controls we have in place, and how we work with connected platforms and law enforcement.

Report abuse

Email: safety@socialync.io

For child sexual abuse material (CSAM): also report to the NCMEC CyberTipline at report.cybertip.org.

What to include in a report

  • The Socialync account, profile, or post URL involved (if known)
  • Links to the published content on the destination platform (Instagram URL, TikTok URL, etc.)
  • A short description of what's wrong (impersonation, NCII, deepfake, harassment, scam, etc.)
  • If the content depicts you and you have not consented, please say so explicitly
  • Any deadline or legal context (court order, takedown notice, etc.)

Response SLA

  • Acknowledgement: we confirm receipt of reports promptly during business hours
  • Non-consensual intimate imagery (NCII): we act on valid reports within 48 hours, in line with the US Take It Down Act
  • CSAM: immediate action — content removal, account termination, NCMEC report, evidence preservation
  • Other abuse categories: reviewed in good faith and actioned as warranted

What's not allowed on Socialync

The full list lives in our Acceptable Use Policy. Here are the highlights:

  • Non-consensual intimate imagery, including AI-generated imagery
  • Deepfakes of real people without their documented consent
  • Impersonation of real people, brands, or entities
  • Child sexual abuse material or sexualization of minors in any form
  • Harassment, threats, doxing, and incitement to violence
  • Spam, scams, phishing, and coordinated inauthentic behavior
  • IP infringement and unauthorized use of copyrighted material
  • AI-generated posts published without prior human review and approval

Our position on AI-generated content

Socialync builds AI-assisted features and supports the Model Context Protocol (MCP). We think AI can make creators faster and better. But we don't believe in fully autonomous publishing, and our product is built to enforce that.

AI agents and MCP integrations create drafts. A human approves drafts. No AI agent connected to Socialync — through MCP or otherwise — can publish directly. Every AI-proposed post lands in your draft approval queue and waits for a human to review and approve it. Bypassing or scripting around this step (e.g. auto-approving drafts programmatically) violates our Terms of Service.

We do this because the alternative — fully autonomous bots posting to real audiences — is how synthetic-media abuse scales. Keeping a human in the loop on every AI-generated post is how we stay a tool that legitimate creators trust and that platform trust & safety teams can work with.
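To make the gate concrete, here is a minimal sketch of how an approval check like this can be enforced in code. All names and types below (`Draft`, `Source`, `approve`, `publish`) are illustrative, not Socialync's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Source(Enum):
    MANUAL = "manual"
    SCHEDULED = "scheduled"
    MCP = "mcp"  # draft proposed by an AI agent over the Model Context Protocol

class State(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"

@dataclass
class Draft:
    body: str
    source: Source
    state: State = State.PENDING_REVIEW
    approved_by: Optional[str] = None  # id of the human reviewer, if any

def approve(draft: Draft, reviewer_id: str) -> None:
    """Record a human reviewer's approval of a pending draft."""
    draft.state = State.APPROVED
    draft.approved_by = reviewer_id

def publish(draft: Draft) -> str:
    """Refuse to publish any AI-originated draft that lacks human approval."""
    if draft.source is Source.MCP and draft.state is not State.APPROVED:
        raise PermissionError("AI-generated drafts require human approval")
    return "published: " + draft.body
```

The point of structuring it this way is that the publish path itself enforces the rule, so no client — scripted or otherwise — can reach the destination platform without a recorded human approval.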

Safety controls we have in place

Account-level

  • Email verification for new accounts
  • OAuth-only connections to social platforms — we never ask for or store your platform passwords
  • OAuth tokens encrypted at rest (AES-256), refreshed regularly, deleted on disconnect
  • Account age requirement: 18+ to create a Socialync account

Posting pipeline

  • Per-account rate limiting on posting and AI generation endpoints
  • Mandatory human approval step for AI-generated and MCP-generated drafts
  • Per-platform compliance: every post is checked against the destination platform's basic policy requirements (length, media format, age-restriction flags) before transmission
  • Connection-level audit logs: every published post records which user account triggered it and through which channel (manual, scheduled, MCP-approved)
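A per-platform compliance check of the kind described above can be sketched as a simple pre-transmission validator. The limits below are illustrative placeholders, not the real rules of any platform — those live in each platform's developer documentation and change over time:

```python
from typing import List, Optional

# Illustrative per-platform limits only; real values come from each
# destination platform's developer documentation.
PLATFORM_LIMITS = {
    "x":       {"max_chars": 280, "media": {"jpg", "png", "gif", "mp4"}},
    "bluesky": {"max_chars": 300, "media": {"jpg", "png"}},
}

def precheck(platform: str, text: str, media_ext: Optional[str] = None) -> List[str]:
    """Return the list of policy problems found; an empty list means
    the post may be handed to the destination platform's API."""
    limits = PLATFORM_LIMITS[platform]
    problems = []
    if len(text) > limits["max_chars"]:
        problems.append(f"text exceeds {limits['max_chars']} characters")
    if media_ext is not None and media_ext not in limits["media"]:
        problems.append(f"unsupported media format: {media_ext}")
    return problems
```

Running checks like this before transmission means a non-compliant post fails inside Socialync, where the creator can fix it, rather than being rejected (or silently truncated) by the destination platform.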

Abuse detection

  • Internal abuse signals on volume, velocity, and content patterns
  • Cooperation with connected platforms when their trust & safety teams flag abuse originating through Socialync
  • Manual review queue for reports submitted to safety@socialync.io

Working with connected platforms

Socialync only connects to social platforms through their official APIs and OAuth flows. We comply with the developer policies and content rules of every platform we integrate with.

If a platform's trust & safety team identifies abuse originating through Socialync, we cooperate. Specifically, in response to good-faith inquiries we can:

  • Confirm whether a given post was published through Socialync, and through which channel
  • Identify the responsible Socialync account
  • Suspend or terminate that account where warranted
  • Take down or unschedule still-pending posts queued by that account

Platform trust & safety teams: contact safety@socialync.io with inbound requests. We respond promptly, and we take repeated platform-policy violations seriously.

Working with law enforcement

We respond to valid legal process — subpoenas, court orders, search warrants, and government requests — in line with applicable law. For emergencies involving imminent risk to a person's safety, contact safety@socialync.io with "EMERGENCY" in the subject line and we will prioritize the request.

For matters involving child sexual abuse and exploitation (CSAE), we additionally report to the National Center for Missing & Exploited Children (NCMEC) via the CyberTipline.

Counter-notice

If your content or account has been actioned and you believe the action was incorrect, you may submit a counter-notice to safety@socialync.io. We review counter-notices in good faith and reverse actions where the original report was mistaken or unsupported.

Related documents