Acceptable Use Policy
Last Updated: April 30, 2026
This Acceptable Use Policy (AUP) is a plain-language summary of the content rules that apply to everything you create, schedule, or publish through Socialync. It sits alongside our Terms of Service, which is the binding legal version. If anything here conflicts with the Terms of Service, the Terms of Service control.
We wrote this page so that you, our partner platforms, and trust & safety teams can quickly understand what Socialync allows and what gets you removed.
The short version
You may use Socialync to:
- Schedule and publish content you have the rights to share
- Use AI-assisted features to help draft, edit, or repurpose your own content
- Manage multiple brands, profiles, and connected accounts that you control or are authorized to operate
You may not use Socialync to:
- Post AI-generated content without a human reviewing and approving it first
- Create or distribute non-consensual intimate imagery, including AI-generated imagery
- Make deepfakes of real people without their documented consent
- Impersonate another person, brand, or organization
- Run spam, scams, fraud, phishing, or coordinated inauthentic behavior
- Harm minors in any way, or sexualize minors in any form
1. AI-Directed Posts Require Human Approval
If you are posting or scheduling content yourself, no extra approval step applies. Manual posting and manual scheduling work the way they always have — write a post, schedule it, ship it.
If an AI is directing the post — generating the caption, drafting the copy, picking the image, or instructing Socialync to schedule or publish on your behalf — a human in your account must review and approve it before it goes anywhere. This applies whether the AI is trying to publish immediately or just trying to add posts to your schedule.
Socialync enforces this at the product layer: AI agents and MCP integrations create drafts. A draft is not a scheduled or published post — a human has to open it, review it, and approve it. You can see the approval queue at mcp-drafts-approval.
What this means in practice:
- Writing a post yourself and scheduling it for next Tuesday — allowed
- Bulk-scheduling a queue of posts you wrote yourself — allowed
- Connecting an AI agent that drafts posts and drops them in your approval queue — allowed
- Asking AI to draft 30 captions, then approving them in a batch after reviewing — allowed
- Connecting an AI agent that publishes or schedules directly without a human review step — not allowed
- Building a script or bot that auto-clicks "approve" on AI-generated drafts — not allowed
- Disabling, bypassing, or circumventing the drafts-approval step in any way — not allowed
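For integrators, the draft-only rule above can be sketched roughly as follows. This is an illustrative sketch only: the `SocialyncClient` class, its methods, and the reviewer values are hypothetical and do not represent Socialync's actual SDK or API.

```python
# Hypothetical sketch of the draft-only flow for AI integrations.
# SocialyncClient and its methods are illustrative, not a real Socialync SDK.

class SocialyncClient:
    def __init__(self):
        self.drafts = []     # posts awaiting human review
        self.scheduled = []  # posts a human has approved

    def create_draft(self, caption: str, source: str) -> dict:
        """AI agents and MCP integrations may only land content here,
        never directly on the schedule."""
        draft = {"caption": caption, "source": source, "approved": False}
        self.drafts.append(draft)
        return draft

    def approve(self, draft: dict, reviewer: str) -> None:
        """Approval must come from a human reviewer, never from the agent."""
        if reviewer == "ai-agent":
            raise PermissionError("AI-generated drafts require human approval")
        draft["approved"] = True
        self.drafts.remove(draft)
        self.scheduled.append(draft)

client = SocialyncClient()
draft = client.create_draft("Launch day!", source="ai-agent")
# The draft sits in the queue until a human reviews and approves it:
client.approve(draft, reviewer="jane@example.com")
```

The point of the sketch is the asymmetry: the agent-facing call can only produce an unapproved draft, and the approval path rejects the agent itself, which is why auto-approving bots and approval bypasses are listed as violations above.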
2. Synthetic Media and Deepfakes
You may not create, schedule, or publish through Socialync:
- Non-consensual intimate imagery (NCII) of any kind, including AI-generated, computer-generated, or synthetic depictions of real persons in intimate or sexual contexts. Zero tolerance.
- Deepfakes of real, identifiable people — AI-generated, manipulated, or synthetic media depicting real persons in a misleading, defamatory, or harmful way without that person's documented consent.
- False depictions of public figures in criminal, sexual, violent, or otherwise harmful contexts, including content presented as real that is not.
- Impersonation of any real person, brand, or entity without their authorization.
If you publish AI-generated or AI-modified content that depicts people, scenes, or events, you are responsible for disclosing that it is AI-generated wherever applicable law (e.g., the EU AI Act) or platform policy (e.g., TikTok, Meta, or YouTube labeling rules) requires disclosure.
3. Child Safety
Socialync has zero tolerance for child sexual abuse and exploitation (CSAE). We prohibit child sexual abuse material (CSAM), content that sexualizes minors in any form, grooming, and any content that endangers a minor's safety. We preserve evidence and report violations to the National Center for Missing & Exploited Children (NCMEC).
See our full Safety Standards page for details.
4. Harassment, Threats, and Hate
You may not use Socialync to publish content that:
- Incites violence against any individual or group
- Threatens, harasses, or stalks individuals
- Targets people based on race, ethnicity, religion, gender, sexual orientation, disability, or similar characteristics with hateful or dehumanizing content
- Doxes or otherwise shares private information about others without consent
5. Spam, Manipulation, and Inauthentic Behavior
You may not use Socialync to:
- Run spam campaigns, including high-volume duplicate or near-duplicate posting designed to manipulate platform algorithms
- Engage in coordinated inauthentic behavior across multiple accounts
- Operate fake accounts, sockpuppets, or bot networks
- Artificially inflate engagement, follower counts, or reach
- Run engagement pods, follow/unfollow schemes, or other manipulation tactics that violate connected-platform policies
6. Illegal and Regulated Content
You may not use Socialync to promote or distribute:
- Illegal goods or services
- Fraud, scams, phishing, or financial deception
- Content that infringes intellectual property rights, including copyrighted material, trademarks, or rights of publicity that you do not have permission to use
- Content that violates applicable privacy laws, including data scraped or collected without consent
- Content that is illegal in the jurisdiction of your audience or where it is published
7. Connected-Platform Policies Apply
Every post you publish through Socialync must also comply with the rules of the platform it lands on (Meta, Instagram, TikTok, YouTube, X, LinkedIn, Threads, Bluesky, etc.). Repeated platform-policy violations through Socialync can result in your Socialync account being suspended or terminated, even if the content does not otherwise violate this AUP.
8. Reporting and Enforcement
If you see content distributed through Socialync that violates this AUP, our Terms of Service, or the law, please report it:
Email: safety@socialync.io
Response SLA: we acknowledge reports promptly; valid reports of non-consensual intimate imagery are actioned within 48 hours.
Violations of this AUP may result in:
- Removal of the offending content
- Suspension of the offending account
- Permanent termination, with forfeiture of any prepaid subscription period, for severe or repeated violations
- Reporting to the affected platform's trust & safety team
- Reporting to law enforcement and relevant authorities (e.g., NCMEC for CSAE)
9. Changes to This Policy
We may update this AUP as platforms, laws, and abuse patterns evolve. Material changes will be reflected in the "Last Updated" date at the top of this page and, where required, communicated to you directly.
10. Related Documents
- Terms of Service — binding legal terms
- Trust & Safety — how we operate and respond to reports
- Safety Standards — child safety / CSAE policy
- AI Features & Disclosure — how AI features work and what data goes where
- Privacy Policy
