ai marketing os — build your own automated content system
most companies spend $3,000–$10,000 a month on content. they get inconsistent output, slow turnarounds, and copy that doesn't sound like them.
this is the alternative.
one research session per week. 13 pieces of content come out the other end — newsletter, linkedin posts, twitter threads, instagram carousels. all on-brand. all platform-native. posted automatically.
we built this for a client. then we open-sourced it.
this guide shows you exactly how to build it for your business — from scratch, using ai, in a weekend.
the before/after numbers come right after the system overview; the full build guide follows.
how the system works
three layers. each one does one job.
┌─────────────────────────────────────────────┐
│ layer 1: the brain │
│ your claude project │
│ 25 context files that know your brand │
│ better than most employees do │
└──────────────────┬──────────────────────────┘
│
│ you generate content here
│ (claude.ai — free or pro)
▼
┌─────────────────────────────────────────────┐
│ layer 2: content output │
│ newsletter · linkedin · twitter · ig │
│ one research session → 13 pieces │
└──────────────────┬──────────────────────────┘
│
│ you approve, then trigger
│
▼
┌─────────────────────────────────────────────┐
│ layer 3: the mcp server │
│ self-hosted publishing infrastructure │
│ linkedin · twitter · beehiiv · notion │
└─────────────────────────────────────────────┘

the intelligence lives in layer 1. the automation lives in layer 3. layer 2 is you — reviewing and approving before anything goes live.
the math — before vs after
if you only scan one section, make it this — typical content spend vs this system:
before:
- content agency: $2,000–5,000/month
- inconsistent output: 4-6 pieces/week if lucky
- avg turnaround: 3-5 days per piece
after:
- claude pro: $20/month (the free tier also works)
- mcp server hosting (railway): $5/month
- your time: 30-60 min/week (research + review)
- output: 13 pieces/week, always on-brand
the files you build this weekend are the asset that compounds. better data in → better content out. every week.
part 1: the brain — your claude project
a claude project is a persistent workspace where claude remembers context across every conversation. you give it files. it reads them before every message. it becomes an expert on your brand.
the system runs on 25 files organized into five tiers.
your-brand-marketing-os/
│
├── core/ ← tier 1: who you are (3 files)
│ ├── business_context.md
│ ├── icp_profile.md
│ └── brand_voice.md
│
├── platform-prompts/ ← tier 2: how to write (8 files)
│ ├── linkedin_thought_leader.md
│ ├── linkedin_value_post.md
│ ├── linkedin_simplified.md
│ ├── linkedin_lead_magnet.md
│ ├── twitter_thread.md
│ ├── twitter_personal.md
│ ├── newsletter.md
│ └── instagram_carousel.md
│
├── examples/ ← tier 3: what good looks like (8 files)
│ ├── linkedin_thought_leader_examples.md
│ ├── linkedin_value_examples.md
│ ├── linkedin_simplified_examples.md
│ ├── linkedin_lead_magnet_examples.md
│ ├── twitter_thread_examples.md
│ ├── twitter_personal_examples.md
│ ├── newsletter_examples.md
│ └── instagram_examples.md
│
├── boost/ ← tier 4: amplifiers (3 files)
│ ├── authority_framework.md
│ ├── psychology_framework.md
│ └── conversion_framework.md
│
└── system/ ← tier 5: operations (3 files)
├── content_waterfall.md
├── sources.md
└── README.md

build each file once. update it as your brand evolves. claude reads all 25 before every content request.
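if you want to keep the files organized locally before uploading them to your claude project, a small node script can scaffold the tree. the folder and file names below mirror the structure above; the script itself is just a convenience sketch, not part of the system.

```typescript
// scaffold.ts — creates the local skeleton for the 25 context files.
// run once, then fill each file using the prompts in this guide.
import * as fs from "node:fs";
import * as path from "node:path";

const tiers: Record<string, string[]> = {
  core: ["business_context.md", "icp_profile.md", "brand_voice.md"],
  "platform-prompts": [
    "linkedin_thought_leader.md", "linkedin_value_post.md",
    "linkedin_simplified.md", "linkedin_lead_magnet.md",
    "twitter_thread.md", "twitter_personal.md",
    "newsletter.md", "instagram_carousel.md",
  ],
  examples: [
    "linkedin_thought_leader_examples.md", "linkedin_value_examples.md",
    "linkedin_simplified_examples.md", "linkedin_lead_magnet_examples.md",
    "twitter_thread_examples.md", "twitter_personal_examples.md",
    "newsletter_examples.md", "instagram_examples.md",
  ],
  boost: ["authority_framework.md", "psychology_framework.md", "conversion_framework.md"],
  system: ["content_waterfall.md", "sources.md", "README.md"],
};

function scaffold(root: string): string[] {
  const created: string[] = [];
  for (const [tier, files] of Object.entries(tiers)) {
    const dir = path.join(root, tier);
    fs.mkdirSync(dir, { recursive: true });
    for (const file of files) {
      const p = path.join(dir, file);
      // don't clobber files you've already written
      if (!fs.existsSync(p)) fs.writeFileSync(p, `# ${file}\n`);
      created.push(p);
    }
  }
  return created;
}
```

run it with `npx ts-node scaffold.ts` (or compile first); you get 25 placeholder files to fill in over the weekend.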
building the files — exact prompts
every file below has an exact prompt you paste into claude (or any ai) to generate the first draft. answer the questions the ai asks. refine the output until it's accurate.
tier 1: core context
these three files are the foundation. build them first. everything else references them.
`business_context.md`
what it is: your business in precise terms. model, positioning, distribution, goals, differentiation. not a pitch — the plain description a new team member would need to understand the company.
prompt to build it:
I need to create a business context profile for my company
that will be used as a permanent context file for an AI
content system. This file will be loaded before every
content generation request.
Interview me to build this. Ask me one question at a time.
Cover:
- Company name, website, industry, business model
- Core offer and what makes it different (specific, not vague)
- Target markets (who buys, from where, at what stage)
- Distribution channels (how you reach customers)
- Revenue streams (current and planned)
- Geographic focus
- Short-term goals (next 6 months)
- Long-term vision (2-3 years)
- Competitive positioning (who you compete with and why you win)
- Key metrics you care about
After the interview, synthesize my answers into a structured
markdown document called business_context.md. Use plain
language. Be specific — include names, numbers, and real
positioning language where I've given it. No generic filler.

`icp_profile.md`
what it is: a deep psychological profile of your ideal customer. not demographics — beliefs, fears, motivations, language. this is what makes your content feel like it was written for one specific person.
prompt to build it:
I need to create an ideal customer profile (ICP) document
for my AI content system. This is not a demographic sheet —
it's a psychological profile. Claude needs to understand
my audience at a deep level to write content that resonates.
Ask me these questions one at a time:
1. Who is your primary customer? (role, company type, stage)
2. What does a typical day look like for them?
3. What problem brings them to you specifically?
4. What have they already tried that didn't work?
5. What does success look like for them after working with you?
6. What do they secretly fear? (not just about work — about
their identity and status)
7. What do they read, watch, and follow?
8. What words do they use to describe their own problem?
(use their exact language, not your words for it)
9. What objections do they have before buying?
10. What makes them trust or distrust a vendor like you?
11. Do you have a secondary audience? Describe them.
After the interview, build a structured icp_profile.md that
includes: psychographic DNA, private fears, decision triggers,
trust signals, trust destroyers, content consumption patterns,
and the language they use. Write it so that an AI reading
this file will know exactly who it's writing for.

`brand_voice.md`
what it is: how your brand sounds. not adjectives ("friendly, professional") — actual rules. what you say, what you never say, how you format, what your tone is across different contexts.
prompt to build it:
I need to create a brand voice profile for my AI content
system. This file will be the rulebook that governs every
piece of content the AI produces.
Ask me:
1. Share 3 pieces of content you've written or approved that
sound most like your brand.
2. Share 1-2 examples of content that sounds WRONG for your brand.
3. How would you describe your voice to a new writer in 3 words?
4. What topics or tones are off-limits?
5. How do you handle jargon — do you use industry terms freely,
explain them, or avoid them?
6. Do you use humor? What kind?
7. How do you want readers to feel after reading your content?
8. What does your brand believe that others in your space don't?
From my answers, build a brand_voice.md that includes:
- Voice identity (3 archetypes my brand combines)
- Tone spectrum (what I am vs what I'm not, in a comparison table)
- Voice pillars (5-6 named principles with examples)
- Language rules (specific words/phrases to use and avoid)
- Formatting philosophy for each platform
- The voice test (3-5 questions to check any piece of content)
Include actual example sentences — not just descriptions.

tier 2: platform cognitive architectures
these files teach claude how to write for each platform. each one is a cognitive architecture — a set of rules, constraints, and a specific identity for that content type.
build these after the core files. they reference the core files.
`linkedin_thought_leader.md`
what it is: the rules for authority-building linkedin posts. data-driven, opinionated, no fluff openers, ends with conviction.
prompt to build it:
I need to create a LinkedIn thought leader post prompt file
for my AI content system.
This file defines a cognitive architecture — an identity and
rule set that Claude adopts when writing thought leadership
posts for LinkedIn.
Build a detailed prompt file that covers:
IDENTITY: What role does Claude adopt? (e.g., "Finance Authority
Voice" or "B2B Sales Expert Who Has Seen It All")
Adapt to my business: [DESCRIBE YOUR BUSINESS AND EXPERTISE]
RULES:
- Opening: must start with a bold, specific claim or data point.
Never "I've been thinking about X lately."
- Word count: 150-280 words
- Data requirement: minimum 2 specific numbers per post
- Must have a clear point of view — no both-sidesing
- Ends with conviction — a statement, not a question
- Paragraphs: 1-3 lines max, generous line breaks
STRUCTURAL PATTERN (document this):
Line 1: Bold hook
Lines 2-3: Context — why this matters now
Paragraph 2: The deeper pattern or mechanism
Paragraph 3: The non-obvious implication
Paragraph 4 (optional): How this lands for [MY AUDIENCE]
Final line: Sharp conclusion
BANNED OPENERS: (list 8-10 openers never to use)
BANNED CLOSERS: (list 5-6 closers never to use)
QUALITY GATES: 5 questions to check before outputting
Include examples of good and bad posts for my industry.
Write this as a full system prompt / cognitive architecture
file, not a checklist.

`linkedin_value_post.md`
what it is: educational framework posts. frameworks, mental models, step-by-step explanations. the post a reader saves and references later.
prompt to build it:
Build a LinkedIn value post cognitive architecture for my
AI content system.
My business: [DESCRIBE YOUR BUSINESS]
My audience: [DESCRIBE WHO READS YOUR CONTENT]
This file defines how Claude writes educational, framework-based
LinkedIn posts — the kind readers save and share.
Cover:
IDENTITY: "Finance Intelligence Educator" or similar — adapt to
my domain. Claude adopts this identity fully.
MANDATORY RULES:
- Word count: 180-300 words
- Must include a numbered or bulleted framework (3-7 items)
- Each framework item needs: title + explanation + example
- Must include 1-2 specific data points to anchor the framework
- Must end with a practical "here's how to use this" close
- Hashtags: 4-5 always including my brand hashtag
CONTENT CATEGORIES for this post type:
(list 5 categories relevant to my domain)
- Framework posts: "here's how [experts] think about X"
- Pattern recognition: "every time X happens, Y follows"
- Concept demystification: "most people think X means Y"
- Decision framework: "here's how I evaluate X"
- Historical parallel: "the last time we saw X..."
LIST FORMATTING RULES:
- When to use numbered vs bulleted lists
- How many items per list
- How to write each list item
QUALITY GATES: 5 questions to check before outputting
Include 2 example posts in my domain with structure notes.

`linkedin_simplified.md`
what it is: complex ideas made simple. analogies, plain language, "here's what this actually means" energy. for the smart non-expert.
prompt to build it:
Build a LinkedIn simplified/explainer post cognitive architecture
for my AI content system.
My business: [DESCRIBE YOUR BUSINESS]
My content topics: [LIST YOUR MAIN TOPICS]
This file governs how Claude writes posts that take complex
concepts and make them instantly understandable to smart people
who don't have deep expertise in my field.
IDENTITY: "The Translator" — someone who can explain the hardest
concept in the time it takes to drink a coffee.
RULES:
- Word count: 120-200 words
- MUST include at least one everyday analogy ("think of it like...")
- MUST define any jargon in the same sentence it's introduced
- MUST end with a practical "so what" for a non-expert
- Heavy use of line breaks, short sentences
- Never condescending — assume intelligence, not knowledge
STRUCTURAL PATTERN:
[Opener: relatable situation or surprising contrast]
[The jargon term + plain English definition in one sentence]
[The analogy: "think of it like..." — max 3 sentences]
[Why it matters right now — current event connection]
[The "so what" — practical takeaway for non-expert]
ANALOGY RULES:
- Must be physical and tangible
- Must be universal (no specialized knowledge needed)
- Must be directionally accurate
- Max 2 sentences
FORBIDDEN PATTERNS: (list what never to do)
Include 3 example posts with structure notes.

`linkedin_lead_magnet.md`
what it is: value-first conversion posts. delivers real value, ends with a specific cta. not an ad — content that earns the ask.
prompt to build it:
Build a LinkedIn lead magnet post cognitive architecture for
my AI content system.
My offer: [WHAT YOU'RE DRIVING PEOPLE TO — newsletter, free
resource, consultation, etc.]
My business: [DESCRIBE YOUR BUSINESS]
IDENTITY: "Conversion-Obsessed Content Strategist" — gives away
80% of value freely, earns the CTA.
MANDATORY RULES:
- Word count: 150-250 words
- Post MUST stand alone as useful content — value even without CTA
- CTA only at the very end — never lead with the ask
- CTA must be specific: tell them exactly what they get
- Social proof element required (subscriber count, outcomes, etc.)
- One CTA per post — never two asks
- Hashtags: 4-5
STRUCTURAL PATTERN:
[Hook: specific promise or surprising insight]
[Value delivery: the free useful content]
[Proof/credibility signal]
[Bridge: natural transition to CTA]
[CTA: specific, benefit-driven, low-friction]
CTA DESIGN RULES:
- Specific benefit ("5-minute weekly brief" not "great content")
- Low friction ("free" + "link in bio")
- Resonant with what just appeared in the post
CONTENT TYPES:
- Preview + subscribe (taste of your regular content)
- Framework + resource (deliver framework, offer full version)
- Problem → solution → subscribe
- Curiosity gap + CTA
QUALITY GATES: 5 questions before outputting

`twitter_thread.md`
what it is: long-form value on twitter. each tweet one complete thought. last tweet always a cta. built for shares and follows.
prompt to build it:
Build a Twitter/X thread cognitive architecture for my AI
content system.
My business: [DESCRIBE YOUR BUSINESS]
My Twitter handle: [YOUR HANDLE]
IDENTITY: "Twitter Conversion Machine" — compresses maximum value
into minimum space.
MANDATORY RULES:
- Thread length: 4-8 tweets
- Tweet 1: bold claim, specific number, or counterintuitive statement
(under 200 chars for the hook)
- Each tweet: one idea only — never two ideas in one tweet
- Data in every 2nd tweet minimum
- Final tweet: CTA — specific, single ask, "free" + link
- Numbering: 1/ 2/ 3/ etc.
- Hashtags: max 2, on final tweet only
TWEET 1 HOOK TEMPLATES (list 4-5 patterns):
- "[Specific number] just happened. Here's why it matters more
than the headline is saying:"
- "Everyone says [X]. The real story is [Y]. A thread:"
- "Every time [X] has happened, [Y] followed within [timeframe].
We're there. Thread:"
BRIDGE TWEET PATTERN (for mid-thread synthesis):
"So we have: X + Y + Z.
What does this combination tell us?"
CTA TWEET RULES:
- Specific benefit
- "Free" always if it's free
- Single action
- Under 40 words
QUALITY GATES: 6 questions before outputting

`twitter_personal.md`
what it is: personal brand tweets. builder observations, contrarian takes, learning in public. the workshop, not the showroom.
prompt to build it:
Build a Twitter personal brand cognitive architecture for my
AI content system.
My personal brand angle: [ARE YOU A FOUNDER? EXPERT? BUILDER?
WHAT'S YOUR STORY?]
My business: [DESCRIBE YOUR BUSINESS]
IDENTITY: "The Builder Who Thinks Out Loud" — shares real process,
unfiltered observations, frameworks in progress.
TWEET CATEGORIES:
1. Builder observations (behind the scenes, data from your own work)
2. Contrarian takes (non-obvious reads, show your reasoning)
3. Learning in public (frameworks in progress, books you're reading)
4. Quick commentary (fast reactions to events, 1 data + 1 read)
5. Content/platform meta (observations about how your space works)
VOICE RULES FOR PERSONAL TWEETS:
- First person "I" is fine
- Fragments are fine ("No. That's not the story.")
- Mild irony is fine
- Work in progress framing is fine
- But: every claim still needs reasoning or data behind it
STILL AVOID:
- Generic motivation content
- Engagement bait without substance
- Vague claims without reasoning
- Thread padding
FORMAT OPTIONS:
- Single tweet under 200 chars (quick takes)
- Mini-thread 2-4 tweets (observations)
- Longer thread 5-8 tweets (deep analysis)
QUALITY GATES: 5 questions before outputting

`newsletter.md`
what it is: the full weekly newsletter. this is the longest prompt file. it defines every section, every rule, quality gates, voice.
prompt to build it:
Build a complete weekly newsletter cognitive architecture for my
AI content system.
My newsletter: [NAME, FREQUENCY, AUDIENCE]
My topics: [WHAT YOUR NEWSLETTER COVERS]
My unique angle: [WHAT MAKES YOUR NEWSLETTER DIFFERENT]
This file needs to be comprehensive — it defines every section,
every rule, and the editorial voice for the newsletter.
Build the file with:
IDENTITY: The editorial persona (e.g., "sharp analyst-journalist
hybrid" — adapt to my domain)
NEWSLETTER STRUCTURE (non-negotiable):
- Subject line rules (with examples and anti-examples)
- Preview text rules
- Section 1: Quick Take (theme of the week, not events)
- Section 2: Top Story (with data requirements)
- Section 3: Key Themes (3-4 themes, format per theme)
- Section 4: Numbers Table (what data to track for my industry)
- Section 5: [MY AUDIENCE] Lens (how global/industry events
affect my specific reader)
- Section 6: Watch Next Week (forward-looking, 3 items)
- Section 7: [MY BRAND] Insight (original synthesis — the most
important section)
PER-SECTION RULES: For each section above, include:
- What it is
- What it is NOT
- Format rules
- Data requirements
- Example of good and bad
QUALITY GATES: 10 questions to check before outputting
VOICE CALIBRATION: How the newsletter voice differs from LinkedIn

`instagram_carousel.md`
what it is: visual content system. slide-by-slide templates, color system, typography, export specs. full visual identity.
prompt to build it:
Build an Instagram carousel post system for my AI content system.
My brand visual identity: [DESCRIBE YOUR COLORS, FONTS, AESTHETIC]
My Instagram handle: [@HANDLE]
My website: [URL]
This file governs how Claude generates HTML-based Instagram
carousel posts (exported as 1080×1350 PNG slides).
Include:
BRAND IDENTITY SECTION:
- Exact color tokens (background, text, accent, borders)
- Typography (display font for headings, body font for text)
- Visual personality (2-3 words that describe the aesthetic)
- What the brand visually is NOT
SLIDE ARCHITECTURE:
- Aspect ratio: 4:5 (1080×1350px)
- HTML preview width: 420px
- Background pattern rule: alternate light/dark slides
- Required elements on every slide (progress bar, swipe arrow)
SLIDE-BY-SLIDE TEMPLATES:
For a 7-slide carousel, define:
- Slide 1: Hero/hook (gradient background, wordmark, headline)
- Slide 2: Top story or main point (dark background)
- Slide 3: Theme 1 (light background)
- Slide 4: Theme 2 (dark background)
- Slide 5: Theme 3 or data (light background)
- Slide 6: [MY AUDIENCE] angle (dark background)
- Slide 7: CTA (gradient, brand colors, website URL)
REUSABLE COMPONENTS: (HTML snippets for each)
- Wordmark
- Stat pills
- Data rows
- Mechanism chains
- CTA button
EXPORT PROTOCOL: Playwright script for 1080px PNG export
CONTENT QUALITY GATES: 12 questions before generating

tier 3: examples files
examples files show claude what good looks like. they're 3-5 real examples of each content type with structure notes.
the fastest way to build these: write or find 3-5 of your best-performing posts for each content type. paste them in. add notes on what works.
prompt for each examples file:
I'm building an examples file for my AI content system.
This file will show Claude the exact formatting patterns,
rhythm, and structure of high-quality [CONTENT TYPE] posts
for my brand.
Here are [3-5] examples of [CONTENT TYPE] posts that represent
my brand at its best:
[PASTE YOUR EXAMPLES]
For each example, add structure notes that explain:
- Why the opening works
- How the data is placed
- The rhythm and line break pattern
- What makes the CTA or closing land
- The formatting decisions (bullets, spacing, length)
Format this as a clean markdown file called
[content_type]_examples.md that Claude can read and pattern-match from.

if you don't have existing content yet, use this alternate prompt:
I'm building examples for [CONTENT TYPE] posts in my industry.
My business: [DESCRIBE]
My audience: [DESCRIBE]
My topics: [LIST]
Generate 3 high-quality example posts that represent the ideal
output for this content type. Make them specific to my industry
with real-seeming data points.
Then add structure notes under each example explaining the
formatting and strategic decisions made.
Format as [content_type]_examples.md

tier 4: boost frameworks
these three files are amplifiers. they're not active by default — you invoke them by asking claude to apply them to existing content.
`authority_framework.md`
amplifies authority signals in any content piece.
prompt to build it:
Build an authority positioning framework for my AI content system.
My domain: [YOUR INDUSTRY/EXPERTISE]
My credentials and experience: [WHAT MAKES YOU CREDIBLE]
This file is applied on top of existing content to strengthen
authority signals. Cover:
THE 5 PILLARS OF AUTHORITY IN MY DOMAIN:
1. The non-obvious insight (saying something accurate others miss)
2. The named framework (your systematic methodology)
3. The historical parallel (pattern recognition across time)
4. The intellectual courage signal (clear conclusions, not hedging)
5. The [MY AUDIENCE] expertise signal (deep domain knowledge)
For each pillar:
- What it is
- How to apply it to content
- 4-5 specific language patterns to use
AUTHORITY LANGUAGE AUDIT TABLE:
Weak phrase → Strong replacement (10-15 pairs)
THE AUTHORITY TEST: 6 questions to check boosted content
Name 5 frameworks specific to my domain that I could develop
and reference as "my methodology."
Format as a complete boost layer file.

`psychology_framework.md`
adds psychological triggers to content. ethical amplification — makes real information land harder.
prompt to build it:
Build a psychological trigger framework for my AI content system.
My audience: [DESCRIBE]
My content goal: [AWARENESS / CONVERSION / TRUST-BUILDING]
This file amplifies the psychological impact of existing content.
Cover these 7 triggers:
1. Pattern interrupt (surprise that forces attention)
2. Loss aversion (framing around what they're missing)
3. Cognitive fluency (specificity that makes things feel true)
4. Social proof cascading (what others are doing)
5. Authority via specificity (detail signals expertise)
6. Curiosity gap engineering (incomplete patterns demand completion)
7. Identity affirmation (content that reinforces who they are)
For each trigger:
- What it is in plain language
- Before/after example in my domain
- 4-5 activation phrases
- When to use it and when not to
APPLICATION PROTOCOL: 6-step process for applying this to content
ETHICAL BOUNDARY: What this framework never does
Format as a complete boost layer file.

`conversion_framework.md`
strengthens ctas and conversion moments in any content.
prompt to build it:
Build a conversion psychology framework for my AI content system.
My offer: [WHAT YOU'RE CONVERTING PEOPLE TO]
My funnel stage: [AWARENESS / CONSIDERATION / DECISION]
This file is applied to content to strengthen conversion
moments. Cover:
CTA DESIGN PRINCIPLES:
- Specificity over vagueness
- Friction reduction techniques
- Urgency that's real (not manufactured)
- The benefit-first CTA structure
CONVERSION LANGUAGE PATTERNS:
- 8-10 high-converting phrases for my offer
- What to never say
THE RECIPROCITY PRINCIPLE: How much to give before asking
COMMITMENT AND CONSISTENCY: How to sequence content toward conversion
THE QUALITY TEST: 5 questions to check any CTA
Format as a complete boost layer file.

tier 5: system files
`content_waterfall.md`
maps one research session to every piece of content.
prompt to build it:
Build a content waterfall system file for my AI content system.
My platforms: [LIST ALL PLATFORMS YOU PUBLISH TO]
My content cadence: [HOW OFTEN YOU PUBLISH]
My core content types: [LIST ALL TYPES]
This file defines how one research session becomes a full week
of content across all platforms. No copy-pasting — each piece
is native to its platform.
Build:
THE WATERFALL ARCHITECTURE:
Tier 1 → Tier 2 → Tier 3 → Tier 4 mapping
(research → newsletter → platform posts → repurposing)
WATERFALL MAPPING TABLE:
For each source content type, map to each platform output
REFORMATTING RULES:
- What can be repurposed
- What cannot be copy-pasted
- How to adapt the same insight for each platform
WEEKLY CONTENT CALENDAR:
Day-by-day schedule of what gets published where
THE ONE-WEEK CONTENT PACKAGE:
Input (research time) → Output (number of pieces)
Format as a complete system operations file.

`sources.md`
defines your source hierarchy and research protocol.
prompt to build it:
Build a sources and research protocol file for my AI content system.
My industry: [YOUR INDUSTRY]
My topics: [WHAT YOU COVER]
This file defines what sources Claude can cite, how to
evaluate data quality, and research protocols.
Build:
SOURCE TIER HIERARCHY:
- Tier 1: Primary sources (cite directly, trust fully)
[List 10-15 authoritative sources in your space]
- Tier 2: Secondary sources (use for context, verify with Tier 1)
- Tier 3: Context only (never cite as primary)
KEY DATA POINTS TO TRACK:
For each of my main topics, what are the 5-7 specific metrics
or data points I want to monitor and include in content?
DATA QUALITY PROTOCOL:
Before publishing any number, check:
1. Source tier?
2. Recency?
3. Units correct?
4. Context provided?
5. Direction of change included?
PHRASES FOR DATA UNCERTAINTY:
When to say "reportedly" vs "according to" vs "approximately"
ABSOLUTE PROHIBITIONS: What never to do with data
Format as a complete research standards file.

`README.md`
the master file. tells claude which files to use for which requests.
prompt to build it:
I have built a complete AI content system with 24 files organized
into 5 tiers (this README will be the 25th). Build a README.md
that serves as the master
activation guide for Claude.
Here are all my files:
[LIST ALL YOUR FILES WITH ONE-LINE DESCRIPTIONS]
Build a README that covers:
WHAT THIS SYSTEM IS: 2-3 sentences
FILE DIRECTORY: Complete table of all files with purpose
ACTIVATION RULES: For each content type, exactly which files
Claude should use:
- [Content type] → PROMPT: [file] + EXAMPLES: [file] + CONTEXT: [files]
THE CONTENT INTELLIGENCE WORKFLOW: Step-by-step
BOOST FRAMEWORK ACTIVATION: Which phrases trigger which boost files
CONTENT QUALITY NON-NEGOTIABLES: Rules that apply regardless of
which files are active
FILE MAINTENANCE: When to update which files as your brand evolves

part 2: generating content
once your claude project has all 25 files, content generation is just asking the right question.
how to generate each content type:
"LinkedIn thought leader post about [topic]"
→ Claude uses: linkedin_thought_leader.md + examples
→ Context: business_context.md + icp_profile.md + brand_voice.md
"Twitter thread about [topic]"
→ Claude uses: twitter_thread.md + examples
→ Context: business_context.md + icp_profile.md + brand_voice.md
"Write the weekly newsletter"
→ Claude uses: newsletter.md + sources.md + content_waterfall.md
→ Context: all core files
"LinkedIn post about [topic] — boost authority"
→ Base files + authority_framework.md applied after
"Create the full weekly content package"
→ Claude sequences: newsletter → 4 linkedin posts → twitter content
→ Uses content_waterfall.md to map everything out

the output is ready to review and approve. nothing posts until you say so.
part 3: the mcp server — auto-publishing
once you've approved your content in claude, you trigger the mcp server to publish it.
the mcp server is a typescript/node.js server that exposes publishing tools to claude over http + sse (server-sent events). it stores research logs and content drafts in mongodb, and syncs analytics reports to notion.
github repo: https://github.com/TheBuilderCompany/AI-Marketing-OS-MCP
fork it. it's yours. no licensing fees, no subscriptions.
what the server does
the server exposes four mcp tools that drive the full workflow:
| tool | what it does |
|---|---|
| `execute_research` | pulls headlines from curated rss feeds (reuters, bloomberg, ft), stores a research log, returns a summary for claude to work from |
| `save_draft_content` | saves newsletter, linkedin variants, x thread, and instagram copy as a content draft in mongodb — status: `draft` |
| `publish_approved_content` | loads a draft by id and publishes to x, linkedin, and beehiiv. add your api keys to activate. |
| `sync_analytics_to_notion` | aggregates metrics, stores analytics entries, creates a report page in notion via the official sdk |
claude calls these tools the same way it calls any other function. you tell it "research this week's news" and it calls `execute_research`. you tell it "save these drafts" and it calls `save_draft_content`. you tell it "publish the approved linkedin post" and it calls `publish_approved_content`.
data lives in mongodb. every research session, every draft, every analytics sync is stored and retrievable.
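to make the draft lifecycle concrete, here is a sketch of the shape a draft takes in mongodb and the status transitions the tools walk it through. the field names are illustrative assumptions; the repo's models define the real schema.

```typescript
// Illustrative shape of a content draft as stored by save_draft_content.
// Field names here are assumptions — check the repo's models for the real schema.
type DraftStatus = "draft" | "approved" | "published";

interface ContentDraft {
  _id: string;
  week: string;               // e.g. "2025-W07"
  newsletter: string;
  linkedinVariants: string[]; // thought leader, value, simplified, lead magnet
  twitterThread: string[];
  instagramCopy: string;
  status: DraftStatus;
  createdAt: Date;
}

// The only legal transitions: draft → approved → published.
// publish_approved_content should refuse anything that skips approval.
function nextStatus(current: DraftStatus, action: "approve" | "publish"): DraftStatus {
  if (current === "draft" && action === "approve") return "approved";
  if (current === "approved" && action === "publish") return "published";
  throw new Error(`cannot ${action} a ${current} draft`);
}
```

the point of the state machine is the guarantee in the table above: nothing moves from `draft` to the public internet without an explicit approval step in between.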
setup: step by step
step 1: clone and install
git clone https://github.com/TheBuilderCompany/AI-Marketing-OS-MCP.git Marketing_MCP
cd Marketing_MCP
npm install

step 2: configure your environment
cp .env.example .env

open `.env` and fill in your values:
# mongodb — required. local or atlas.
MONGODB_URI=mongodb://localhost:27017/marketing-os
# or: mongodb+srv://user:pass@cluster.mongodb.net/marketing-os
# mcp auth — set this in production so the server isn't open
MCP_AUTH_SECRET=your_secret_here
# linkedin
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
# twitter / x
TWITTER_API_KEY=your_api_key
TWITTER_API_SECRET=your_api_secret
TWITTER_ACCESS_TOKEN=your_access_token
TWITTER_ACCESS_SECRET=your_access_secret
# beehiiv (newsletter)
BEEHIIV_API_KEY=your_key
BEEHIIV_PUBLICATION_ID=your_pub_id
# notion (analytics sync)
NOTION_API_KEY=your_key
NOTION_DATABASE_ID=your_database_id

`MONGODB_URI` is the only key required to start. add the rest as you enable each platform.
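a startup check like the sketch below is a reasonable guard: it treats `MONGODB_URI` as fatal and the platform keys as feature flags, so a missing linkedin token disables linkedin publishing instead of crashing the server. this is an illustrative pattern, not code from the repo.

```typescript
// env-check.ts — validate environment before the server starts.
// MONGODB_URI is the only hard requirement; the rest gate individual platforms.
const REQUIRED = ["MONGODB_URI"];
const OPTIONAL: Record<string, string> = {
  LINKEDIN_ACCESS_TOKEN: "LinkedIn publishing",
  TWITTER_API_KEY: "X/Twitter publishing",
  BEEHIIV_API_KEY: "newsletter publishing",
  NOTION_API_KEY: "analytics sync",
};

function checkEnv(env: Record<string, string | undefined>): string[] {
  for (const key of REQUIRED) {
    // fail fast: without a database there is nothing to run
    if (!env[key]) throw new Error(`${key} is required — set it in .env`);
  }
  const warnings: string[] = [];
  for (const [key, feature] of Object.entries(OPTIONAL)) {
    if (!env[key]) warnings.push(`${key} missing — ${feature} disabled`);
  }
  return warnings;
}

// at startup: console.warn(checkEnv(process.env).join("\n"));
```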
step 3: get your api keys
- mongodb: for local, install mongodb community and use `mongodb://localhost:27017/marketing-os` — for cloud, create a free cluster at mongodb.com/atlas and copy the connection string
- linkedin: go to linkedin.com/developers → create an app → request "share on linkedin" and "sign in with linkedin" permissions → generate an access token via oauth 2.0 — note: linkedin requires a company page for org posting
- twitter / x: go to developer.twitter.com → create a project + app → enable "read and write" permissions → generate access token and secret from the developer portal — free tier supports posting
- beehiiv (newsletter): go to beehiiv.com → settings → integrations → api → generate an api key → find your publication id in the url when logged in
- notion (analytics): go to notion.so/my-integrations → create integration → copy the api key → share your analytics database with the integration → copy the database id from the database url
step 4: build and run
# build typescript
npm run build
# start the server
npm start

for development with live reload:
npm run dev

server runs on `http://localhost:3000` by default. health check: `GET /health`
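before connecting claude, it's worth probing the health endpoint programmatically. a minimal sketch, assuming node 18+ (for built-in `fetch`) and that `GET /health` returns 200 when the server is up:

```typescript
// health-probe.ts — returns true only if GET /health answers with 2xx.
// Assumes the server exposes /health as described above.
async function isHealthy(baseUrl: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/health`);
    return res.ok; // true for any 2xx status
  } catch {
    return false; // server down, DNS failure, connection refused, etc.
  }
}

// usage: isHealthy("http://localhost:3000").then(ok => console.log(ok ? "up" : "down"));
```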
for production, deploy to railway or render:
# railway
railway login
railway init
railway up
# set your .env variables in the railway dashboard
# render
# connect your github repo in the render dashboard
# set environment variables in render → environment
# build command: npm run build
# start command: npm start

full production deployment guide is in `DEPLOYMENT.md` in the repo.
step 5: connect to claude
in claude.ai → settings → integrations → add mcp server:
server url: http://localhost:3000 (local)
or
server url: https://your-app.railway.app (production)

if you set `MCP_AUTH_SECRET`, claude will need to send it as a bearer token. add it in the integration settings when prompted.
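the auth handshake is just a bearer token compare. a sketch of the kind of check the server performs (the repo's actual middleware may differ):

```typescript
// Sketch of an MCP_AUTH_SECRET bearer check — illustrative, not the repo's middleware.
function isAuthorized(authHeader: string | undefined, secret: string): boolean {
  // expects "Authorization: Bearer <MCP_AUTH_SECRET>"
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  return authHeader.slice("Bearer ".length) === secret;
}
```

if requests start failing with 401/403 after deployment, this comparison is the first thing to check: the token in claude's integration settings must match `MCP_AUTH_SECRET` exactly.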
claude now has access to all four publishing tools.
using the mcp server in claude
once connected, you work with it conversationally:
"research this week's news and give me a brief"
→ claude calls execute_research
→ pulls headlines from reuters, bloomberg, ft rss feeds
→ returns a research summary you can build content from
"save this week's content as a draft"
→ claude calls save_draft_content
→ newsletter + all linkedin variants + twitter thread
→ saved to mongodb with status: draft
"publish the approved linkedin post from draft id [X]"
→ claude calls publish_approved_content
→ loads the draft, posts to linkedin and x
→ returns confirmation
"sync this week's analytics to notion"
→ claude calls sync_analytics_to_notion
→ aggregates metrics, creates a notion report page
→ your content performance is logged automatically

the server handles the api calls. claude handles the content. you handle the approval. nothing goes live without your say-so.
what one week looks like
monday 8am — trigger fires (manual or n8n scheduler)
→ claude calls execute_research
→ headlines pulled from reuters, bloomberg, ft
→ research brief returned
monday 9am — you review the brief (5 minutes)
→ claude generates all 13 content pieces
→ claude calls save_draft_content
→ everything saved to mongodb as drafts
monday 10am — you review drafts in notion or direct in claude
→ approve, edit, or reject each piece
content posts on approved schedule:
sunday — newsletter (beehiiv via publish_approved_content)
monday — linkedin thought leader + twitter thread
wednesday — linkedin value post
thursday — linkedin simplified
friday — linkedin lead magnet
weekly — analytics sync via sync_analytics_to_notion

one research session. one review. one week of content.
what's next
this system is the foundation. once it's running, you can extend it:
- add n8n for full automation (research fires automatically, drafts land in slack, you approve with one click)
- add perplexity api for automated research synthesis
- add analytics feedback loop (what performed → influences next week's content)
- add more platforms (youtube scripts, podcast notes, email sequences)
the architecture supports all of it. the 25 files are the brain. everything else is plumbing.
built by the builder company
https://thebuildercompany.in
if you want us to build your version of this system — the 25 files, the mcp server configured for your stack, and the n8n workflow wired up — reach out: https://thebuildercompany.in/contact
we built this for a client. we can build yours.