v0 · Experimental Standard

AI Collaboration Operating SOP v0

This document is the high-level operating standard for how Manager (chulbuji) uses AI not as a simple tool, but as a thinking partner and execution partner. It is not a detailed execution manual; it is a document that makes the current collaboration structure and operating philosophy visible.

SOP v0 is not a fixed rulebook

This document is the high-level standard for how Manager (chulbuji) collaborates with AI to plan, execute, record, and build assets from projects. It does not include internal execution manuals such as detailed prompts or automation scripts.

The v0 label does not mean "not yet complete." It means "this will keep being revised as we operate." As confirmed findings accumulate through experimentation, this document is updated alongside them.

AI collaboration operation is not a finished system. The experiment itself — checking whether this approach works — is the operation.

Who is responsible for what

AI collaboration operates efficiently when roles are clearly defined. When roles overlap or become ambiguous, judgment slows down and outputs lose consistency.

Manager (chulbuji)
Sets the direction and makes the final call. Which experiments to run, what to make public, which projects to focus on — these are the manager's decisions. AI supports this judgment but does not replace it.
Meta-chulbuji
Handles thought organization, structuring, specialist AI orchestration, output review, and asset-building oversight. Turns vague ideas into clear direction, and connects execution results to recorded assets.
Specialist AI
Supports domain-specific execution. Focused on generating concrete outputs — content writing, marketing copy, development code, research summaries, copywriting. Deployed and operated under Meta-chulbuji's direction.
Record System
Builds assets through Log, Insight, SOP, and Board. Records not only successful experiments but also trial-and-error and the reasoning behind decisions. The more records accumulate, the stronger the starting point for the next experiment.

From divergence to asset-building

AI collaboration operation follows a consistent flow — from the moment an idea emerges to the moment it becomes a recorded asset. The more familiar this flow becomes, the faster both experimentation and asset accumulation move.

  1. Diverge: Surface thoughts, problem awareness, and ideas without constraint. At this stage, volume matters more than correctness.
  2. Organize: Work with Meta-chulbuji to structure scattered thinking. Clearly define the core problem, goals, and constraints.
  3. Converge: Narrow down to one direction to experiment with. Decide what to execute first based on the judgment criteria.
  4. Execute: Deploy specialist AI to produce outputs. The manager maintains direction and reviews intermediate outputs.
  5. Review: Confirm whether results align with the original intent and whether the work can continue to the next stage. If there are problems, return to the Organize stage.
  6. Build Assets: Record in Log, Insight, SOP, and Board. Preserve not only successful results but also the reasoning process and trial-and-error.
  7. Next Experiment: Connect the recorded assets as the starting point for the next experiment. The quality of experiments improves with each repetition.

Which experiments to start first

Moving every idea into an experiment disperses focus. The following five criteria are used to judge experiment priority. Not all five need to be met; the manager makes the call.

  • Does it connect to existing assets?

    Can the experiment build on records already accumulated, projects already running, or structures already built? Experiments that leverage existing assets move faster than ones that start from scratch.

  • Can it be tested small?

    Can part of it be validated first without completing the whole? The potential for quick validation takes priority over the full picture.

  • Can AI collaboration accelerate execution?

    Is there a reasonable expectation that working with AI will produce faster or better results than working alone? Confirm whether AI can genuinely contribute in this area.

  • Is there a path to monetization?

    Is there some form of monetization path, such as direct revenue, increased site asset value, or funding for the next experiment? An experiment can still run without one, but its priority drops.

  • Does failure still leave an asset?

    Even if the experiment doesn't produce the intended result, does it leave something worth recording as a Log or Insight? Failures that aren't recorded tend to repeat.

What is made public, and what is not

chulbuji.com is a public site that records the process of experimentation. But not everything is made public. The criteria for disclosure and non-disclosure are as follows.

Public
  • Operating philosophy and direction
  • AI collaboration structure and role division
  • Generalizable insights
  • Asset-building criteria and recording approach
  • Project status and decision-making process
  • Experiment results (both success and failure)
Private
  • Raw prompt text
  • Detailed auto-trading strategy
  • Revenue figures and account information
  • Operating channel details
  • Detailed monetization strategy
  • Internal execution manuals

The boundary between public and private is not fixed. Depending on operating judgment, the disclosure scope of some content may change.

This document keeps changing

v0 is a starting point. As experiments accumulate, roles become clearer, and new patterns are confirmed, this document is updated alongside them.

Updates happen in two ways. Small revisions are applied directly to the current document. When there is a major directional shift, a new version (v1, etc.) is created and the previous version is preserved as a record.

This document itself is an output of AI collaboration operation. Judgments and structural changes that arise during the writing process are recorded separately as Log or Insight entries.