This document is a small experiment in defining command-line options once and
projecting them into several surfaces. The point is not to replace clap
today. The point is to test a better source of truth.
The pressure is straightforward:

- the current CLI parser is hand-authored Rust
- the CLI reference is hand-authored prose
- future Python tooling may want an `argparse` projection too
- stale help text, defaults, examples, and rationale are exactly the kind of drift that literate source should reduce
This is a better fit for X-macro style reuse than classic GoF-style pattern generation. A command-line option is one concept with multiple projections: syntax, parser semantics, examples, rationale, and machine-readable facts.
That said, the cost is real and easy to underestimate. A projection system does not remove complexity; it moves part of it into the source-of-truth layer. That can create:
- cognitive overhead for readers
- unfamiliarity for contributors
- tighter coupling between concerns that were previously independent
- weak intermediate abstractions that are neither obvious nor general
- extra policy decisions about when to use the meta-layer and when to write code directly
That last point matters more than it first appears. If every new option forces a fresh judgment call between "use the option spec" and "just write it directly", the system adds decision tax and reduces uniformity at the same time. That is a bad trade. A partial abstraction is often worse than either:
- direct hand-written code and docs, or
- a fully adopted projection system for a clearly bounded surface
So this experiment should be judged harshly. If it merely relocates drift into "is this option worth the macro?" discussions, it failed.
The experiment stays intentionally narrow:

- one renderer script in Python 3.14+
- one sample command spec for `wb-tangle`
- four projections:
  - a Clap-like Rust snippet
  - an `argparse` snippet
  - an AsciiDoc table
  - a JSON facts file
If this proves useful, the same intermediate model can later project into the
real clap code, MCP schemas, docs checks, and structured agent-facing facts.
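To make the "structured agent-facing facts" projection concrete before any code appears, here is a minimal sketch of what a facts file could contain for one option. The field names mirror the spec schema used below, but this is an illustration of the target shape, not a committed format.

```python
import json

# Hypothetical facts payload for a single option of wb-tangle.
# Field names follow the option-spec schema; the JSON layout itself
# is an assumption, not a settled contract.
facts = {
    "command": "tangle",
    "options": [
        {
            "id": "config",
            "long": "config",
            "short": "c",
            "value_kind": "path",
            "default": "weaveback.toml",
            "help_short": "Path to the tangle config file.",
        }
    ],
}

print(json.dumps(facts, indent=2))
```

A docs check or an agent can then consume this payload without parsing Rust or prose, which is the whole point of the facts projection.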
Failure criteria
This experiment is only worth keeping if it reduces more pain than it creates. It should be considered a failure if one or more of these become true:
- the intermediate declaration is harder to read than the hand-written `clap` or `argparse` code it replaces
- contributors need project-specific lore before they can add or modify an option
- the team keeps debating per-option whether the projection layer is "worth it"
- the generated views are still routinely edited manually, defeating the point
- the projections start chasing speculative future uses instead of solving current drift
The best outcome is not "macro all the things". The best outcome is to discover whether there is a narrow, high-drift surface where a shared declaration really is simpler than duplication.
Overview
Files
The script is deliberately stdlib-only. That keeps the experiment cheap to run in CI, in a dev shell, or inside a container.
# <<@file scripts/option_spec/render.py>>=
"""Render a small CLI option-spec into several projections.
The schema is intentionally narrow. The goal is to validate the "define once,
project everywhere" shape before touching the real weaveback CLI.
"""
import json
import sys
import tomllib
from dataclasses import asdict, dataclass, field
from pathlib import Path

@dataclass
class OptionSpec:
    id: str
    long: str
    value_kind: str
    help_short: str
    rationale: str
    doc: str
    short: str | None = None
    default: str | bool | None = None
    examples: list[str] = field(default_factory=list)

@dataclass
class CommandSpec:
    name: str
    summary: str
    rationale: str
    examples: list[str]
    options: list[OptionSpec]

def load_spec(path: Path) -> CommandSpec:
    data = tomllib.loads(path.read_text(encoding="utf-8"))
    options = [OptionSpec(**raw) for raw in data.get("options", [])]
    return CommandSpec(options=options, **data["command"])

def render_clap(spec: CommandSpec) -> str:
    # Clap-like Rust sketch: derive struct with one field per option.
    lines = [
        "// Generated projection; do not edit by hand.",
        "#[derive(clap::Parser)]",
        f"struct {spec.name.capitalize()}Args {{",
    ]
    for opt in spec.options:
        attrs = [f'long = "{opt.long}"']
        if opt.short is not None:
            attrs.append(f"short = '{opt.short}'")
        if opt.value_kind != "bool" and opt.default is not None:
            attrs.append(f'default_value = "{opt.default}"')
        rust_type = "bool" if opt.value_kind == "bool" else "std::path::PathBuf"
        lines.append(f"    /// {opt.help_short}")
        lines.append(f"    #[arg({', '.join(attrs)})]")
        lines.append(f"    {opt.id}: {rust_type},")
    lines.append("}")
    return "\n".join(lines) + "\n"

def render_argparse(spec: CommandSpec) -> str:
    # argparse sketch: one add_argument call per option.
    lines = [
        "import argparse",
        "",
        f'parser = argparse.ArgumentParser(prog="wb-{spec.name}", description="{spec.summary}")',
    ]
    for opt in spec.options:
        flags = [f'"--{opt.long}"']
        if opt.short is not None:
            flags.insert(0, f'"-{opt.short}"')
        if opt.value_kind == "bool":
            lines.append(f'parser.add_argument({", ".join(flags)}, action="store_true", help="{opt.help_short}")')
        else:
            lines.append(f'parser.add_argument({", ".join(flags)}, default="{opt.default}", help="{opt.help_short}")')
    return "\n".join(lines) + "\n"

def render_asciidoc(spec: CommandSpec) -> str:
    # AsciiDoc options table for the CLI reference.
    lines = [
        f".Options for wb-{spec.name}",
        '[cols="2,1,3",options="header"]',
        "|===",
        "|Option |Default |Description",
    ]
    for opt in spec.options:
        flag = f"--{opt.long}" + (f", -{opt.short}" if opt.short else "")
        lines.append(f"|`{flag}` |`{opt.default}` |{opt.help_short}")
    lines.append("|===")
    return "\n".join(lines) + "\n"

def render_facts(spec: CommandSpec) -> str:
    # Machine-readable facts: the whole spec, verbatim, as JSON.
    return json.dumps(asdict(spec), indent=2) + "\n"

RENDERERS = {
    "clap.rs": render_clap,
    "argparse.py": render_argparse,
    "options.adoc": render_asciidoc,
    "facts.json": render_facts,
}

def main(argv: list[str]) -> None:
    spec_path = Path(argv[1])
    out_dir = Path(argv[2]) if len(argv) > 2 else Path("generated")
    out_dir.mkdir(parents=True, exist_ok=True)
    spec = load_spec(spec_path)
    for suffix, render in RENDERERS.items():
        (out_dir / f"{spec.name}_{suffix}").write_text(render(spec), encoding="utf-8")

if __name__ == "__main__":
    main(sys.argv)
# @
The sample spec keeps the experiment honest by covering two different option shapes:
- `--config`, a typed value option with a default
- `--force-generated`, a boolean flag with safety-oriented rationale
# <<@file scripts/option_spec/specs/tangle.toml>>=
[command]
name = "tangle"
summary = "Run all tangle passes from weaveback.toml or an alternate config file."
rationale = "This is the narrowest useful pilot because tangle already mixes parser semantics, recovery flags, and documentation drift."
examples = [
"wb-tangle",
"wb-tangle --config alt.toml",
"wb-tangle --force-generated",
]
[[options]]
id = "config"
long = "config"
short = "c"
value_kind = "path"
default = "weaveback.toml"
help_short = "Path to the tangle config file."
rationale = "Lets the same binary operate on alternate roots and experimental pass graphs."
doc = "Reads the pass list and parser settings from a specific config file instead of the default weaveback.toml."
examples = ["wb-tangle --config docs-only.toml"]
[[options]]
id = "force_generated"
long = "force-generated"
value_kind = "bool"
default = false
help_short = "Overwrite generated files even if they differ from the stored baseline."
rationale = "Makes recovery explicit when literate source is authoritative and generated files have drifted locally."
doc = "This is an escape hatch for regeneration workflows, not the default safety path."
examples = ["wb-tangle --force-generated"]
# @
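For orientation, the `argparse` projection of this spec should reduce to something close to the following. The exact formatting is the renderer's business, so treat this as a sketch of the target shape rather than verbatim generated output:

```python
import argparse

# Hand-expanded equivalent of the tangle spec's argparse projection.
parser = argparse.ArgumentParser(
    prog="wb-tangle",
    description="Run all tangle passes from weaveback.toml or an alternate config file.",
)
parser.add_argument("-c", "--config", default="weaveback.toml",
                    help="Path to the tangle config file.")
parser.add_argument("--force-generated", action="store_true",
                    help="Overwrite generated files even if they differ from the stored baseline.")

args = parser.parse_args(["--config", "alt.toml", "--force-generated"])
print(args.config)           # alt.toml
print(args.force_generated)  # True
```

If the intermediate declaration is harder to read than these few lines, that is exactly the failure criterion above firing.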
The test stays cheap too: invoke the script in a temporary directory and check for the expected projections.
# <<@file scripts/option_spec/tests/test_render.py>>=
import subprocess
import sys
from pathlib import Path

HERE = Path(__file__).resolve().parent
RENDER = HERE.parent / "render.py"
SPEC = HERE.parent / "specs" / "tangle.toml"

EXPECTED = ["tangle_clap.rs", "tangle_argparse.py", "tangle_options.adoc", "tangle_facts.json"]

def test_render_writes_all_projections(tmp_path):
    subprocess.run([sys.executable, str(RENDER), str(SPEC), str(tmp_path)], check=True)
    for name in EXPECTED:
        assert (tmp_path / name).read_text(encoding="utf-8").strip()
# @
This is not yet the real CLI source of truth. It is a proving ground for three questions:
- does a compact intermediate model stay readable?
- do the projections actually reduce drift?
- is the rationale field useful enough to justify the extra structure?
If the answers are yes, the next step is not "macro everything". The next step
is to move one real command at a time behind the intermediate model and let
facts from that model feed a future docs-check command.
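As a sketch of what that docs-check could do with the facts projection: compare a default stated in hand-written prose against the default recorded in the spec. Everything here, the facts shape and the sample doc line, is hypothetical.

```python
import re

# Hypothetical facts extracted from the JSON projection, plus one line
# of hand-written documentation that is supposed to agree with it.
facts = {"config": {"default": "weaveback.toml"}}
doc_line = "By default, wb-tangle reads weaveback.toml from the project root."

def doc_mentions_default(option_id: str, doc_text: str) -> bool:
    """Flag drift when documented prose no longer names the spec's default."""
    default = facts[option_id]["default"]
    return re.search(re.escape(str(default)), doc_text) is not None

print(doc_mentions_default("config", doc_line))  # True
```

A real checker would need smarter matching than a substring search, but even this crude version turns "is the doc stale?" from a review-time guess into a mechanical answer.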