Integrations
X-40™ can be integrated in two primary ways: Trace Mode, which consumes token-level telemetry such as logprobs, and Sidecar Mode, which works even when token logprobs are not available. This split is intentional: it prevents vendor lock-in and avoids promising telemetry a provider cannot deliver.
X-40™ is validated and benchmarked with OpenAI GPT-4.1 as the default, telemetry-friendly model configuration. Validated configurations:
- GPT-4.1 (default)
- GPT-5.2 / GPT-5.1 in telemetry mode (reasoning effort set to none)
If a team wants GPT-5.2/5.1 with heavier reasoning modes, we typically switch to Sidecar Mode unless token-level telemetry remains available under that configuration.
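For the telemetry-friendly configurations above, a Trace Mode request simply enables token logprobs in the OpenAI Chat Completions API. The sketch below builds the request parameters only; how X-40 ingests the resulting logprobs is not shown, and the prompt is illustrative.

```python
# Sketch of a Trace Mode request payload for the OpenAI Chat Completions API.
# Enabling logprobs exposes the token-level telemetry Trace Mode relies on.
def trace_mode_params(prompt: str) -> dict:
    return {
        "model": "gpt-4.1",          # default, telemetry-friendly configuration
        "messages": [{"role": "user", "content": prompt}],
        "logprobs": True,            # token-level telemetry required by Trace Mode
        "top_logprobs": 5,           # per-token alternatives for richer evidence
    }

params = trace_mode_params("Summarize the quarterly risk report.")
```

The same parameter shape applies to GPT-5.2/5.1 in telemetry mode; only the `model` string changes.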
Many LLM APIs do not expose token-level telemetry (logprobs) in a consistent, product-friendly way. To avoid attracting the wrong leads, we do not promise Trace Mode unless telemetry is available.
Instead, X-40™ runs as a sidecar: you call your model, then send X-40 the minimal telemetry and/or outputs you choose to share for governance. Typical inputs include:
- output text (or hashed output, in privacy-max setups)
- confidence scores / top-class margins if available
- refusal signals / safety events
- batch drift metrics for production pipelines
X-40™ is also designed for ML inference governance where the “output” is a prediction, risk score, or classification. Typical telemetry we govern:
- probability/confidence score
- top-1 vs top-2 margin (decision separation)
- batch drift over time (rolling distribution shifts)
- stability envelopes for “safe automation vs verify” routing
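The "safe automation vs verify" routing above can be illustrated with the top-1 vs top-2 margin. The threshold value and route names below are illustrative, not X-40 defaults.

```python
# Sketch of margin-based routing on ML telemetry: a wide top-1 vs top-2
# margin means clear decision separation (safe to automate); a narrow
# margin routes the prediction to human verification.
def route(probs: list[float], margin_floor: float = 0.15) -> str:
    ranked = sorted(probs, reverse=True)
    margin = ranked[0] - ranked[1]      # top-1 vs top-2 decision separation
    return "automate" if margin >= margin_floor else "verify"

wide = route([0.82, 0.11, 0.07])    # margin 0.71 -> automate
narrow = route([0.48, 0.41, 0.11])  # margin 0.07 -> verify
```

In production, the same margin stream feeds the rolling drift metrics listed above.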
Examples: credit risk scoring, fraud detection, compliance triage, claims processing, trade/risk dashboards.
X-40’s differentiator is dual-evidence governance. The structural evidence channel is powered by QEIv15™ anchors (Φ, κ, ΔS families) via ResearchCore.
The structural evidence channel is configurable because some deployments prioritize latency or strict privacy boundaries. In high-stakes environments, we recommend enabling it by default.
- If you use OpenAI GPT-4.1 (or GPT-5.2/5.1 in telemetry mode) and want maximal automation: Trace Mode + Dual Evidence.
- If your provider does not expose token-level telemetry or you want strict privacy boundaries: Sidecar Mode.
- If you run ML pipelines: ML telemetry governance (confidence/margins/drift + policy routing).
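The decision guide above can be condensed into a small selection helper. The function and the mode labels are illustrative, not part of any X-40 API.

```python
# Illustrative mode selection mirroring the decision guide:
# ML pipelines get telemetry governance; token telemetry without strict
# privacy constraints gets Trace Mode + Dual Evidence; everything else
# falls back to Sidecar Mode.
def choose_mode(has_token_telemetry: bool, strict_privacy: bool,
                ml_pipeline: bool) -> str:
    if ml_pipeline:
        return "ml-telemetry-governance"
    if has_token_telemetry and not strict_privacy:
        return "trace+dual-evidence"
    return "sidecar"

mode = choose_mode(has_token_telemetry=False, strict_privacy=True,
                   ml_pipeline=False)
```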