X-40

Integrations

X-40™ can be integrated in two primary ways: Trace Mode, which consumes token-level telemetry such as logprobs, and Sidecar Mode, which works even when token logprobs are not available. This split is intentional: it prevents vendor lock-in and avoids overpromising.

OpenAI — supported today (Trace Mode)

X-40™ is validated and benchmarked with OpenAI GPT-4.1, which serves as the default telemetry-friendly model configuration.

Supported Trace Mode configurations
  • GPT-4.1 (default)
  • GPT-5.2 / GPT-5.1 in telemetry mode (reasoning effort set to none)

If a team wants GPT-5.2/5.1 with heavier reasoning modes, we typically switch to Sidecar Mode unless token-level telemetry remains available under that configuration.
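
For illustration, here is a minimal Trace Mode sketch using the OpenAI Python SDK. The logprobs request is standard OpenAI API usage; the x40.ingest_trace call at the end is a hypothetical placeholder for your X-40 ingestion path.

    # Minimal Trace Mode sketch. The OpenAI call below is real API usage;
    # the final X-40 ingestion call is a hypothetical placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4.1",  # default validated configuration
        messages=[{"role": "user", "content": "Summarize the filing."}],
        logprobs=True,    # request token-level telemetry
        top_logprobs=5,   # include per-token alternatives
    )

    choice = resp.choices[0]
    tokens = [
        {"token": t.token, "logprob": t.logprob}
        for t in choice.logprobs.content
    ]

    # Hypothetical: forward output text plus token telemetry to X-40.
    # x40.ingest_trace(output=choice.message.content, tokens=tokens)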

Other LLM vendors — Sidecar Mode (recommended)

Many LLM APIs do not expose token-level telemetry (logprobs) in a consistent, product-friendly way. To avoid attracting the wrong leads, we do not promise Trace Mode unless telemetry is available.

Instead, X-40™ runs as a sidecar: you call your model, then send X-40 the minimal telemetry and/or outputs you choose to share for governance, such as the fields below (a payload sketch follows the list):

  • output text (or hashed output, in privacy-max setups)
  • confidence scores / top-class margins if available
  • refusal signals / safety events
  • batch drift metrics for production pipelines
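
A minimal payload sketch under stated assumptions: the field names and the X40_SIDECAR_URL endpoint are illustrative, not a published X-40 schema.

    # Minimal Sidecar Mode payload sketch. Field names and the endpoint
    # URL are hypothetical, not a published X-40 schema.
    import hashlib
    import json
    import urllib.request

    def build_sidecar_payload(output_text, confidence=None, refusal=False,
                              privacy_max=False):
        """Assemble the minimal governance payload you choose to share."""
        payload = {
            "confidence": confidence,  # top-class score/margin, if available
            "refusal": refusal,        # refusal signal / safety event flag
        }
        if privacy_max:
            # Privacy-max setups share only a hash of the output text.
            payload["output_sha256"] = hashlib.sha256(
                output_text.encode("utf-8")).hexdigest()
        else:
            payload["output"] = output_text
        return payload

    # Hypothetical endpoint; substitute your X-40 sidecar ingestion URL.
    X40_SIDECAR_URL = "https://x40.example.internal/v1/sidecar"

    req = urllib.request.Request(
        X40_SIDECAR_URL,
        data=json.dumps(build_sidecar_payload("model output", 0.92)).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # send once wired to a live deployment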

Enterprise ML pipelines (non-LLM)

X-40™ is also designed for ML inference governance, where the “output” is a prediction, risk score, or classification. Typical telemetry we govern (a sketch follows the list):

  • probability/confidence score
  • top-1 vs top-2 margin (decision separation)
  • batch drift over time (rolling distribution shifts)
  • stability envelopes for “safe automation vs verify” routing
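
Two of these signals as a short sketch, assuming softmax-style class probabilities; PSI is one common drift metric, used here for illustration rather than as X-40's published method.

    # Sketch of two signals above: top-1 vs top-2 margin and batch drift.
    # PSI is an illustrative drift metric, not necessarily X-40's method.
    import numpy as np

    def top_margin(probs):
        """Decision separation: top-1 probability minus top-2."""
        top2 = np.sort(probs)[-2:]
        return float(top2[1] - top2[0])

    def psi(reference, current, bins=10):
        """Population Stability Index between two score batches."""
        edges = np.histogram_bin_edges(reference, bins=bins)
        ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
        cur_pct = np.histogram(current, bins=edges)[0] / len(current)
        ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0)
        cur_pct = np.clip(cur_pct, 1e-6, None)
        return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

    rng = np.random.default_rng(0)
    baseline = rng.beta(8, 2, size=5000)  # reference batch of scores
    today = rng.beta(7, 3, size=5000)     # current batch, slightly shifted

    print(top_margin(np.array([0.05, 0.15, 0.80])))  # 0.65
    print(psi(baseline, today))  # larger values indicate more drift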

Examples: credit risk scoring, fraud detection, compliance triage, claims processing, trade/risk dashboards.

QEIv15™ evidence channel (configurable, not “optional”)

X-40’s differentiator is dual-evidence governance. The structural evidence channel is powered by QEIv15™ anchors (Φ, κ, ΔS families) via ResearchCore.

The channel is configurable because some deployments prioritize latency or strict privacy boundaries. In high-stakes environments, we recommend enabling it by default.
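
An illustrative configuration sketch: every key here is hypothetical, and only the anchor family names (Φ, κ, ΔS) come from the QEIv15 description above.

    # Illustrative X-40 configuration sketch. All keys are hypothetical;
    # only the anchor family names come from the QEIv15 description above.
    X40_CONFIG = {
        "evidence": {
            "structural_channel": True,  # QEIv15 anchors via ResearchCore
            "anchor_families": ["Phi", "kappa", "DeltaS"],
        },
        "tradeoffs": {
            "max_added_latency_ms": 50,         # latency-sensitive deployments
            "keep_raw_text_in_boundary": True,  # strict privacy boundaries
        },
    }
    # High-stakes environments: leave structural_channel enabled by default.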

Workflows: law, finance, and ML governance routing

Which integration should you choose?
  • If you use OpenAI GPT-4.1 (or GPT-5.2/5.1 in telemetry mode) and want maximal automation: Trace Mode + Dual Evidence.
  • If your provider does not expose token-level telemetry or you want strict privacy boundaries: Sidecar Mode.
  • If you run ML pipelines: ML telemetry governance (confidence/margins/drift + policy routing).
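
The guidance above, restated as a sketch; the capability flags are hypothetical inputs, and the returned mode names mirror this page.

    # Decision guide as code. The input flags are hypothetical; the three
    # returned modes mirror the bullets above.
    def choose_integration(has_token_logprobs: bool,
                           strict_privacy: bool,
                           is_llm: bool) -> str:
        if not is_llm:
            return "ML telemetry governance"  # confidence/margins/drift
        if has_token_logprobs and not strict_privacy:
            return "Trace Mode + Dual Evidence"
        return "Sidecar Mode"

    assert choose_integration(True, False, True) == "Trace Mode + Dual Evidence"
    assert choose_integration(False, False, True) == "Sidecar Mode"
    assert choose_integration(False, True, False) == "ML telemetry governance"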