bezant
Typed async access to the Interactive Brokers Client Portal Web API — Rust-first, with HTTP / CLI / MCP / TypeScript surfaces auto-generated from the same vendored OpenAPI spec.
Bezant turns IBKR’s 154-endpoint CPAPI into five ergonomic surfaces that all ship from the same vendored OpenAPI 3.1 spec:
| Crate / package | Install | What it’s for |
|---|---|---|
| `bezant-core` | `cargo add bezant-core` | Typed async Rust client. Keepalive, WebSocket streaming, pagination, symbol cache, 11 typed error variants, `is_retryable()` predicate, `Subscription` cancel handles |
| `bezant-server` | `cargo install bezant-server` | axum HTTP sidecar exposing CPAPI as plain REST+JSON for any language. Production-hardened: CF Access cookie filtering, edge-cookie strip, request-ID propagation, graceful shutdown, concurrency cap |
| `bezant-cli` | `cargo install bezant-cli` | `bezant accounts`, `bezant positions DU123`, `bezant quote AAPL`, `bezant orders DU123` — `--output {json,table}` |
| `bezant-mcp` | `cargo install bezant-mcp` | Model Context Protocol server — Claude / Cursor / Continue can call IBKR tools |
| TypeScript client | npm install from repo | Auto-generated TS client for Node / Deno / browser |
All five drive off the same vendored IBKR OpenAPI spec. Re-run
./scripts/codegen.sh when IBKR ships a new revision and every surface
updates together — verified by 14 normaliser-invariant tests + a
CI drift check.
What’s special about it
- Production-grade IBKR deploy story. Out of the box, every cloud IBKR API client hits the same wall: `api.ibkr.com` (Akamai-fronted) rejects datacenter egress IPs. Bezant ships a documented Cloudflare Zero Trust + residential-Pi recipe that bypasses it without code changes — the same image runs on Railway or a Pi at home with no fork.
- Single-tenant proxy by design. `bezant-server` is honest about its trust model: one shared cookie jar, one IBKR account. No surprising fan-out semantics, no opaque session sharing.
- Edge-aware cookie handling. Drops `CF_Authorization`, `CF_AppSession`, AWS ALB OIDC, OAuth2 Proxy, Vercel JWT, and Pomerium cookies before they reach IBKR — Akamai 401s on unrecognised cookies, and we don’t want your bot to inherit that surprise.
- Per-request observability. Every typed handler is `#[tracing::instrument]`’d, every request gets a UUID `x-request-id` echoed in the response, and every mapped 4xx/5xx is logged at the boundary with its typed error variant.
- Diagnostic probe. `/debug/probe` (token-gated) walks `auth_status → ssodh_init → tickle → accounts` and pins the first diverging step in a top-level verdict. Built specifically to discriminate “proxy regression” from “upstream IBKR rejection” so you don’t waste hours debugging code that’s working.
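The verdict logic amounts to “first failing step wins”. A toy sketch (not the server’s actual code), with a bool standing in for each real HTTP check:

```rust
/// Walk ordered probe steps and pin the first failure, like /debug/probe's
/// top-level verdict. Step names mirror the documented sequence.
fn probe_verdict(steps: &[(&str, bool)]) -> String {
    match steps.iter().find(|(_, ok)| !ok) {
        Some((name, _)) => format!("diverged at `{name}`"),
        None => "all steps passed".to_string(),
    }
}

fn main() {
    let run = [
        ("auth_status", true),
        ("ssodh_init", true),
        ("tickle", false), // first divergence pins the verdict
        ("accounts", false),
    ];
    println!("{}", probe_verdict(&run));
}
```

A `tickle` verdict points at the upstream session, while an `auth_status` failure implicates the proxy path itself.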
Where to go next
| Goal | Page |
|---|---|
| Get something running locally | Quickstart |
| Understand the layered design | Architecture overview |
| Deploy to production | Cloudflare Zero Trust + Pi |
| Use the Rust crate | Rust crate |
| Use the HTTP sidecar from non-Rust | HTTP sidecar |
| Use the CLI | Command-line |
| Wire up an MCP client | MCP server |
| Refresh the spec / regen clients | Codegen pipeline |
| Contribute | Contributing |
Status
Alpha — v0.3. Production-deployed against IBKR live + paper accounts; the public API surface will continue to evolve until v1.0. See the ROADMAP for what’s shipped and what’s next.
Not affiliated with Interactive Brokers
Bezant is an independent open-source project. Trading involves substantial risk; this software is provided without warranty. See the license.
Quickstart
The fastest path from zero to a live IBKR call.
Prerequisites
- An IBKR account (paper is fine for everything below).
- The IBKR Client Portal Gateway running locally. The repo ships a Docker Compose file that packages it alongside `bezant-server`, so:
git clone https://github.com/isaacrowntree/bezant
cd bezant
docker compose up
Open https://localhost:5000, log in with your IBKR credentials + 2FA. That’s the Gateway. From here, Bezant keeps the session alive automatically.
macOS gotcha — port 5000. macOS Sonoma and later run an AirPlay Receiver on `:5000` by default. If your Docker Compose stack comes up but `https://localhost:5000` returns a mysterious `403` with `Server: AirTunes`, that’s why. Either:
- Disable it in System Settings → General → AirDrop & Handoff → AirPlay Receiver, or
- Edit `docker-compose.yml` to remap the host port: `"5001:5000"` instead of `"5000:5000"`, then open https://localhost:5001 instead.
Sanity-check via curl
curl http://localhost:8080/health
# {"authenticated":true,"connected":true,"competing":false,"message":null}
curl http://localhost:8080/accounts
# [ ... your accounts ]
Everything from here is optional sugar on top.
Rust
cargo add bezant-core tokio --features tokio/full
Or in Cargo.toml:
[dependencies]
bezant-core = "0.3"
tokio = { version = "1", features = ["full"] }
The crate publishes its lib as `bezant`, so you write `use bezant::...` regardless
of the manifest entry. There’s also a `bezant::prelude` for the common
imports:
```rust
use bezant::prelude::*;
```

```rust
use std::time::Duration;

#[tokio::main]
async fn main() -> bezant::Result<()> {
    let client = bezant::Client::new("https://localhost:5000/v1/api")?;
    let _keepalive = client.spawn_keepalive(Duration::from_secs(60));

    client.health().await?;

    let accounts = client
        .api()
        .get_all_accounts(bezant::api::GetAllAccountsRequest::default())
        .await?;
    println!("{accounts:#?}");
    Ok(())
}
```
TypeScript / Node
npm install github:isaacrowntree/bezant#main:clients/typescript
import { Configuration, TradingPortfolioApi } from "bezant-client";
const config = new Configuration({
basePath: "https://localhost:5000/v1/api",
});
const accounts = await new TradingPortfolioApi(config).getAllAccounts();
console.log(accounts);
CLI
cargo install bezant-cli
bezant health
bezant accounts --output table
bezant positions DU123456 --output table
bezant quote AAPL
bezant orders DU123456 --output table
--output {json,table} controls the format; default is json for
piping into jq. Tabular endpoints (accounts, positions, orders)
get a comfy-table renderer when you pass --output table.
MCP (Claude Desktop / Cursor / Continue)
cargo install bezant-mcp
Add to your client config:
{
"mcpServers": {
"bezant": {
"command": "bezant-mcp",
"env": {
"IBKR_GATEWAY_URL": "https://localhost:5000/v1/api"
}
}
}
}
The LLM now has six IBKR tools: health, list_accounts,
account_summary, positions, conid_for, tickle.
Architecture overview
Bezant is five surfaces over one vendored spec. Understanding how they compose is the key to picking the right one for your use case.
┌────────────────────────────────────────────────────────────────────────┐
│ vendored OpenAPI 3.1 spec (bezant-spec) │
│ ─ normalise (13 steps in scripts/normalize-spec.py) │
│ ─ upgrade 3.0 → 3.1 │
└──────────┬───────────────────────────────────┬─────────────────────────┘
│ │
│ oas3-gen │ openapi-generator-cli
▼ ▼
┌─────────────────────┐ ┌─────────────────────┐
│ bezant-api (Rust) │ │ TypeScript client │
│ 167 methods │ │ npm / Deno / fetch │
│ 1030 types │ └─────────────────────┘
└──────────┬──────────┘
│
▼
┌─────────────────────┐
│ bezant (facade) │ keepalive · health · pagination · SymbolCache
│ │ · WsClient · tracing spans · typed errors
└──────────┬──────────┘
│
▼
┌─────┴─────┬──────────┬──────────┐
▼ ▼ ▼ ▼
┌─────────┐ ┌────────┐ ┌────────┐ ┌──────────┐
│ your │ │ bezant │ │ bezant │ │ bezant │
│ Rust bot│ │-cli │ │-server │ │-mcp │
└─────────┘ └────────┘ └────────┘ └──────────┘
(HTTP) (MCP)
│ │
▼ ▼
any lang LLMs
What lives where
- `bezant-spec` — the IBKR OpenAPI spec as IBKR publishes it, plus the normalisation script. Nothing else touches the raw spec directly.
- `bezant-api` — auto-generated Rust client. Don’t hand-edit; re-run `./scripts/codegen.sh` to refresh.
- `bezant` (core) — the ergonomic layer you actually want to use from Rust. Wraps `bezant-api` but adds session management, pagination, WebSockets, typed errors.
- `bezant-server` — an axum HTTP sidecar. Mostly an untyped pass-through (it forwards the Gateway’s JSON verbatim) — this is deliberate; see Why pass-through.
- `bezant-cli` — clap wrapper over the facade. No TCP listener; spawns one Gateway connection per invocation.
- `bezant-mcp` — rmcp-backed server exposing CPAPI as MCP tools.
Why so many surfaces?
Because the same spec gives us two axes of generation for free:
- Transport axis — Rust native (`bezant-api`), HTTP REST (`bezant-server`), stdio MCP (`bezant-mcp`), CLI (`bezant-cli`), TypeScript fetch (`clients/typescript`).
- Abstraction axis — raw CPAPI access (`bezant-api`) vs the ergonomic facade (`bezant`) vs the pass-through proxy (`bezant-server`).
Each surface picks a point on these axes that suits a specific consumer:
- Rust bot directly linking the library → `bezant` + `bezant-api`
- Node / Python bot hitting HTTP → `bezant-server`
- Shell / cron jobs → `bezant-cli`
- LLM chat workflows → `bezant-mcp`
- Browser / Deno → TypeScript client
The spec is the contract
Anything that needs to change the wire format changes the vendored spec, then re-runs codegen. The generated crate is never hand-edited. This keeps all five surfaces in lock-step and means an IBKR spec update propagates to every surface with one command.
Rust crate (bezant)
Full rustdoc is deployed alongside this book — see Rust API reference.
Feature tour
Typed client with sane defaults
```rust
use std::time::Duration;

#[tokio::main]
async fn main() -> bezant::Result<()> {
    let client = bezant::Client::builder("https://localhost:5000/v1/api")
        .timeout(Duration::from_secs(30))
        .accept_invalid_certs(true) // Gateway uses a self-signed cert
        .user_agent("my-bot/0.1")
        .build()?;

    // keeps /tickle firing every 60s in the background
    let _keepalive = client.spawn_keepalive(Duration::from_secs(60));

    // returns typed errors: NotAuthenticated, NoSession, Http, Api, ...
    let status = client.health().await?;
    println!("gateway: {status:?}");
    Ok(())
}
```
Paginated positions helper
Skip writing the `/positions/{page}` loop yourself:

```rust
use bezant::Client;

async fn demo(client: Client) -> bezant::Result<()> {
    let positions: Vec<bezant::Position> = client.all_positions("DU123456").await?;
    println!("{} open positions", positions.len());
    Ok(())
}
```
Symbol → conid cache
```rust
use bezant::Client;

async fn demo(client: Client) -> bezant::Result<()> {
    let cache = bezant::SymbolCache::new(client);
    let aapl = cache.conid_for("AAPL").await?;  // network call
    let aapl2 = cache.conid_for("AAPL").await?; // cached
    assert_eq!(aapl, aapl2);
    Ok(())
}
```
WebSocket streaming
```rust
use bezant::{Client, WsClient, MarketDataFields, WsMessage};

async fn demo(client: Client) -> bezant::Result<()> {
    let mut ws = WsClient::connect(&client).await?;
    ws.subscribe_market_data(265598 /* AAPL */, &MarketDataFields::default_l1())
        .await?;
    while let Some(msg) = ws.next_message().await? {
        match msg {
            WsMessage::MarketData { conid, payload } => println!("{conid}: {payload}"),
            WsMessage::Order(o) => println!("order update: {o}"),
            _ => {}
        }
    }
    Ok(())
}
```
Raw access to every CPAPI endpoint
The ergonomic facade covers the 80% use case. For the long tail (155 endpoints) drop straight into the generated client:
```rust
use bezant::Client;

async fn demo(client: Client) -> bezant::Result<()> {
    let _resp = client
        .api()
        .get_portfolio_summary(bezant::api::GetPortfolioSummaryRequest {
            path: bezant::api::GetPortfolioSummaryRequestPath {
                account_id: "DU123456".into(),
            },
        })
        .await?;
    Ok(())
}
```
Error handling
bezant::Error is #[non_exhaustive] and covers:
| Variant | Meaning |
|---|---|
| `InvalidBaseUrl` | The base URL passed to `Client::new` didn’t parse |
| `Http` | Transport failure (DNS, TLS, timeouts) |
| `Api` | Anything the generated client bubbled up |
| `NotAuthenticated` | Gateway returned 401 — the user hasn’t logged in |
| `NoSession` | Gateway is reachable but reports `connected: false` |
| `Other(String)` | Misc failures that don’t fit above |
Client code should pattern-match on the variants it cares about and use
_ => ... for the rest (important because the enum is #[non_exhaustive]).
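A minimal sketch of that pattern, using a local stand-in enum that mirrors the documented variants (the real `bezant::Error` lives in the crate and carries richer payloads):

```rust
// Stand-in mirroring the documented variants; the real enum is
// #[non_exhaustive], which is why the catch-all arm below matters.
#[allow(dead_code)]
#[derive(Debug)]
enum Error {
    InvalidBaseUrl,
    Http(String),
    Api(String),
    NotAuthenticated,
    NoSession,
    Other(String),
}

/// Decide how the caller should react to a failed call.
fn react(err: &Error) -> &'static str {
    match err {
        // Gateway says 401: a human must re-login; retrying won't help.
        Error::NotAuthenticated => "alert: needs interactive IBKR login",
        // Session dropped or transport hiccup: a retry may recover it.
        Error::NoSession | Error::Http(_) => "retry with backoff",
        // Catch-all keeps this future-proof against new variants.
        _ => "log and give up",
    }
}

fn main() {
    println!("{}", react(&Error::NotAuthenticated));
    println!("{}", react(&Error::NoSession));
}
```

The retry/alert split here is an illustration; consult `is_retryable()` on the real error type for the crate’s own classification.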
Runnable examples
Clone the repo and try the bundled examples against the local Docker gateway without writing any code:
docker compose up -d
# open https://localhost:5000 once in your browser to log in
cargo run -p bezant-core --example health
IBKR_ACCOUNT_ID=DU123456 cargo run -p bezant-core --example list_positions
IBKR_SYMBOL=AAPL cargo run -p bezant-core --example stream_quotes
Source: crates/bezant-core/examples/.
HTTP sidecar (bezant-server)
A thin axum binary that exposes the CPAPI as plain REST+JSON. Most of its handlers are deliberately pass-through — they forward the Gateway’s JSON body verbatim — so any language can consume CPAPI without linking Rust.
Endpoints
REST passthrough
| Method | Path | Upstream |
|---|---|---|
| GET | /health | POST /iserver/auth/status (projected) |
| GET | /accounts | GET /portfolio/accounts |
| GET | /accounts/:id/summary | GET /portfolio/{id}/summary |
| GET | /accounts/:id/positions?page=N | GET /portfolio/{id}/positions/{N} |
| GET | /accounts/:id/ledger | GET /portfolio/{id}/ledger |
| GET / POST | /accounts/:id/orders | GET / POST /iserver/account/{id}/orders |
| DELETE | /accounts/:id/orders/:order_id | DELETE /iserver/account/{id}/order/{oid} |
| GET | /contracts/search?symbol=X | POST /iserver/secdef/search |
| GET | /market/snapshot?conids=A,B&fields=… | GET /iserver/marketdata/snapshot?… |
| fallback | any other path | verbatim passthrough (drives /sso/Login etc.) |
Events capture (opt-in via BEZANT_EVENTS_ENABLED)
The server can optionally run an internal CPAPI WebSocket consumer that buffers order, PnL, and (lazily per-conid) market-data frames into per-topic ring buffers. Consumers poll cursor-paginated REST endpoints instead of opening their own WebSocket — events are captured server-side the moment they arrive, regardless of whether anyone is currently listening.
| Method | Path | Returns |
|---|---|---|
| GET | /events/orders?since=N&limit=N | order lifecycle frames (CPAPI sor) |
| GET | /events/pnl?since=N&limit=N | PnL frames (CPAPI spl) |
| GET | /events/marketdata?conid=N&since=N&limit=N | L1 market data; lazy upstream subscribe per conid |
| GET | /events/gap?since=N&limit=N | synthetic gap markers (WS reconnect, process restart) |
| GET | /events/_status | connector liveness + per-topic buffer sizes |
| GET | /events/{topic}/history?since_ts=…&limit=N | sqlite history (when BEZANT_EVENTS_DB_PATH is set) |
Wire semantics:
- 200 — `{events, next_cursor, reset_epoch}`. Use `next_cursor` as the next `since=`.
- 204 — caller is caught up; cursor stays put.
- 412 — `{head_cursor, reset_epoch, code: "cursor_expired"}`. The caller’s cursor is older than the ring buffer’s head; reset to `head_cursor - 1` and emit a synthetic gap on the consumer side.
- 503 — `{code: "events_disabled"}` when capture is off.
reset_epoch bumps on every WS reconnect or process restart. Any change
in epoch is the consumer’s signal that “you missed something” — the
connector also injects a synthetic event into every active topic ring
so a polling consumer sees the gap on its next read.
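Putting the 200/204/412 rules together, a consumer’s cursor bookkeeping can be sketched like this (HTTP layer elided; all type and field names here are illustrative, not part of bezant):

```rust
/// One poll of /events/* modelled without the HTTP layer.
enum Poll {
    Events { next_cursor: u64, reset_epoch: u64 },        // 200
    CaughtUp,                                             // 204
    CursorExpired { head_cursor: u64, reset_epoch: u64 }, // 412
}

struct Consumer {
    since: u64,         // sent as ?since= on the next poll
    epoch: Option<u64>, // last reset_epoch we saw
    gaps: u32,          // how many times we knowingly missed events
}

impl Consumer {
    fn apply(&mut self, resp: Poll) {
        match resp {
            Poll::Events { next_cursor, reset_epoch } => {
                // Any epoch change means the server-side ring was rebuilt:
                // record a gap even though the poll itself succeeded.
                if self.epoch.replace(reset_epoch).map_or(false, |e| e != reset_epoch) {
                    self.gaps += 1;
                }
                self.since = next_cursor; // use next_cursor as the next since=
            }
            Poll::CaughtUp => {} // 204: cursor stays put
            Poll::CursorExpired { head_cursor, reset_epoch } => {
                // 412: our cursor fell off the ring; reset and record the gap.
                self.epoch = Some(reset_epoch);
                self.since = head_cursor.saturating_sub(1); // head_cursor - 1
                self.gaps += 1;
            }
        }
    }
}

fn main() {
    let mut c = Consumer { since: 0, epoch: None, gaps: 0 };
    c.apply(Poll::Events { next_cursor: 42, reset_epoch: 1 });
    c.apply(Poll::CaughtUp);
    c.apply(Poll::CursorExpired { head_cursor: 100, reset_epoch: 2 });
    assert_eq!((c.since, c.gaps), (99, 1));
}
```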
Error envelope
Non-success responses come back as:
{ "code": "not_authenticated", "message": "gateway is not authenticated …" }
Status codes map:
| Variant | HTTP |
|---|---|
| `not_authenticated` | 401 |
| `no_session` | 503 |
| `upstream_http_error` | 502 |
| `upstream_api_error` | 502 |
| `invalid_base_url` | 400 |
| `internal` | 500 |
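In code, the mapping is a straight match. This is an illustrative mirror of the table, not the server’s actual implementation:

```rust
/// Mirror of the documented error-code → HTTP status table.
fn status_for(code: &str) -> u16 {
    match code {
        "not_authenticated" => 401,
        "no_session" => 503,
        "upstream_http_error" | "upstream_api_error" => 502,
        "invalid_base_url" => 400,
        _ => 500, // internal, and anything unrecognised
    }
}

fn main() {
    assert_eq!(status_for("no_session"), 503);
    assert_eq!(status_for("upstream_api_error"), 502);
}
```

Note that both upstream variants collapse to 502: from the caller’s point of view, the proxy is fine and the Gateway is not.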
Configuration
Env-first, clap-exposed. See bezant-server --help.
| Variable | Default |
|---|---|
| `IBKR_GATEWAY_URL` | `https://localhost:5000/v1/api` |
| `BEZANT_BIND` | `0.0.0.0:8080` |
| `BEZANT_KEEPALIVE_SECS` | `60` |
| `BEZANT_VERIFY_TLS` | `false` (accepts the Gateway’s self-signed cert) |
| `BEZANT_DEBUG_TOKEN` | unset (`/debug/*` 404s without it) |
| `BEZANT_EVENTS_ENABLED` | `false` |
| `BEZANT_EVENTS_DB_PATH` | unset (sqlite history disabled) |
| `BEZANT_EVENTS_ORDERS_CAP` | `1000` |
| `BEZANT_EVENTS_PNL_CAP` | `5000` |
| `BEZANT_EVENTS_MARKETDATA_CAP` | `2000` per conid |
Deployment shape
The Docker compose file in the repo root is the canonical shape:
┌────────────┐  HTTP request   ┌──────────────┐  HTTPS + cookie   ┌──────┐
│  your app  │ ──────────────► │ bezant-server│ ────────────────► │ IBKR │
│ (any lang) │ ◄────────────── │              │ ◄──────────────── │  GW  │
└────────────┘  JSON response  └──────────────┘                   └──────┘
Tip: keep the sidecar on 127.0.0.1 in production. It holds a live IBKR
session cookie — anyone who reaches its port can make trades.
Command-line (bezant-cli)
Ships as the bezant binary. Every subcommand prints JSON on stdout;
pass --pretty for indented output. Errors exit non-zero with a bezant:
prefix on stderr.
Install
cargo install --git https://github.com/isaacrowntree/bezant bezant-cli
Subcommands
bezant health # auth + session status
bezant tickle # extend the session manually
bezant accounts --pretty # list accounts
bezant summary DU123456 --pretty # portfolio summary
bezant positions DU123456 --pretty # paginated positions (all pages)
bezant conid AAPL # ticker → conid lookup
Scripting
Every subcommand produces stable JSON, so jq is your friend:
bezant accounts | jq -r '.[].accountId'
bezant positions DU123456 | jq 'map(select(.position > 0))'
Environment
| Variable | Default |
|---|---|
| `IBKR_GATEWAY_URL` | `https://localhost:5000/v1/api` |
| `BEZANT_REJECT_INVALID_CERTS` | unset (accepts self-signed) |
| `RUST_LOG` | `warn` |
MCP server (bezant-mcp)
A Model Context Protocol server that exposes IBKR read-only endpoints as structured tools an LLM can call. Runs over stdio, so it plugs into any MCP-compatible client (Claude Desktop, Cursor, Continue, Claude Code).
Why MCP
LLM-driven trading assistants only work if they read your live account state. MCP gives the model a narrow, typed API: the model asks for “account_summary”, the protocol delivers fresh JSON from IBKR. No more hallucinated NAV numbers.
Tool surface (v0.1)
All read-only. Order placement lives behind a feature flag in later releases — MCP tools are powerful and we don’t want a chat window accidentally firing orders.
| Tool | Purpose |
|---|---|
| `health` | Is the Gateway authenticated + connected? |
| `list_accounts` | All IBKR account IDs on the Gateway session |
| `account_summary` | NAV, cash, buying power, margin detail |
| `positions` | Every open position for an account (pagination handled) |
| `conid_for` | Resolve ticker → IBKR contract id (memoised) |
| `tickle` | Manually extend the session |
Install
cargo install --git https://github.com/isaacrowntree/bezant bezant-mcp
Configure Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"bezant": {
"command": "bezant-mcp",
"env": {
"IBKR_GATEWAY_URL": "https://localhost:5000/v1/api"
}
}
}
}
Restart Claude Desktop. Ask: “What accounts do I have?” and the LLM
should call list_accounts automatically.
Configure any other MCP client
Spawn bezant-mcp as a stdio subprocess. Environment variables from the
shell are inherited. Logs go to stderr — don’t redirect stdout; that’s
the MCP protocol channel.
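A sketch of what that wiring looks like from a host process written in Rust. The `Command` setup is the point here; a real spawn additionally needs `bezant-mcp` on `PATH`:

```rust
use std::process::{Command, Stdio};

/// Build the subprocess an MCP host would spawn for bezant-mcp.
/// stdin/stdout carry the MCP protocol; stderr is left alone for logs.
fn mcp_command(gateway_url: &str) -> Command {
    let mut cmd = Command::new("bezant-mcp");
    cmd.env("IBKR_GATEWAY_URL", gateway_url)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())    // protocol channel: never redirect to logs
        .stderr(Stdio::inherit()); // logs stay on stderr
    cmd
}

fn main() {
    let cmd = mcp_command("https://localhost:5000/v1/api");
    assert_eq!(cmd.get_program(), "bezant-mcp");
}
```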
Safety
- Every tool is read-only in v0.1.
- The server inherits `bezant`’s session keepalive, so it won’t spam IBKR with reconnects.
- Tool descriptions explicitly warn the LLM to call `health` before anything else.
- Future order-placement tools will require `BEZANT_MCP_ALLOW_ORDERS=1` plus fresh confirmation for every call — no silent trading.
TypeScript client
Generated from the same vendored OpenAPI 3.1 spec as bezant-api, via
openapi-generator-cli -g typescript-fetch. Lives in
clients/typescript.
Install
Until it’s on npm:
npm install github:isaacrowntree/bezant#main:clients/typescript
Usage
import {
Configuration,
TradingAccountsApi,
TradingPortfolioApi,
} from "bezant-client";
const config = new Configuration({
basePath: "https://localhost:5000/v1/api",
});
const accounts = await new TradingAccountsApi(config).getAllAccounts();
const summary = await new TradingPortfolioApi(config).getPortfolioSummary({
accountId: "DU123456",
});
TLS gotcha
The Gateway ships a self-signed cert. Browsers reject it; Node / Deno reject it by default.
- Node (dev only!): `NODE_TLS_REJECT_UNAUTHORIZED=0 npm run ...`
- Production: put Bezant behind a reverse proxy that terminates TLS with a trusted cert, or install the Gateway’s cert into the system trust store.
When to use this over bezant-server
- TypeScript client: when you want typed methods and models in your frontend or Node app.
- `bezant-server`: when you want language-agnostic REST, don’t mind untyped JSON responses, or need the facade’s features (keepalive, pagination) without reimplementing them in JS.
Codegen pipeline
Every surface except the facade is auto-generated. Here’s the pipeline in one picture:
api.ibkr.com/gw/api/v3/api-docs
│
▼ scripts/refresh-spec.sh (curl + jq)
┌─────────────────────────┐
│ ibkr-openapi.json │ ← vendored 3.0 spec (IBKR upstream format)
└───────────┬─────────────┘
│
▼ scripts/normalize-spec.py (13 normalisation steps)
┌─────────────────────────┐
│ ibkr-openapi.json │ ← still 3.0, but repaired
└───────────┬─────────────┘
│
▼ scripts/upgrade-to-3.1.py
┌─────────────────────────┐
│ ibkr-openapi-3.1.json │ ← 3.1; fed to every generator
└─────┬───────────────────┘
│ │
│ oas3-gen │ openapi-generator-cli
▼ ▼
┌──────────────────┐ ┌─────────────────────┐
│ bezant-api │ │ clients/typescript │
│ (Rust generated) │ │ (TS generated) │
└──────────────────┘ └─────────────────────┘
Running it
./scripts/refresh-spec.sh # pull upstream (optional; run when IBKR revises)
./scripts/codegen.sh # normalise → 3.1 → oas3-gen → bezant-api
./scripts/codegen-ts.sh # openapi-generator-cli → clients/typescript
Why this many steps
Most OpenAPI toolchains assume the spec is well-formed. Real-world broker specs rarely are. IBKR’s spec ships 13 distinct categories of quirk that break codegen if you feed the raw spec to any generator. Documenting and normalising each one means the generators don’t need to be tuned per-quirk — and we can upstream each normalisation as a bug report against IBKR, with the eventual goal of deleting our normaliser entirely.
See Spec normalisation for the full list.
Extending to another language
Adding (say) a Go client is ~1 hour of work:
- Pick a generator — `oapi-codegen` is idiomatic for Go.
- Write a `scripts/codegen-go.sh` that invokes the generator against the normalised 3.1 spec (`crates/bezant-spec/ibkr-openapi-3.1.json`).
- If that generator hits new quirks, add steps to `scripts/normalize-spec.py` — they benefit every language, not just Go.
You pay the normalisation tax once; every generator benefits forever.
Spec normalisation
scripts/normalize-spec.py takes the IBKR upstream spec and applies a
series of surgical transforms so every downstream generator can consume it
cleanly.
Each transform is a distinct, upstreamable fix — the end goal is that IBKR fixes these in their spec and we can delete the corresponding normalisation step.
The 13 steps (current)
1. Strip null security scopes. IBKR emits `security[].scheme: [null]` where OpenAPI 3.0 requires `[]` or `[scope-string]`.
2. Synthesise missing `operationId`s. Progenitor and oas3-gen both require every operation to have one; IBKR omits them on ~50 operations.
3. Disambiguate duplicate `operationId`s. IBKR ships at least one duplicate (`getTradingSchedule` × 2 on different paths). We append a path-derived suffix to later occurrences.
4. Desugar ambiguous enum variants. Enums whose values collapse into non-unique Rust identifiers after sanitisation (`>=`, `<=`, `>`, `<`, `==`) are downgraded to plain `type: string` with the variants captured in the `description`.
5. Rewrite exotic content types. IBKR uses `application/jwt` in a few places; we rewrite to `text/plain` with a string-typed schema.
6. Reconcile enum values with the declared `type`. Example: a field declared `type: number` with enum `["0", "1", "2"]` gets the enum values coerced to numbers.
7. Demote misplaced path parameters. Several operations declare `in: path` parameters whose placeholder isn’t in the URL template. We demote them to `in: query`.
8. Drop unknown string formats. `format: "jwt"` isn’t a standard string format; we strip it so generators don’t emit broken wrappers.
9. Demote cookie parameters to headers. Progenitor doesn’t support `in: cookie`; we rewrite to `in: header`.
10. Collapse multi-content-type success responses. When IBKR offers a 200 response in both `application/json` and `application/pdf`, we pick JSON and drop the rest so progenitor’s assertion holds.
11. Drop WebSocket upgrade operations. Operations with only `1xx` responses (e.g. `101 Switching Protocols`) can’t be modelled as REST.
12. Stringify numeric-array query parameters. oas3-gen’s `StringWithCommaSeparator` only handles strings; array-of-integer query params get their items coerced to strings.
13. Widen `integer` fields with float example values. IBKR declares `SMA`, `balance`, `accruedInterest` etc. as `integer` but ships their example payloads as `368538.0`. The snapshot tests catch this and the normaliser widens the field to `number` automatically.
The spec-example-widening story
Step 13 was discovered by the snapshot tests in `bezant-core/tests/examples.rs`. Those tests round-trip real IBKR example payloads through the generated Rust types. The first run failed on `SMA: 368538.0` because the type was `i32`. Rather than papering over it with a manual cast, we made the normaliser smarter: walk every example, find every integer-typed field with a float value, widen the schema to `number`. 37 fields get widened per codegen run now.
This is the canonical pattern: a failing test should prompt a normalisation step, not a hand-patch. It catches future IBKR drift without human attention.
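For illustration, here is the shape of that walk in Rust, with a toy schema/example model. The real step lives in `scripts/normalize-spec.py` and operates on raw JSON; every name below is mine:

```rust
#[derive(Debug, PartialEq)]
enum Schema {
    Integer,
    Number,
    Object(Vec<(String, Schema)>),
}

#[allow(dead_code)]
enum Example {
    Int(i64),
    Float(f64),
    Object(Vec<(String, Example)>),
}

/// Walk schema and example together; any integer-typed field whose
/// example carries a float value is widened to number.
fn widen(schema: &mut Schema, example: &Example) {
    match example {
        Example::Float(_) => {
            if *schema == Schema::Integer {
                *schema = Schema::Number; // the step-13 widening
            }
        }
        Example::Object(values) => {
            if let Schema::Object(fields) = schema {
                for (name, child) in fields {
                    if let Some((_, v)) = values.iter().find(|(k, _)| k == name) {
                        widen(child, v);
                    }
                }
            }
        }
        _ => {}
    }
}

fn main() {
    // SMA is declared integer but the example payload ships 368538.0.
    let mut schema = Schema::Object(vec![("SMA".to_string(), Schema::Integer)]);
    let example = Example::Object(vec![("SMA".to_string(), Example::Float(368538.0))]);
    widen(&mut schema, &example);
    assert_eq!(schema, Schema::Object(vec![("SMA".to_string(), Schema::Number)]));
}
```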
Testing strategy
34 tests across the workspace, all green in CI. Here’s where they live and what they cover.
┌──────────────────┬──────────────────────────────────────────────────────┐
│ Suite │ What it proves │
├──────────────────┼──────────────────────────────────────────────────────┤
│ bezant-spec (2) │ Vendored JSON parses; UPSTREAM_VERSION matches │
│ │ the embedded `info.version` │
├──────────────────┼──────────────────────────────────────────────────────┤
│ bezant (12) │ 6 ws::tests (URL rewriting, message classification) │
│ │ 6 facade tests against wiremock (auth, tickle, │
│ │ health, error mapping) │
│ │ 4 snapshot tests (deserialise real IBKR examples) │
├──────────────────┼──────────────────────────────────────────────────────┤
│ bezant-server │ 7 axum integration tests against wiremock (every │
│ (7) │ endpoint, including error paths) │
├──────────────────┼──────────────────────────────────────────────────────┤
│ bezant-cli (5) │ Spawn real compiled binary, exercise subcommands │
│ │ against wiremock, verify JSON output + exit codes │
├──────────────────┼──────────────────────────────────────────────────────┤
│ bezant-mcp (4) │ In-process MCP server over `tokio::io::duplex`, │
│ │ client lists tools + calls them, verify JSON round- │
│ │ trip and pagination │
└──────────────────┴──────────────────────────────────────────────────────┘
Snapshot tests from spec examples
The coolest part. scripts/extract-examples.py walks the vendored spec
and pulls every examples.*.value entry into a JSON fixture file.
crates/bezant-core/tests/examples.rs then round-trips each payload
through the corresponding Rust type.
This means:
- If IBKR changes a response shape, our tests break before our users do.
- If our spec normaliser accidentally collapses a type, the snapshot tests catch it.
- New coverage is ~30 seconds of work: add operation IDs to the `--only` list in `scripts/codegen.sh`, re-run, done.
Mock gateway pattern
Every integration test shares this pattern:
```rust
let gateway = MockServer::start().await;

Mock::given(method("POST"))
    .and(path("/v1/api/iserver/auth/status"))
    .respond_with(ResponseTemplate::new(200).set_body_json(json!({...})))
    .mount(&gateway)
    .await;

let client = bezant::Client::builder(format!("{}/v1/api", gateway.uri()))
    .accept_invalid_certs(true)
    .build()?;
```
wiremock runs an actual HTTP server on a random port; bezant::Client
talks to it exactly like it would talk to the real Gateway. No mocks of
reqwest, no fake Response objects — real HTTP end-to-end.
Running locally
cargo test --workspace # all 34 tests
cargo test -p bezant-core # just the facade
cargo test -p bezant-server --test routes # just the axum integration
Adding new tests
See the existing patterns in:
- `crates/bezant-core/tests/facade.rs` — wiremock integration
- `crates/bezant-core/tests/examples.rs` — spec-example round-trips
- `crates/bezant-server/tests/routes.rs` — axum + wiremock
- `crates/bezant-cli/tests/cli.rs` — `assert_cmd` + wiremock
- `crates/bezant-mcp/tests/tools.rs` — in-process MCP round-trip
Cloudflare Zero Trust + Pi (recommended)
Production deployment: Cloudflare Zero Trust + Pi
Real-world IBKR API deploys hit a wall: api.ibkr.com (fronted by
Akamai) rejects connections from cloud datacenter IPs. So your
bezant-server can’t run on Railway / Fly / Render / Heroku and reach the
upstream successfully — IBKR responds 401 to the SSO→CPAPI bridge call,
and every typed API call cascades into 401s.
The pattern that works in 2026:
Your bot (Railway/cloud)
│ HTTPS + Service Token
▼
Cloudflare Zero Trust (Tunnel + Access)
│ Cloudflare Tunnel
▼
Raspberry Pi at home
│ ┌───────────────────────────────┐
│ │ bezant-server (port 8080) │
│ │ CPGateway (port 5000) │
│ └───────────────────────────────┘
│ Cloudflare WARP egress
▼
api.ibkr.com (Akamai)
Why each piece:
- Pi at home — residential ISP IP would also be flagged by Akamai if it weren’t for WARP. (Don’t skip WARP.)
- Cloudflare WARP on the Pi — routes outbound to api.ibkr.com through Cloudflare’s edge IPs, which are reputationally clean.
- Cloudflare Tunnel — exposes the Pi to your bot without opening a port on your router or having a public IP.
- Cloudflare Zero Trust Access — gates the public hostname. Browsers (you) hit an SSO challenge; service-to-service calls (your bot) carry a Service Token in two headers.
One-shot setup
# 1. On the Pi (Raspberry Pi 4/5 with 4GB+ RAM, Pi OS Lite arm64):
sudo apt update && curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
# 2. Install Cloudflare WARP (residential→clean-IP egress):
curl -fsSL https://pkg.cloudflareclient.com/pubkey.gpg | \
sudo gpg --dearmor -o /usr/share/keyrings/cloudflare-warp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/cloudflare-warp-archive-keyring.gpg] \
https://pkg.cloudflareclient.com/ bookworm main" | \
sudo tee /etc/apt/sources.list.d/cloudflare-client.list
sudo apt update && sudo apt install -y cloudflare-warp
warp-cli --accept-tos registration new
warp-cli --accept-tos connect
# 3. Install Cloudflare Tunnel (cloudflared) — get a token from the
# Zero Trust dashboard → Networks → Tunnels → Create:
sudo cloudflared service install <YOUR_TUNNEL_TOKEN>
# 4. Run bezant-combined (CPGateway + bezant-server in one container):
docker run -d --name bezant --restart unless-stopped \
-p 127.0.0.1:8080:8080 \
-e BEZANT_DEBUG_TOKEN="$(openssl rand -hex 32)" \
ghcr.io/isaacrowntree/bezant-combined:latest
Cloudflare dashboard configuration
- Tunnel → add Public Hostname: `bezant.yourdomain.com` → service `HTTP localhost:8080`.
- Access → Applications → add Self-hosted for the same hostname. Add two policies:
  - Browser (Allow): “Emails = [email protected]” — for you to do the IBKR login interactively.
  - Service (Service Auth): generated Service Token — for your bot. Save the Client ID + Secret.
- Your bot calls `https://bezant.yourdomain.com/...` with two headers: `CF-Access-Client-Id: <client-id>.access` and `CF-Access-Client-Secret: <secret>`.
Login flow
You’ll need to do an interactive IBKR login periodically — open
https://bezant.yourdomain.com/sso/Login in a browser, get challenged
by Cloudflare Access SSO, then by IBKR’s own login form, and approve
the 2FA push on your phone. Once that’s done, bezant-server’s
keepalive keeps the session warm and your bot’s API calls succeed.
How often you need to re-login depends on IBKR — community reports range from ~12h to a few days, and IBKR runs nightly maintenance that typically forces a fresh login once per trading day. Don’t assume a hard SLA; design your bot to handle a 401 by surfacing a “needs login” alert rather than crashing.
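That advice boils down to a small decision function. A hypothetical sketch; the names and backoff constants are mine, not part of bezant:

```rust
use std::time::Duration;

/// What the bot should do after a failed call, keyed on the HTTP status.
enum Action {
    AlertNeedsLogin,      // 401: only an interactive login fixes this
    RetryAfter(Duration), // transient upstream/proxy trouble: back off
}

fn next_action(status: u16, attempt: u32) -> Action {
    match status {
        // The Gateway session is gone; retrying only spams IBKR.
        401 => Action::AlertNeedsLogin,
        // Anything else: exponential backoff, capped at 64s.
        _ => Action::RetryAfter(Duration::from_secs(1u64 << attempt.min(6))),
    }
}

fn main() {
    assert!(matches!(next_action(401, 0), Action::AlertNeedsLogin));
    assert!(matches!(next_action(502, 3), Action::RetryAfter(d) if d.as_secs() == 8));
}
```

The point is the asymmetry: a 401 stops the retry loop and pages a human, while a 502 just waits and tries again.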
Security model
- Cloudflare Zero Trust is the primary perimeter. With a correctly configured Access policy, only your email-authenticated browser and your token-authenticated bot can reach the Pi.
- `bezant-server`'s `BEZANT_BIND` defaults to `0.0.0.0:8080` — that's fine behind Cloudflare Tunnel + a `127.0.0.1` Docker port-bind (as in the snippet above). Don't expose 8080 directly to the internet without Zero Trust in front.
- Debug endpoints (`/debug/jar`, `/debug/probe`) are off by default. Setting `BEZANT_DEBUG_TOKEN` enables them, gated by an `X-Bezant-Debug-Token` header (or `?token=…` query). With Zero Trust in front, this is defense-in-depth — leave it off until you've verified your Access policies are tight.
- The shared cookie jar holds live IBKR session cookies. Anyone who can read it can resume the IBKR session and trade your account. bezant-server is single-tenant by design — don't deploy this proxy multi-tenant.
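For flavour, the constant-time comparison that the changelog says gates the debug token looks roughly like this — a sketch of the technique, not bezant's actual implementation:

```rust
/// Minimal constant-time byte comparison: XOR-accumulate so the loop
/// always runs to the end regardless of where the first mismatch occurs,
/// denying an attacker a timing oracle on how many leading bytes matched.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length differs — leaks length, not token bytes
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-token", b"secret-token"));
    assert!(!ct_eq(b"secret-token", b"secret-tokeX"));
}
```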
Docker deployment
The repo ships a docker-compose.yml that brings up the IBKR Client Portal
Gateway and bezant-server together. This is the canonical local setup.
docker compose up
Then:
- https://localhost:5000 — log in to the Gateway once in a browser
- http://localhost:8080/health — sanity-check the sidecar
What the image contains
- `Dockerfile` builds the Rust workspace with `rust:1.89-bookworm`.
- Final runtime image is `gcr.io/distroless/cc-debian12:nonroot` — about 20 MB, no shell, no package manager, minimal attack surface.
- Only the `bezant-server` binary is copied in. No IBKR Gateway inside the image — that runs in the sibling compose service.
Binding
Both services bind to 127.0.0.1 in the compose file:
ports:
- "127.0.0.1:5000:5000" # Gateway
- "127.0.0.1:8080:8080" # bezant-server
Keep it that way in production. The Gateway holds a live IBKR session cookie; the sidecar has no auth in front of it. Reaching either port means trading power. If you need remote access, tunnel through SSH or put a proper auth proxy (oauth2-proxy, caddy + basic auth, cloudflared tunnel) in front.
Image pinning
Pin the Gateway image by digest for reproducible deployments:
image: ghcr.io/gnzsnz/clientportal@sha256:<digest>
Update the pin when IBKR ships a new Gateway release.
Railway / cloud deployment
The Docker compose stack translates cleanly to any container platform. Below are notes for Railway (what we tested against) plus general guidance.
Railway
Split into two services:
- `ib-gateway` — use the `ghcr.io/gnzsnz/clientportal` image, pin by digest. Private networking only; don't expose port 5000 publicly.
- `bezant-server` — build from this repo's `Dockerfile`. Set `IBKR_GATEWAY_URL=https://ib-gateway.railway.internal:5000/v1/api` so it reaches the Gateway over Railway's private network.
You will still need to log in to the Gateway once via a VNC / RDP tunnel to complete the initial IBKR 2FA. The Gateway keeps the session alive after that; Bezant keeps it tickled.
Combined-image deploy (single Railway service)
The ghcr.io/isaacrowntree/bezant-combined image runs CPGateway and
bezant-server together behind one entrypoint, which is what most
single-user deployments want. Two env vars are mandatory when you run
this image behind a public hostname that differs from `localhost:5000`:
| Env var | Required | What it does |
|---|---|---|
| `IBKR_GATEWAY_URL` | yes | Always `https://127.0.0.1:5000/v1/api` for the combined image — bezant-server talks to the in-container Gateway. |
| `PORTAL_BASE_URL` | yes when public hostname ≠ localhost | The full origin (`https://your-host.up.railway.app`) the browser will see. The entrypoint substitutes this into CPGateway's `conf.yaml` at boot. |
**Why it matters.** CPGateway's CPAPI handlers refuse post-login requests with HTTP 401 when the browser-supplied `Origin`/`Referer` don't match the `portalBaseURL` it was configured with. The default empty value works for localhost-to-localhost, but breaks the moment a reverse proxy puts you on a different hostname (Railway, fly.io, ngrok, …). On Railway the entrypoint will fall back to `https://${RAILWAY_PUBLIC_DOMAIN}` automatically; on other platforms set `PORTAL_BASE_URL` explicitly.
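To make the failure mode concrete, here is a simplified model of an origin check like the one described above — illustrative only; `origin_allowed` is not CPGateway's code:

```rust
/// Simplified model: a post-login request is allowed only when the
/// browser-supplied Origin matches the configured portal base URL's
/// origin (scheme + host + port), ignoring any path suffix.
fn origin_allowed(portal_base_url: &str, origin: &str) -> bool {
    let base_origin = portal_base_url
        .trim_end_matches('/')
        .splitn(4, '/') // "https:", "", "host:port", "rest-of-path…"
        .take(3)
        .collect::<Vec<_>>()
        .join("/");
    origin.trim_end_matches('/') == base_origin
}

fn main() {
    // Default localhost config works localhost-to-localhost…
    assert!(origin_allowed("https://localhost:5000", "https://localhost:5000"));
    // …but breaks behind a public hostname until the base URL is updated.
    assert!(!origin_allowed("https://localhost:5000", "https://my-bot.up.railway.app"));
}
```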
Why pass-through
bezant-server’s handlers forward the Gateway’s JSON body verbatim —
they don’t decode into typed Rust structs and re-encode as JSON. Three
reasons:
- No schema drift on the hot path. If IBKR adds a new field to `portfolio/summary`, your consumers see it immediately with zero code changes in bezant.
- Smaller attack surface. Pass-through means the sidecar can't accidentally strip fields or round-trip floats incorrectly.
- Faster. No double decode; just stream bytes.
The typed layer is bezant-api, which we consciously keep separate.
Rust consumers that want typed access link the crate directly; anyone
going over HTTP just needs JSON.
Secrets
- Gateway login — ideally stays in the Gateway image’s config, bound to the account owner’s 2FA device. Don’t paste IBKR passwords into container env vars.
- `RELEASE_PLZ_TOKEN` / `CARGO_REGISTRY_TOKEN` — repo secrets for the GitHub Actions release workflow (optional until we publish to crates.io).
Health checks
The Gateway exposes /v1/api/iserver/auth/status. bezant-server exposes
/health. Hook your platform’s health check to /health — it returns
{"authenticated": true, ...} with HTTP 200 only when IBKR is happy.
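A polling consumer might gate on both pieces of information defensively. A minimal sketch (naive substring match standing in for real JSON parsing; not bezant code):

```rust
/// Sketch of a liveness-probe consumer for /health: treat the sidecar as
/// healthy only on HTTP 200 with "authenticated": true in the body.
fn is_healthy(status: u16, body: &str) -> bool {
    // Collapse whitespace so `"authenticated": true` and
    // `"authenticated":true` both match.
    let compact: String = body.chars().filter(|c| !c.is_whitespace()).collect();
    status == 200 && compact.contains("\"authenticated\":true")
}

fn main() {
    assert!(is_healthy(200, r#"{"authenticated": true, "connected": true}"#));
    assert!(!is_healthy(200, r#"{"authenticated": false}"#));
    assert!(!is_healthy(503, r#"{"authenticated": true}"#));
}
```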
Upgrading the vendored spec
IBKR revises their OpenAPI spec every few weeks. The routine is short.
./scripts/refresh-spec.sh # pulls latest from api.ibkr.com
./scripts/codegen.sh # normalise → upgrade → regenerate bezant-api
./scripts/codegen-ts.sh # regenerate TypeScript client
cargo test --workspace # snapshot tests catch breaking changes
Review the diff
After refresh:
git diff crates/bezant-spec/ibkr-openapi.json | less
Look for:
- New operations — may want to add convenience wrappers to `bezant-core` or CLI subcommands.
- Removed operations — downstream consumers need migration notes; drop mentions in the MCP tool surface and CLI.
- Response shape changes — the snapshot tests catch obvious breakage. Subtle ones (renamed fields, loosened types) may need spec-normaliser tweaks.
When the build breaks
Common cause: the new spec has a quirk scripts/normalize-spec.py doesn’t
handle yet. Workflow:
- Try `cargo run -p xtask -- probe` to see the first error.
- If it's an oas3-gen panic, grep rmcp/typify/progenitor-impl source for the assertion message to understand what's being rejected.
- Add a targeted normalisation step with a clear comment about why.
- Include the new step in `CHANGELOG.md` under "Added".
- Ideally: open an IBKR support ticket for the upstream bug so we can eventually delete the normalisation.
When tests break
The snapshot tests in crates/bezant-core/tests/examples.rs are the
canary. They load real IBKR example payloads and round-trip them through
the generated types. If a test fails:
- If the payload shape is genuinely wrong (e.g. an int field now ships a float), add a normaliser step that widens the type.
- If the payload is fine but the generator produced the wrong type, file a bug against `oas3-gen`.
- If the example itself is corrupt in the spec, file against IBKR and tag the example name in the spec.
Rust API reference (rustdoc)
Every public item in every crate is documented. Three ways to read it:
- Live, browsable: the CI deploys `cargo doc` output alongside this book under `/rustdoc/`. You're probably reading the book at https://isaacrowntree.github.io/bezant/ — jump to `/rustdoc/bezant/` for the facade crate.
- Local: `cargo doc --workspace --no-deps --open`
- docs.rs (once published): will link here once the crates hit crates.io.
Per-crate entry points
- `bezant` — ergonomic facade
- `bezant_api` — auto-generated client
- `bezant_spec` — vendored spec
- `bezant_server` — HTTP sidecar lib
- `bezant_mcp` — MCP tool surface
Conventions
- Every public function has a one-line summary in the first sentence, a longer explanation where useful, and `# Errors` / `# Panics` sections per the Rust API Guidelines.
- Examples compile as doctests (try `cargo test --workspace --doc`).
- Generated crates (`bezant-api`) have docs auto-derived from the spec's `description` fields — so the docstrings IBKR writes ship unchanged to docs.rs.
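A hypothetical item following these conventions (illustrative — not a real bezant API function):

```rust
/// Parses a conid from its decimal string form.
///
/// Conids are positive integers; IBKR never issues zero or negative
/// ones, so those are rejected here.
///
/// # Errors
///
/// Returns `Err` if `s` is not a positive decimal integer.
fn parse_conid(s: &str) -> Result<u64, String> {
    match s.trim().parse::<u64>() {
        Ok(0) => Err("conid must be positive".into()),
        Ok(n) => Ok(n),
        Err(e) => Err(e.to_string()),
    }
}

fn main() {
    assert_eq!(parse_conid("265598"), Ok(265598));
    assert!(parse_conid("0").is_err());
}
```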
Roadmap
v0.1 — alpha ✅ shipped (2026-04-21)
End-to-end rebalancing-bot use case.
- Vendor + normalise IBKR OpenAPI spec (`bezant-spec`)
- Codegen all 154 CPAPI endpoints via oas3-gen (`bezant-api`)
- Ergonomic facade: Client, auth, keepalive, health (`bezant-core`)
- HTTP sidecar exposing the facade over REST (`bezant-server`)
- Docker image bundling IBKR Gateway + bezant-server
- WebSocket client with cookie auth + typed subscribe helpers
- Pagination helpers + symbol → conid cache
- Tracing instrumentation across the facade
- CLI (`bezant-cli`) + MCP server (`bezant-mcp`) + TypeScript client
- Snapshot tests driven by spec example payloads
- GitHub Actions CI (fmt, clippy, test, MSRV, audit, multi-arch Docker)
- Dual MIT / Apache-2.0 license
v0.2 — production hardening ✅ shipped (2026-05-03)
Goal: deployable to a real production trading bot, not just localhost dev.
- Cloudflare Zero Trust + residential-Pi deploy guide — bypasses IBKR's Akamai datacenter-IP rejection, the silent killer of cloud-hosted CPAPI deploys
- `NameKeyedJar` cookie store — replaces reqwest's path-aware jar to fix duplicate `JSESSIONID` accumulation that CPGateway rejects
- Edge-cookie filter — drops `CF_Authorization` / `CF_AppSession` / AWS ALB / OAuth2 Proxy / Pomerium / Vercel cookies before they poison the upstream call (Akamai 401s on unrecognised cookies)
- `/debug/probe` + `/debug/jar` diagnostics, gated by `BEZANT_DEBUG_TOKEN` (constant-time compare, names + lengths only, never raw values)
- Strip `Authorization` / `X-Forwarded-*` / `Forwarded` / `X-Real-IP` at the proxy boundary
- Multi-arch Docker builds on native arm64 GitHub runners (~5 min vs ~20 min QEMU)
v0.3 — typed surface + observability ✅ shipped (2026-05-03)
Goal: library-quality ergonomics + production-debuggable runtime.
- 11 typed `Error` variants replacing `Error::Other(String)` — `UpstreamStatus`, `Unknown`, `UrlNotABase`, `MissingQuery`, `Header`, `SymbolNotFound`, `BadConid`, `WsHandshake`, `WsTransport`, `WsProtocol`, `ResponseBuild`
- `Error::is_retryable()` for backoff loops
- `bezant::prelude` for the typical bot use case
- `#[non_exhaustive]` on `AuthStatus` + `TickleResponse` so future fields aren't SemVer breaks
- Per-request correlation IDs (`SetRequestIdLayer` + `PropagateRequestIdLayer`) + handler `#[tracing::instrument]` + keepalive task span
- Graceful shutdown (SIGTERM/SIGINT drain + awaited `keepalive.stop()`) + `ConcurrencyLimitLayer(256)` + reqwest pool tuning (`pool_max_idle_per_host`, `tcp_keepalive`, `connect_timeout`, `pool_idle_timeout`)
- `KeepaliveHandle::Drop` sends shutdown signal so a forgotten handle doesn't keep tickling
- WebSocket `Subscription` handle — RAII cancel via `Subscription::cancel(&mut ws).await` instead of caller-tracked conids; `WsClient::split` returns concrete `WsSink`/`WsRecv`; `WsMessage::topic()`/`as_value()` accessors
- `/debug/probe` per-step timeout (5s) + body-preview redaction (`session`/`token`/`secret` keys) + non-destructive ssodh skip
- `bezant-cli --output {json,table}` + `quote SYMBOL` + `orders ACCOUNT` + cap warning on `MAX_POSITION_PAGES`
- 14 spec-normaliser invariant tests + CI drift-check job
- Published to crates.io at v0.3.0
post-0.3 (unreleased) — events observability ✅ shipped (2026-05-06)
Goal: capture every order, fill, rejection, PnL update, and (per-conid) market-data tick the upstream WebSocket sees, and expose them via cursor-paginated REST so polling consumers don’t lose events between strategy ticks.
- `bezant-server` events module — internal `bezant::WsClient` consumer with reconnect + heartbeat-timeout, per-topic ring buffers (`orders`, `pnl`, `marketdata:<conid>`, `gap`), `reset_epoch`/cursor wire semantics so clients can detect gaps
- `/events/*` REST surface — `orders`, `pnl`, `marketdata`, `gap`, `_status` endpoints with 200 / 204 / 412 cursor outcomes
- Lazy market-data subs — `/events/marketdata?conid=N` ref-counts upstream `smd+<conid>+{}` subscribes; re-establishes across WS reconnects
- Optional sqlite history — `BEZANT_EVENTS_DB_PATH` mirrors every captured event into `events(...)`, served via `/events/{topic}/history?since_ts=…`. Per-topic retention with hourly prune (orders/pnl 90d, marketdata 14d, gap 365d)
- `WsClient::connect` honours `accept_invalid_certs` — fixes reconnect loop against the Gateway's expired self-signed cert
- `pump_until_ready` — waits for CPAPI's `system+success` frame before subscribing; CPAPI silently drops pre-ready subscribes
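A polling client can detect dropped events from the epoch/cursor semantics described above. A sketch under those assumptions — the helper itself is hypothetical client code, not bezant API:

```rust
/// Gap detection against cursor-paginated event reads: cursors are
/// monotonic within an epoch, and the epoch bumps on reconnect (a ring
/// buffer restart), so either signal means events may have been lost.
#[derive(Debug, PartialEq)]
enum PollOutcome {
    Continuous,  // next batch starts right after our last cursor
    GapDetected, // epoch bumped or cursors skipped: resync needed
}

fn check_continuity(
    last_epoch: u64,
    last_cursor: u64,
    batch_epoch: u64,
    batch_first_cursor: u64,
) -> PollOutcome {
    if batch_epoch != last_epoch || batch_first_cursor != last_cursor + 1 {
        PollOutcome::GapDetected
    } else {
        PollOutcome::Continuous
    }
}

fn main() {
    // Same epoch, contiguous cursors: nothing missed.
    assert_eq!(check_continuity(1, 41, 1, 42), PollOutcome::Continuous);
    // Epoch bumped after a reconnect: the buffer restarted.
    assert_eq!(check_continuity(1, 41, 2, 0), PollOutcome::GapDetected);
}
```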
v0.4 — feature flags + auto-reconnect 🔭 planned
Goal: smooth out remaining rough edges; expand for non-Rust ecosystems.
Library
- Feature flags on `bezant-core` (`ws`, `keepalive-tokio`) so callers don't pay for tokio-tungstenite if they only want REST
- Async runtime decoupling — `spawn_keepalive` accepts a runtime handle so async-std / smol consumers can use the crate
- `bezant::ws::TickerManager` — auto-reconnect on disconnect, re-subscribes existing topics, exposed as a background actor
- Retry middleware with exponential backoff on `is_retryable()`
- Typed error variants for common IBKR failure modes (insufficient funds, market closed, restricted account)
MCP + ecosystem
- `bezant-mcp` market data + orders tools (currently read-only), gated behind `--allow-orders` so registration itself is opt-in
- MCP resources for accounts/positions so Claude can include state in context without explicit tool calls
- Python bindings via pyo3 — `pip install bezant` for quant scripts
Robustness
- Live-account integration tests gated behind a feature flag, opt-in via env var
- OAuth 1.0a / 2.0 auth when IBKR opens it to retail accounts
- Anyhow-free `bezant-core` — redrive `helpers.rs`/`auth.rs` off the generated client's typed Result so anyhow can become optional
v1.0 — stable
- Stable public API. SemVer discipline.
- Production-grade docs + examples for every surface.
- Reference rebalancing bot as a published companion crate.
- Options / futures / forex / fixed income convenience builders.
Contributing
PRs welcome. If you hit a new spec quirk that isn’t in
scripts/normalize-spec.py, please open an issue with the failing operation
ID or schema name and ideally the minimal reproducer — that lets us expand
both the normaliser and the upstream bug report against IBKR.
Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
Added
- `/events/*` capture surface on `bezant-server`. Optional internal CPAPI WebSocket consumer (`BEZANT_EVENTS_ENABLED=true`) that drives `bezant-core::WsClient` against `/v1/api/ws`, decodes order/PnL/market-data frames, and serves them via cursor-paginated REST: `/events/orders`, `/events/pnl`, `/events/marketdata?conid=…`, `/events/gap`, `/events/_status`. Per-topic ring buffers (`BEZANT_EVENTS_{ORDERS,PNL,MARKETDATA}_CAP`) bound the in-memory footprint; reads return 200/204/412 with monotonic cursors and a `reset_epoch` that bumps on reconnect so consumers can detect gaps.
- Optional sqlite history. `BEZANT_EVENTS_DB_PATH` mirrors every captured event into a sqlite log (`events(id, cursor, topic, received_at, reset_epoch, payload)`) served via `GET /events/{topic}/history?since_ts=…`. Per-topic retention (orders/pnl 90d, marketdata 14d, gap 365d, default 30d) trimmed hourly by a background prune task.
- Lazy market-data subscriptions. `/events/marketdata?conid=N` ref-counts upstream `smd+<conid>+{}` subscriptions on first poll; re-establishes them across WS reconnects.
Fixed
- `bezant-core::WsClient::connect` honours `accept_invalid_certs`. Previously the WS handshake used tokio-tungstenite's default rustls verifier, which rejected the Gateway's expired self-signed cert even when the underlying `Client` had opted into accepting it on the REST side. The connector's reconnect loop would spin forever on `verify hostname/certificate expired` errors. Now `WsClient::connect` reads `Client::accepts_invalid_certs()` and installs a permissive rustls verifier when set, matching reqwest's behaviour.
- CPAPI subscribe-pre-ready quirk. The events connector waits for CPAPI's initial `system+success` "ready" frame before sending `sor+{}` / `spl+{}` subscribes. Without this gate, subscribes sent immediately after the WS handshake were silently discarded by CPAPI and order/PnL frames never broadcast (only heartbeats). A 5s timeout falls through to a best-effort subscribe + warn so a Gateway that never sends ready still tries.
[0.3.0] — 2026-05-03
The “polish before crates.io” release. v0.2 hardened the proxy and deploy pattern; v0.3 promotes the typed surface, observability, and ergonomic gaps that survived. Six commits, organised into five phases:
Added
- Typed `Error` variants. ~25 `Error::Other(String)` call sites promoted into 11 typed variants: `UpstreamStatus { endpoint, status, body_preview }`, `Unknown`, `UrlNotABase`, `MissingQuery`, `Header`, `SymbolNotFound`, `BadConid`, `WsHandshake`, `WsTransport`, `WsProtocol`, `ResponseBuild`, plus a structured `Decode { endpoint, status, message }`. Callers can branch on the cause for retry / recovery instead of substring-matching strings.
- `Error::is_retryable()` — backoff loops can decide on a typed predicate. Transient transport errors, upstream 5xx, 429, NoSession and WS transport are flagged retryable; everything else (caller input, auth, data-shape) is not.
- `bezant::prelude` module re-exports the common surface (`Client`, `Result`, `Error`, `SymbolCache`, `KeepaliveHandle`, `AuthStatus`, `TickleResponse`, `Position`). `use bezant::prelude::*;` for the typical bot use case.
- Per-request correlation IDs. `tower_http::request_id::SetRequestIdLayer` + `PropagateRequestIdLayer`; UUID minted per request, echoed in the response, recorded in the `http` parent span.
- `#[tracing::instrument]` on every typed handler in `bezant-server::routes`, plus the keepalive task gets its own `bezant_keepalive` span via `tracing::Instrument`.
- Graceful shutdown. `axum::serve(...).with_graceful_shutdown(shutdown_signal())` drains in-flight requests on SIGTERM/SIGINT, then explicitly awaits `keepalive.stop()` so the tickle task closes cleanly.
- `tower::limit::ConcurrencyLimitLayer(256)` caps simultaneous handlers — a misbehaving caller can't exhaust upstream connections or get the IBKR account locked by hammering rate limits.
- `KeepaliveHandle` impl `Drop` — sends the shutdown signal so a forgotten handle doesn't keep tickling forever. Doc previously claimed "drop-to-stop" but the impl wasn't there.
- WebSocket `Subscription` handle. `WsClient::subscribe_*` now return a `Subscription` that callers cancel via `Subscription::cancel(&mut ws).await` — no more tracking (topic, conid) pairs by hand. `cancel_payload()` exposes the raw bytes for callers using `WsClient::split` halves.
- `WsMessage::topic()` + `as_value()` accessors for routing on message type without pattern-matching every variant.
- `WsClient::split` returns concrete `WsSink`/`WsRecv` type aliases (`futures_util::SplitSink`/`SplitStream` over the TLS stream) — callers can store the halves in struct fields without `Box<dyn …>`.
- `bezant-cli --output {json,table}` flag with `comfy-table` rendering for tabular endpoints (accounts, summary, positions, orders, health, quote). Non-tabular endpoints fall back to pretty-printed JSON.
- `bezant quote SYMBOL` subcommand — symbol → conid via cache → snapshot for default level-1 fields.
- `bezant orders ACCOUNT` subcommand — live + recently-filled orders; normalises both `{"orders":[...]}` and bare-array Gateway shapes.
- `bezant-spec` post-normalisation invariant tests — 14 Rust tests pin the postconditions each of the 13 Python normaliser steps establishes. CI `spec-normalise-drift` job re-runs the Python normaliser against the vendored output and asserts byte-identical output (enforces idempotency permanently).
Changed
- Reqwest pool tuning. `connect_timeout(5s)` (so a dead Gateway surfaces fast for liveness probes), `pool_max_idle_per_host(32)` (was unbounded; leak risk under bursty traffic), `pool_idle_timeout(90s)`, `tcp_keepalive(30s)`.
- `AuthStatus` and `TickleResponse` marked `#[non_exhaustive]` so adding a field in a point release isn't a SemVer break.
- `ClientBuilder::default()` returns a builder pointed at `DEFAULT_BASE_URL` for the most common case.
- `reqwest::StatusCode` re-exported from `bezant-core` so callers using `Client::http()` don't need `reqwest` in their own `Cargo.toml`.
- `AppError::into_response` logs every mapped 4xx/5xx at `warn!`/`error!` so production debuggability doesn't depend on every handler emitting its own span event. Branches on `reqwest::Error::is_timeout()`/`is_connect()` for distinct 504 / 503 / 502 status codes.
- `/debug/probe` per-step `tokio::time::timeout(5s)` — a hung Gateway no longer takes the whole probe with it.
- `/debug/probe` body_preview redacts `session`, `ssoConclusion`, and any key containing `token`/`secret` (case-insensitive) before exposing them. Prevents debug-token holders from scraping live IBKR session material via the probe surface.
- `bezant-cli` deprecates `--reject-invalid-certs` in favour of `BEZANT_VERIFY_TLS` (matches `bezant-server`). The double negative made it easy to leave invalid-cert acceptance on in production.
- `bezant-cli` `paginated_positions` emits a stderr warning when `MAX_POSITION_PAGES` is hit so the caller knows results may be truncated. Silently hitting the cap was a coverage gap.
Tests
- Total 132+ tests across the workspace (was 97 at v0.2 release):
  - 5 inline error tests (`Send + Sync`, `is_retryable` matrix, Display formatting).
  - 2 keepalive tests (`stop` cleanly, `Drop` sends signal).
  - 4 redaction tests (token-key fields, nested objects, non-JSON pass-through).
  - 3 WS message accessor + Subscription round-trip tests.
  - 14 spec-normaliser post-condition tests.
  - 4 new CLI tests (quote, orders, `--output table` table form, `--output table` JSON fallback).
Security
- Bearer/Basic `Authorization` headers no longer forwarded to CPGateway by `passthrough_any`. CPGateway doesn't consume them; forwarding is pure attack surface.
- Caller-controlled `X-Forwarded-*` / `Forwarded` / `X-Real-IP` no longer forwarded — caller could otherwise spoof their apparent source IP downstream.
- `TraceLayer`'s span records request path, not URI — the URI carries `?token=…` for `/debug/*` calls and we don't want it in span fields / log shippers.
[0.2.0] — 2026-05-03
This release hardens the production deploy story: a residential-Pi + Cloudflare Zero Trust + WARP pattern that bypasses IBKR’s Akamai datacenter-IP rejection. See the new “Production deployment” section in the README.
Added
- `/debug/probe` diagnostic endpoint walks `auth/status` → `ssodh/init` → `tickle` → `portfolio/accounts` against the Gateway and pins the first diverging step in a top-level `verdict` (`ok`, `auth_status_failed`, `needs_login`, `ssodh_failed`, `tickle_failed`, `accounts_failed`). Skips `ssodh_init` when the session is already bridged so the probe is non-destructive.
- `/debug/jar` lists shared cookie-jar entries by name + value length (never raw values).
- `BEZANT_DEBUG_TOKEN` env var gates both `/debug/*` endpoints. Off → 404; on → callers must present the token via `X-Bezant-Debug-Token` header or `?token=…` query string. Constant-time comparison against the configured token.
- `BEZANT_VERIFY_TLS` flips on Gateway TLS cert verification (defaults to off because the Gateway ships with a self-signed cert). Replaces the double-negative `BEZANT_REJECT_INVALID_CERTS` whose env-var bool parsing was a footgun.
- `BEZANT_EDGE_COOKIE_PREFIXES` allows extending the built-in edge-cookie filter (Cloudflare Access, AWS ALB OIDC, OAuth2 Proxy, Vercel, Pomerium) with custom prefixes for bespoke Zero-Trust fronts.
- Per-arch native Docker builds (`ubuntu-24.04-arm` for arm64) cut multi-arch image build time from ~20min to ~5min by skipping QEMU emulation. Manifests stitched in a merge job.
Changed
- `bezant-server` proxy now strips the full RFC 7230 §6.1 hop-by-hop header set on both request and response sides, plus `authorization` and `x-forwarded-*`/`forwarded`/`x-real-ip` (caller-controlled client-IP claims that CPGateway doesn't consume).
- Cloudflare Access cookies (`CF_Authorization`, `CF_AppSession`) are filtered out of inbound cookie replay so they never reach IBKR upstream — Akamai 401s the SSODH bridge call when it sees an unrecognised cookie alongside the IBKR session cookies. Generalised to a built-in prefix list covering the major Zero-Trust providers.
- `passthrough_any`'s upstream body read is now capped at 25 MiB via a streaming counter (was unbounded; OOM risk under a hostile upstream). Inbound side is capped at 10 MiB declaratively via `RequestBodyLimitLayer`.
- `bezant-server` main.rs now stacks production middleware: `TimeoutLayer(35s)` (> reqwest's 30s), `RequestBodyLimitLayer(10MiB)`, and a privacy-preserving `TraceLayer` whose spans record the request path, never the URI (to avoid logging `?token=…` query strings).
- `forward()`'s empty-body fallback for upstream chunked-decode errors is scoped to 1xx/204/304/3xx; on 2xx/4xx/5xx a decode failure surfaces as a real upstream error.
- Content-Type rewrite + missing-Content-Type default no longer fire on responses where the body must be empty (RFC 9110 §8.3) nor on empty-body 2xx/4xx/5xx responses.
- Cookie-injection log demoted from `info!` to `debug!`; path query string stripped from log lines.
- `bezant-core` adds `Error::BadRequest(String)` for caller-input failures; `bezant-server` maps it to HTTP 400 instead of 500.
- `Error::Decode` carried by `auth_status` now includes the offending URL and HTTP status alongside the serde error.
- Probe verdict logic now reads the full auth_status body (not the 512-byte preview) to decide `_authenticated`, so a response whose `authenticated` field lands past the preview window doesn't silently trigger the destructive ssodh path.
- Cargo packaging metadata: `documentation` key on every published crate, per-crate `LICENSE-MIT`/`LICENSE-APACHE` files (cargo publish only includes per-crate dirs), `[lints] workspace = true` on every member, `include` directive on `bezant-spec` to control package size.
Fixed
- `forward()`'s `had_content_type` flag was set before the response header was appended; if `HeaderValue::from_bytes` rejected the upstream value the response went out with no Content-Type at all.
- Multiple `Set-Cookie` headers from the Gateway now round-trip reliably. `forward()` no longer relies on `(StatusCode, HeaderMap, Vec<u8>)`'s `IntoResponse` adapter, which unconditionally inserted `application/octet-stream`.
Security
- HIGH: `/debug/jar` no longer returns raw cookie values unauthenticated. The cookie jar holds live IBKR session cookies; an attacker reaching the bind address could resume the IBKR session and trade the account. Now name + value-length only, gated by `BEZANT_DEBUG_TOKEN`.
- MEDIUM: Bearer/Basic `Authorization` headers no longer forwarded to CPGateway. CPGateway doesn't consume them; forwarding lets a caller probe whatever auth scheme upstream might (incorrectly) honour.
- MEDIUM: Caller-controlled `X-Forwarded-For`/`Forwarded`/`X-Real-IP` no longer forwarded — caller could spoof their apparent source IP to anything that audits on those headers downstream.
Tests
- 38 wiremock-driven integration tests in `crates/bezant-server/tests/routes.rs` covering the regressions above plus probe verdict matrix, debug-token gating (404/401/header/query/length-only), Cloudflare-cookie filtering, multi-cookie replay, hop-by-hop strip, 5xx propagation, and Content-Type-on-204 RFC compliance. All wiremock-driven, no IBKR involvement.
[0.1.0] — 2026-04-22
Initial public release.
Added — crates
- `bezant-spec` — vendored IBKR Client Portal Web API OpenAPI spec (3.0 source + 3.1-upgraded derivative) + 13-step normaliser + refresh tooling.
- `bezant-api` — auto-generated Rust client for 155 CPAPI paths (167 typed methods, 1030 types) emitted by `oas3-gen` from the normalised 3.1 spec.
- `bezant` (from `bezant-core`) — ergonomic async facade with `Client`, auth, keepalive, health, pagination, `SymbolCache`, and `WsClient` WebSocket streaming (cookie auth reused from the REST session, typed subscribe helpers for market data / orders / PnL).
- `bezant-server` — axum HTTP sidecar exposing the facade over plain REST for consumers in any language, with a catch-all passthrough for the Gateway's interactive login.
- `bezant-cli` — command-line tool wrapping the facade (`bezant health`, `bezant accounts`, `bezant positions`, `bezant conid`, `bezant tickle`).
- `bezant-mcp` — Model Context Protocol server exposing CPAPI as tools for LLM clients (Claude Desktop, Cursor, Continue, …).
- TypeScript client generated via `openapi-generator-cli` from the same spec for Node / Deno / browser consumers.
- Combined Docker image (`docker/combined/`) that runs the Gateway and `bezant-server` together behind a tini-supervised entrypoint for single-service deploys (Railway, fly.io, bare VMs). Standalone images for each process are also published.
Added — ergonomics
- `Client::spawn_keepalive` — drop-to-stop background task tickling `/tickle` so the 5-minute CPAPI session never expires.
- `Client::auth_status` + `Client::health` — typed distinction between `NotAuthenticated`, `NoSession`, and generic errors (`auth_status` also translates the Gateway's pre-login 302 redirect — the spec claims 401 but the real Gateway never emits it).
- `Client::all_positions` — auto-paginated positions across all pages.
- `Client::cookie_jar()` — exposes the shared reqwest cookie jar so reverse proxies can inject incoming browser cookies and keep typed API calls authenticated.
- `#[tracing::instrument]` spans across every facade method.
Added — repo / release hygiene
- Runnable examples under `crates/bezant-core/examples/` — `health`, `list_positions`, `stream_quotes` — copy-paste ready against the bundled Docker gateway via env vars.
- `[package.metadata.docs.rs]` on every library crate — docs.rs builds with `--cfg docsrs` for future feature-gate markers.
- Centralised lint floor via `[workspace.lints]` — `unsafe_code = forbid`, `missing_docs = warn`, `rust_2018_idioms`/`unreachable_pub` on warn — inherited by every hand-written crate.
- CI: fmt, clippy (warnings as errors), tests on stable + beta (ubuntu + macOS), MSRV check at Rust 1.89, TypeScript client build, `cargo-deny` audit, docs build to GitHub Pages.
- Snapshot tests driven by real IBKR example payloads in the spec.
- 34 tests across the workspace (unit, integration, snapshot).
Notes
- MSRV: Rust 1.89 (driven by transitive deps — `oas3-gen-support`, `progenitor`, `serde_with`, `time`).
- Rust codegen pivoted from `progenitor` to `oas3-gen` after `progenitor` produced 49 compile errors on IBKR's spec; `oas3-gen` builds cleanly after the 13-step normalisation pipeline.
- Dual MIT / Apache-2.0 licensing throughout; the vendored IBKR spec itself remains IBKR's IP and is included under fair-use conventions for interoperability.
Contributing to bezant
Thanks for your interest in bezant! This is an early-stage OSS project, so contributions of every size are welcome — from typos in the docs to auto-generating entire new client SDKs from the spec.
Quick start
git clone https://github.com/isaacrowntree/bezant
cd bezant
cargo test --workspace
./scripts/codegen.sh # regenerate Rust client from the spec
./scripts/codegen-ts.sh # regenerate TypeScript client
You’ll need:
- Rust 1.89+ (install via rustup)
- Python 3.9+ (for the spec normaliser)
- Java 17+ (for the TypeScript codegen via `openapi-generator-cli`)
- `oas3-gen` — `cargo install oas3-gen`
- jq — for the spec-refresh script (optional; only needed if you re-download the spec)
Repository layout
crates/
bezant-spec/ — vendored IBKR OpenAPI spec + refresh tooling
bezant-api/ — auto-generated Rust client (don't edit by hand)
bezant-core/ — ergonomic facade (hand-written)
bezant-server/ — axum HTTP sidecar (hand-written)
bezant-cli/ — CLI wrapping the facade
bezant-mcp/ — Model Context Protocol server
clients/
typescript/ — auto-generated TS client
scripts/
refresh-spec.sh — pull latest spec from api.ibkr.com
normalize-spec.py — work around IBKR spec quirks
upgrade-to-3.1.py — OAS 3.0 → 3.1 upgrade
codegen.sh — normalise → upgrade → oas3-gen
codegen-ts.sh — openapi-generator-cli → TS client
extract-examples.py — collect spec examples for snapshot tests
xtask/ — dev-only tools (spec probing, bisection)
docs/ — mdbook source for the docs site
Pull-request checklist
Before opening a PR:
- `cargo fmt --all` — no formatting delta
- `cargo clippy -p bezant-core -p bezant-spec --all-targets -- -D warnings` (the generated crates have warnings we intentionally silence)
- `cargo test --workspace`
- Docs updated if you changed public API surface
- Added a test (unit, integration, or snapshot) for any behaviour change
Spec changes
When IBKR updates their OpenAPI:
1. `./scripts/refresh-spec.sh` — pulls the latest spec from api.ibkr.com.
2. `./scripts/codegen.sh` — re-normalises and regenerates `bezant-api`.
3. `cargo test --workspace` — snapshot tests catch breaking response shapes.
4. Commit `crates/bezant-spec/ibkr-openapi*.json` and the regenerated `crates/bezant-api/src/generated/` tree.
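The snapshot idea in step 3 boils down to comparing the structural shape of a response against a stored fixture, so a new or retyped field fails the test even when the values happen to match. A minimal sketch of that idea (the `shape` helper here is illustrative, not the repo's actual harness):

```python
def shape(value):
    """Reduce a JSON value to its structural shape: dicts keep their
    keys, lists collapse to the shape of their first element, and
    scalars become type names."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        return [shape(value[0])] if value else []
    return type(value).__name__

# A field added by a spec refresh changes the shape, so the comparison
# against the committed snapshot fails loudly.
old = {"acctId": "DU123", "cash": 1000.5}
new = {"acctId": "DU123", "cash": 1000.5, "currency": "USD"}
assert shape(old) != shape(new)
```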
If the codegen starts failing after a spec refresh, the first place to look
is scripts/normalize-spec.py — we handle 13+ upstream quirks there and
new spec versions sometimes add more.
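To give a flavour of what a quirk fix in `normalize-spec.py` looks like, here is a hypothetical one: rewriting an invalid `type: "float"` (where JSON Schema wants `type: "number"` plus a `format`). The quirk shown is illustrative, not necessarily one of the 13+ the script actually handles:

```python
def fix_scalar_types(node):
    """Hypothetical normaliser pass: rewrite invalid `type: "float"` /
    `type: "double"` values into the valid `type: "number"` + `format`
    pair, recursively over the whole spec document."""
    if isinstance(node, list):
        return [fix_scalar_types(n) for n in node]
    if not isinstance(node, dict):
        return node
    node = {k: fix_scalar_types(v) for k, v in node.items()}
    if node.get("type") in ("float", "double"):
        node["format"] = node["type"]   # preserve the precision hint
        node["type"] = "number"
    return node

print(fix_scalar_types({"type": "float"}))
# → {'type': 'number', 'format': 'float'}
```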
Reporting bugs
Open an issue with:
- Your environment (`rustc --version`, OS, Gateway version)
- A minimal repro
- Whether this is a bezant bug or a suspected IBKR spec bug (they’re sometimes hard to tell apart — share the spec version from `bezant_spec::UPSTREAM_VERSION`)
Security
See SECURITY.md for the disclosure policy. Short version: do not open a public issue for vulnerabilities. Email the maintainer.
Security Policy
Supported Versions
Only the latest 0.x is receiving patches at this stage. Once we hit 1.0
we’ll maintain the previous minor release alongside.
Reporting a Vulnerability
Please do not open a public GitHub issue for security vulnerabilities.
Instead, email [email protected] with:
- A description of the vulnerability
- Steps to reproduce
- Any relevant versions (Rust toolchain, Gateway build, IBKR spec version)
- Your preferred disclosure timeline
You’ll get an acknowledgement within 72 hours. From there we’ll triage, produce a fix in a private branch, coordinate a disclosure date with you, and ship a patched release with an advisory (CVE requested where appropriate).
What counts
Bezant touches live brokerage accounts, so the blast radius of a bug can be large. We treat the following as in-scope:
- Authentication / session handling bugs that could leak credentials or let a malicious caller hijack an authenticated Gateway
- Injection vectors (URL, header, JSON) that let a caller reach unintended endpoints on the Gateway
- Data-integrity bugs in the spec normaliser that could turn a read-only tool call into a write
- Any way to bypass the deliberate feature gating around order placement
Out of scope:
- Rate-limit bypasses against IBKR itself (report those to IBKR)
- TLS issues in self-signed mode (we document that this is intentional for local dev against the Gateway’s default cert)
Disclosure policy
Responsible disclosure gets full credit in the release notes and CVE. We do not currently offer a paid bounty; if that changes the policy here will be updated.
License
bezant is dual-licensed under your choice of:
- Apache License, Version 2.0
- MIT License

following the standard Rust ecosystem convention. Pick whichever works better for your project; contributors agree their contributions are available under the same terms.
Third-party code
- The vendored IBKR OpenAPI spec under `crates/bezant-spec/` is Interactive Brokers’ intellectual property. It’s included here under fair-use conventions for API interoperability and is not covered by the Bezant license. If you redistribute Bezant, include the spec as-is and don’t modify it outside of the documented normalisation pipeline.
- The auto-generated Rust code under `crates/bezant-api/src/generated/` is produced from the vendored spec by `oas3-gen`. We consider the generated code to be under Bezant’s dual license; the IBKR-authored descriptions within rustdoc comments stay with IBKR.
Not affiliated with Interactive Brokers
Bezant is an independent open-source project. Trading involves substantial risk; this software is provided without warranty. See the license text.