Edgee AI Gateway is still under active development. We’re building fast, shipping incrementally, and writing docs in parallel.
If you need something more mature today, jump to the Edgee Proxy documentation.
## What the AI Gateway focuses on
These are the core capabilities we’re designing the gateway around:

**Unified API**
One integration that can route across providers without rewriting your application logic.
**Routing & reliability**
Policy-based routing, fallbacks, and safer failure modes when providers rate-limit or degrade.
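To make the "one integration" and fallback ideas concrete, here is a minimal client-side sketch. The endpoint URL, request shape, and routing options are illustrative assumptions for this example, not the shipped gateway API; the point is that application code makes a single call and the gateway decides which provider actually serves it.

```ts
// Illustrative sketch only: the URL, request fields, and routing options below
// are assumptions for this example, not the gateway's documented API.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface GatewayChatRequest {
  model: string;               // logical model name the gateway resolves to a provider
  messages: ChatMessage[];
  routing?: {
    providers: string[];       // preference order to try
    fallbackOnError: boolean;  // fail over when a provider rate-limits or degrades
  };
}

async function chat(req: GatewayChatRequest): Promise<unknown> {
  // A single HTTP integration point; provider selection happens in the gateway.
  const res = await fetch("https://YOUR_GATEWAY_URL/v1/chat", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.EDGEE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Gateway request failed: ${res.status}`);
  return res.json();
}

// Swapping or reordering providers is a routing change, not an application rewrite.
const reply = await chat({
  model: "general-chat",
  messages: [{ role: "user", content: "Summarize this support ticket." }],
  routing: { providers: ["openai", "anthropic"], fallbackOnError: true },
});
```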
**Observability & cost**
The signals you need to run production AI: latency, errors, usage, and cost — exportable and actionable.
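As a sketch of the per-request signal we have in mind (the field names here are assumptions, not a documented schema), this is the kind of record you could export to your own metrics or billing pipeline:

```ts
// Illustrative shape only: field names are assumptions, not a documented schema.
interface RequestSignal {
  provider: string;        // which provider ultimately served the request
  model: string;
  latencyMs: number;
  ok: boolean;             // false on errors, rate limits, or timeouts
  promptTokens?: number;
  completionTokens?: number;
  costUsd?: number;        // estimated cost attributed to this request
}

// Forward each signal to whatever sink you already use.
function exportSignal(signal: RequestSignal): void {
  console.log(JSON.stringify(signal)); // stand-in for your metrics/billing pipeline
}
```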
**Privacy controls**
Configurable logging and retention, plus provider-side ZDR where available.
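A hedged sketch of what "configurable logging and retention" could look like; the option names are illustrative assumptions, not the gateway's actual settings:

```ts
// Illustrative sketch: option names are assumptions, not the gateway's real settings.
const privacyPolicy = {
  logPromptBodies: false,   // keep metadata (latency, status, usage) but not prompt text
  retentionDays: 7,         // purge stored request records after a week
  preferProviderZdr: true,  // use provider-side zero data retention where offered
};
```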
## What to expect (right now)
- A product in progress: features and APIs may evolve as we learn from real production use cases.
- Clear defaults, configurable controls: the goal is to reduce “LLM glue code” while keeping you in charge.
- Docs expanding quickly: each feature page will get deeper guides, examples, and best practices as we ship.
## Looking for the mature platform?
Edgee Proxy has a large set of production-ready capabilities and much deeper documentation today:

- Start here: Edgee Proxy overview
- Implementation guides: Proxy getting started