AI Gateway feature
Edge Tools
In progress
Give your LLM calls real capabilities without hard-coding tool glue everywhere. Run shared tools we host, or deploy private tools at the edge.
We’re building the tool runtime and permissions model. Early design partners welcome.
Capabilities
- Shared tools operated by Edgee (common primitives)
- Private tools you deploy (per org)
- Tool allowlists/permissions and audit trails
- Consistent errors + timeouts + retries across providers
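A per-org allowlist with an audit trail could be sketched as below. This is a hedged illustration of the capability, assuming a simple policy object; `ToolPolicy`, `authorize`, and `audit_log` are hypothetical names, not Edgee's actual API.

```python
# Hypothetical sketch of the allowlist + audit-trail capability.
# All names here are illustrative assumptions, not Edgee's real schema.
from dataclasses import dataclass, field

audit_log: list[dict] = []  # append-only record of every authorization check

@dataclass
class ToolPolicy:
    org: str
    allowed_tools: set[str] = field(default_factory=set)

def authorize(policy: ToolPolicy, tool_name: str) -> bool:
    """Check the org's allowlist and record the decision for auditing."""
    allowed = tool_name in policy.allowed_tools
    audit_log.append({"org": policy.org, "tool": tool_name, "allowed": allowed})
    return allowed

policy = ToolPolicy(org="acme", allowed_tools={"web_search", "get_weather"})
```

Recording the decision at the same point it is made keeps the audit trail complete even for denied calls.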
How it works
- You register tools and permissions (shared or private).
- Your app calls Edgee; models can request tool execution.
- Edgee applies your policies, executes the tool at the edge, and returns the outputs. No need to build your own tool runtime.
- The model completes with the tool results, while observability captures the full trace.
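The loop above can be sketched with in-memory stubs standing in for the model and the gateway. Everything here is an assumption for illustration: `fake_model`, `TOOLS`, and the message shape are hypothetical, not Edgee's wire format.

```python
# Minimal sketch of the register / call / execute / complete loop.
# The model stub and tool registry are hypothetical stand-ins.

def fake_model(messages):
    """Stand-in model: requests a tool on the first turn, then answers."""
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool_call": {"name": "get_weather", "args": {"city": "Paris"}}}
    return {"content": f"It is {tool_msgs[0]['content']} in Paris."}

# Step 1: tools registered with the gateway (here, a plain dict).
TOOLS = {"get_weather": lambda args: "18°C"}

def run(messages):
    while True:
        reply = fake_model(messages)                 # step 2: app calls the gateway
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]                  # step 4: model completes
        result = TOOLS[call["name"]](call["args"])   # step 3: execute at the edge
        messages = messages + [{"role": "tool", "content": result}]

print(run([{"role": "user", "content": "Weather in Paris?"}]))
```

The key structural point is the loop: execution stays inside the gateway, and the application only sees the final completion.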
Less application glue
Define tools once at the gateway instead of re-implementing them across services.
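"Define once" might look like a single declarative spec registered with the gateway. The shape below borrows the widespread JSON-schema function-calling convention and is an assumption, not a confirmed Edgee format; the `visibility` field is likewise hypothetical.

```python
# Hypothetical tool spec, registered once at the gateway and reused by
# every service. Field names follow the common JSON-schema function-calling
# convention; "visibility" is an assumed shared-vs-private flag.
weather_tool = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "visibility": "private",
}
```

Because the spec lives at the gateway, each service references the tool by name instead of carrying its own copy of the definition.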
Lower latency tool calls
Execute tools closer to users and providers to reduce round-trips.
Stronger control surface
Centralize permissions, audit logs, and safety policies for tool execution.
FAQ
Answers reflect current direction and may evolve as the platform ships.