AI Gateway feature

Edge Tools

In progress

Give your LLM calls real capabilities without hard-coding tool glue everywhere. Run shared tools we host, or deploy private tools at the edge.

We’re building the tool runtime and permissions model. Early design partners welcome.

Capabilities

  • Shared tools operated by Edgee (common primitives)
  • Private tools you deploy (per org)
  • Tool allowlists/permissions and audit trails
  • Consistent errors, timeouts, and retries across providers
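To make the last capability concrete, here is a minimal sketch of what "consistent errors and retries" means in practice: one error shape and one retry loop, regardless of which upstream provider failed. Every name here (`ToolError`, `call_with_retries`) is hypothetical and illustrates behavior the gateway would handle for you, not a real Edgee API.

```python
import time

class ToolError(Exception):
    """Normalized error: the same fields no matter which provider failed."""
    def __init__(self, code, message, retryable):
        super().__init__(message)
        self.code = code
        self.retryable = retryable

def call_with_retries(fn, max_attempts=3, backoff_s=0.0):
    """Retry a tool call on retryable errors, surfacing one error shape."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ToolError as err:
            if not err.retryable or attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)
```

A caller sees either the tool's result or a single `ToolError`, never provider-specific failure formats.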

How it works

  1. You register tools and permissions (shared or private).
  2. Your app calls Edgee; models can request tool execution.
  3. Edgee applies your policies, executes the tool at the edge, and returns the outputs, so you don't build or operate your own tool runtime.
  4. The model completes with the tool results, while observability captures the full trace.
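The loop in steps 2–4 can be sketched as follows, with the gateway's role played by a local dispatcher. All names here (`registry`, `handle_tool_call`, the `get_weather` tool and its stub output) are hypothetical; the real Edgee API may look different.

```python
# Hypothetical private-tool registry; handlers run at the edge in practice.
registry = {
    "get_weather": lambda args: {"temp_c": 21, "city": args["city"]},
}

def handle_tool_call(tool_call, allowlist):
    """Execute a model-requested tool if policy allows, return its output."""
    name = tool_call["name"]
    if name not in allowlist:
        return {"error": "tool_not_permitted", "tool": name}
    if name not in registry:
        return {"error": "tool_not_found", "tool": name}
    return {"tool": name, "output": registry[name](tool_call["arguments"])}

# When the model requests a tool, the gateway dispatches it and feeds the
# output back so the model can complete (step 4).
request = {"name": "get_weather", "arguments": {"city": "Paris"}}
result = handle_tool_call(request, allowlist={"get_weather"})
```

The key point is that the application only sees the final completion; the tool round-trip happens inside the gateway.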

Less application glue

Define tools once at the gateway instead of re-implementing them across services.
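"Define once" means the full tool schema lives at the gateway, and each service references the tool by name only. The shape below is an assumption for illustration (a function-schema style definition); the actual Edgee definition format may differ.

```python
# Hypothetical gateway-side tool definition: the schema is registered once.
weather_tool = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "visibility": "private",  # or "shared" for Edgee-operated primitives
}

def tool_reference(tool):
    """What an application service sends: just the name, no schema copy."""
    return {"tool": tool["name"]}
```

Services stay thin: changing the tool's schema or implementation happens in one place.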

Lower latency tool calls

Execute tools closer to users and providers to reduce round-trips.

Stronger control surface

Centralize permissions, audit logs, and safety policies for tool execution.
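A centralized control surface reduces to one policy check on every execution, with the decision recorded for audit. This is a sketch under assumed names (`authorize`, `audit_log`, per-org allowlists), not the shipped policy model.

```python
# One allowlist and one audit log, consulted on every tool execution.
audit_log = []

def authorize(org, tool, allowlists):
    """Check the org's allowlist and record the decision for audit."""
    allowed = tool in allowlists.get(org, set())
    audit_log.append({"org": org, "tool": tool, "allowed": allowed})
    return allowed
```

Because every execution flows through the gateway, denied requests are logged the same way as permitted ones.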

FAQ

Answers reflect current direction and may evolve as the platform ships.

Ship faster

Start with one key. Scale with policies.

Use Edgee’s unified access to get moving quickly, then add routing, budgets, and privacy controls as your AI usage grows.

Edge Tools — Edgee AI Gateway