On-Premise Deployment¶
Beta Documentation
On-premise deployment support is in beta. This guide is based on standard Docker and Linux deployment practices and has not been validated by Caprus AI. Network and firewall requirements vary significantly across environments. Steps may require adjustment for your specific configuration. Please report any issues or corrections to Caprus AI.
AgentCube connectors are standard Linux containers and run on any host with Docker installed. This guide covers deployment on a Linux server using Docker or Docker Compose, with nginx as a TLS-terminating reverse proxy.
Prerequisites¶
- Linux server (Ubuntu 22.04 LTS or RHEL 8+ recommended)
- Docker Engine 24.0 or later
- A valid TLS certificate for your connector hostname (Let's Encrypt or your CA)
- Outbound internet access to your Oracle EPM system
- A public HTTPS URL reachable by your AI platform (Claude.ai, Copilot Studio)
Network requirement
Claude.ai and Copilot Studio connect to your connector from the public internet. The connector must be accessible via a public HTTPS URL on port 443. If your server is behind a NAT or corporate firewall, configure port forwarding or a reverse tunnel accordingly.
See Container Images & Specifications for image names, GHCR authentication, and resource requirements.
Authenticate with Docker before pulling or running images:
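A minimal sketch, assuming a GitHub personal access token exported as `GHCR_TOKEN` and a GitHub username placeholder `{gh_username}` (neither is defined by this guide):

```shell
# Log in to GitHub Container Registry; the token needs the read:packages scope
echo "$GHCR_TOKEN" | docker login ghcr.io -u {gh_username} --password-stdin
```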
Option A: Docker (Quick Start)¶
Suitable for evaluation and development.
Essbase¶
docker run -d \
--name agentcube-essbase \
--restart unless-stopped \
-p 8080:8080 \
-e ESSBASE_SERVER_URL=https://{essbase_host} \
-e ESSBASE_USERNAME={username} \
-e ESSBASE_PASSWORD={password} \
ghcr.io/caprusai/agentcube-essbase:{version}
Planning¶
docker run -d \
--name agentcube-planning \
--restart unless-stopped \
-p 8081:8080 \
-e PLANNING_URL=https://{planning_host}/HyperionPlanning \
-e PLANNING_USERNAME={username} \
-e PLANNING_PASSWORD={password} \
ghcr.io/caprusai/agentcube-planning:{version}
Verify the container is healthy:
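One way to check, assuming the `/health` endpoint used elsewhere in this guide:

```shell
# Confirm the container is running with the expected restart policy
docker ps --filter name=agentcube-essbase

# The health endpoint should return HTTP 200 once the connector is up;
# curl -f exits non-zero on any 4xx/5xx response
curl -f http://localhost:8080/health

# Inspect recent logs if the health check fails
docker logs --tail 50 agentcube-essbase
```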
Option B: Docker Compose (Recommended)¶
Docker Compose is recommended for production on-premise deployments. It manages restarts, environment files, and multiple connectors cleanly.
1. Create an environment file¶
Store credentials separately from docker-compose.yml. Create /etc/agentcube/essbase.env:
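A sketch of the file's contents, using the same variables as the `docker run` example above:

```
ESSBASE_SERVER_URL=https://{essbase_host}
ESSBASE_USERNAME={username}
ESSBASE_PASSWORD={password}
```

Create `/etc/agentcube/planning.env` the same way with the `PLANNING_*` variables.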
Set restrictive permissions:
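Assuming Docker runs as root on this host, restricting the file to root is a reasonable default:

```shell
# Only root can read the credentials file
sudo chown root:root /etc/agentcube/essbase.env
sudo chmod 600 /etc/agentcube/essbase.env
```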
2. Create docker-compose.yml¶
services:
  agentcube-essbase:
    image: ghcr.io/caprusai/agentcube-essbase:{version}
    restart: unless-stopped
    ports:
      - "8080:8080"
    env_file:
      - /etc/agentcube/essbase.env
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  agentcube-planning:
    image: ghcr.io/caprusai/agentcube-planning:{version}
    restart: unless-stopped
    ports:
      - "8081:8080"
    env_file:
      - /etc/agentcube/planning.env
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
3. Start the services¶
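From the directory containing `docker-compose.yml`:

```shell
# Start both connectors in the background
docker compose up -d

# Both services should report "healthy" once the start period elapses
docker compose ps
```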
Configure TLS with nginx¶
The AgentCube container serves HTTP on port 8080. nginx acts as a reverse proxy to terminate TLS and forward requests to the container.
Install nginx¶
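Install from your distribution's package repository:

```shell
# Ubuntu / Debian
sudo apt update && sudo apt install -y nginx

# RHEL 8+
sudo dnf install -y nginx
sudo systemctl enable --now nginx
```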
Obtain a TLS certificate¶
Using Let's Encrypt with Certbot:
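A sketch for Ubuntu; package names differ on RHEL:

```shell
# Install Certbot with the nginx plugin
sudo apt install -y certbot python3-certbot-nginx

# Request a certificate and let Certbot update the nginx configuration
sudo certbot --nginx -d {connector_hostname}
```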
Certbot configures nginx and sets up automatic renewal.
Manual nginx configuration¶
If managing certificates manually, create /etc/nginx/sites-available/agentcube-essbase:
server {
    listen 443 ssl;
    server_name {connector_hostname};

    ssl_certificate /etc/ssl/certs/{connector_hostname}.pem;
    ssl_certificate_key /etc/ssl/private/{connector_hostname}.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
    }
}

server {
    listen 80;
    server_name {connector_hostname};
    return 301 https://$host$request_uri;
}
Enable the configuration:
sudo ln -s /etc/nginx/sites-available/agentcube-essbase /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
If deploying both connectors on the same host, add a second server block with its own server_name and a proxy_pass directive pointing to port 8081 for the Planning connector.
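A sketch of that second server block, assuming a distinct hostname `{planning_hostname}` with its own certificate (both are placeholders for your environment):

```
server {
    listen 443 ssl;
    server_name {planning_hostname};

    ssl_certificate /etc/ssl/certs/{planning_hostname}.pem;
    ssl_certificate_key /etc/ssl/private/{planning_hostname}.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
    }
}
```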
Option C: Kubernetes¶
For enterprise environments with an existing Kubernetes cluster.
Deployment manifest¶
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agentcube-essbase
  namespace: agentcube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agentcube-essbase
  template:
    metadata:
      labels:
        app: agentcube-essbase
    spec:
      containers:
        - name: agentcube-essbase
          image: ghcr.io/caprusai/agentcube-essbase:{version}
          ports:
            - containerPort: 8080
          env:
            - name: ESSBASE_SERVER_URL
              value: "https://{essbase_host}"
            - name: ESSBASE_USERNAME
              value: "{username}"
            - name: ESSBASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: agentcube-essbase-secret
                  key: password
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          resources:
            requests:
              cpu: "250m"
              memory: "512Mi"
            limits:
              cpu: "500m"
              memory: "1Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: agentcube-essbase
  namespace: agentcube
spec:
  selector:
    app: agentcube-essbase
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
Create the credential secret separately:
kubectl create secret generic agentcube-essbase-secret \
--from-literal=password={password} \
-n agentcube
Expose via an Ingress with your cluster's ingress controller (nginx-ingress, Traefik, etc.) and configure TLS termination there.
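A sketch for the ingress-nginx controller, assuming a TLS secret named `agentcube-essbase-tls` already exists in the namespace (the secret name and `ingressClassName` are assumptions to adapt to your cluster):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: agentcube-essbase
  namespace: agentcube
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "{connector_hostname}"
      secretName: agentcube-essbase-tls
  rules:
    - host: "{connector_hostname}"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: agentcube-essbase
                port:
                  number: 8080
```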
Verify¶
See Verification for the expected response and full verification checklist.
Network Considerations¶
| Requirement | Details |
|---|---|
| Inbound port 443 | Must be open from the public internet for Claude.ai / Copilot Studio |
| Inbound port 80 | Recommended for HTTP → HTTPS redirect |
| Outbound to Oracle EPM | Connector must reach your Oracle Essbase or Planning Cloud URL |
| DNS | Connector hostname must resolve publicly to your server's IP |
| Firewall / NAT | If behind NAT, configure port forwarding for 443 to the host running nginx |