OpenClaw Subscription Plans
Connect your own server, bring your preferred AI credentials, and let us manage the rest. Every plan includes all messaging platforms, automatic updates, and full data sovereignty.
Prerequisites
Two things to prepare before signing up
LLM Provider Access
- Anthropic Claude API key
- OpenAI API key
- OpenRouter account
- Or self-hosted Ollama / llama.cpp
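Once you have credentials, a common pattern is to keep them in the server's environment rather than in files. The variable names below follow each provider's documented convention; how OpenClaw actually consumes them depends on its installer, so treat this as an illustrative sketch.

```shell
# Keep provider keys in the server environment rather than committing them to files.
# Variable names follow each provider's convention; the key values are placeholders.
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export OPENROUTER_API_KEY="sk-or-..."

# Sanity check: warn if no provider key is set before installation.
if [ -z "${ANTHROPIC_API_KEY:-}" ] && [ -z "${OPENAI_API_KEY:-}" ] && [ -z "${OPENROUTER_API_KEY:-}" ]; then
  echo "No LLM provider key found" >&2
fi
```

You only need one of the three; set whichever matches the provider you signed up with.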
A Linux Machine with SSH
- Any VPS or dedicated server from a supported provider
- Or bare metal hardware you control
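Before signing up, it is worth confirming the machine is actually reachable over SSH. The login below is a placeholder; substitute your real user and host.

```shell
# Confirm the server answers over SSH before connecting it to OpenClaw.
# "user@your-server" is a placeholder; use your actual login and host.
ssh -o BatchMode=yes -o ConnectTimeout=5 user@your-server 'echo reachable' \
  || echo "SSH check failed: verify your key, user, and firewall settings" >&2
```

`BatchMode=yes` makes the check fail fast instead of prompting for a password, which is what you want when verifying key-based access.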
Supported Infrastructure
Base
server hosting billed separately
Everything one person needs to run an AI agent on a single box.
- Up to 1 managed server
- Unlimited agents (actual capacity depends on your server hardware)
- All platform connections
- Core usage metrics
- Email support
- Teams
- Custom domain
- Activity logging
- Custom API
Pro
server hosting billed separately
Multi-server management with analytics and team collaboration.
- Up to 10 managed servers
- Unlimited agents (actual capacity depends on your server hardware)
- All platform connections
- Analytics dashboard
- Priority email & chat support
- 1 team (up to 5 members)
- Custom domain
- Activity logging
- Custom API
Business
server hosting billed separately
Full-featured platform access with audit trails and API extensibility.
- Up to 50 managed servers
- Unlimited agents (actual capacity depends on your server hardware)
- All platform connections
- Advanced analytics & reporting
- Priority support
- 5 teams (up to 25 members)
- Custom domain
- Activity logging
- Custom API
Agency
server hosting billed separately
White-label ready with unlimited capacity and hands-on support.
- All Business features, plus:
- Unlimited managed servers
- Unlimited teams & members
- Dedicated account manager
- Complete activity logging
- White-label features
- Pre-built agent configurations
Requirements
Minimum server specifications
Every managed server must meet these baseline specs. Heavier workloads and concurrent agent sessions benefit from additional resources. GPU hardware is recommended for local LLM inference.
CPU
2 vCPU cores minimum
Memory
2 GB RAM minimum
Storage
20 GB disk minimum
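A quick way to confirm a Linux server meets these minimums is to read them straight from the system. The thresholds below come from the table above; the commands assume a standard GNU/Linux userland.

```shell
# Check a Linux server against OpenClaw's stated minimums:
# 2 vCPU cores, 2 GB RAM, 20 GB disk.
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

[ "$cpus" -ge 2 ]      && echo "CPU: $cpus cores (ok)"    || echo "CPU: $cpus cores (below minimum)"
[ "$mem_mb" -ge 2000 ] && echo "RAM: ${mem_mb} MB (ok)"   || echo "RAM: ${mem_mb} MB (below minimum)"
[ "$disk_gb" -ge 20 ]  && echo "Disk: ${disk_gb} GB (ok)" || echo "Disk: ${disk_gb} GB (below minimum)"
```

Run this on each server you plan to manage; anything reported "below minimum" should be resized before you connect it.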
AI Providers
Bring your own AI credentials
Supply API keys from any of the following services. You keep full control over which models run on your infrastructure and how much you spend on inference.
Anthropic Claude
Deep analysis, long-context reasoning, and reliable code output
OpenAI
Versatile GPT models covering text, vision, and function calling
OpenRouter
Proxy layer that routes requests to any compatible model API
Ollama
Run Llama, Mistral, and other open models on your own hardware
llama.cpp
Lightweight C++ runtime for CPU and GPU inference without containers
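As one concrete path for the self-hosted option, Ollama's CLI can pull a model and serve it behind a local HTTP API. The model name here is just an example; pick whatever your hardware can handle.

```shell
# Pull and serve an open model with Ollama (commands from Ollama's CLI).
# "llama3" is an example model name; substitute any model your server can run.
ollama pull llama3
ollama serve &   # exposes an HTTP API on localhost:11434 by default

# Test the local endpoint with a one-off generation request.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

On many installs `ollama serve` already runs as a system service, in which case the `curl` test alone confirms the endpoint is live.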
FAQ
Pricing questions answered
Where do AI credentials come from?
What happens on my server during setup?
How much hardware does OpenClaw need?
What counts as a single gateway?
Are plan changes instant?
Do I have to sign a contract?
How do I pay?
Questions before you commit?
Reach out and we will match you with the tier that fits your infrastructure and team size.