On-premise deployment
The content in this document is subject to change.
System requirements
Software
OS: Ubuntu 22.04.3 LTS
Web Server: Nginx 1.14.0 or later
Docker: 24.0.0 or later
Docker Compose: 2.23.0 or later
AWS CLI: 2.13.30 or later
Python: 3.10 or later
Hardware
Memory: 32 GB RAM minimum
Disk space: 20 GB required for installation; an additional 500 GB is recommended for database storage
Network
Inbound traffic allowed on ports 80 and 443
External Services
SendGrid API key
OpenAI or Azure OpenAI API key
GPT data privacy
We recommend using the Azure OpenAI Service rather than the OpenAI API directly. The benefits of the Azure OpenAI Service include:
Faster response times
99.9% uptime SLA: Microsoft guarantees that the Azure OpenAI Service will be available at least 99.9% of the time.
Stronger data privacy
For details, see "Data, privacy, and security for Azure OpenAI Service" in the Azure AI services documentation.
Your prompts (inputs) and completions (outputs), your embeddings, and your training data:
are NOT available to other customers.
are NOT available to OpenAI.
are NOT used to improve OpenAI models.
are NOT used to improve any Microsoft or third-party products or services.
are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).
Your fine-tuned Azure OpenAI models are available exclusively for your use.
The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).
Installation
Three components must be served by this machine:
1. The backend platform
2. The web app
3. The webchat library
All three components can be served directly from Nginx. Since components (2) and (3) are pre-built, they only need to be served as static files from Nginx without any additional configuration.
💡 See Configuration for an example Nginx configuration.
You will be provided with the following files to scaffold the on-premise deployment:
A Python file, called proto_scaffold.py
A JSON file, called proto_credentials.json
Navigate to your desired installation directory and place both of these files there.
Run python proto_scaffold.py init
This command does not start Proto. It creates a folder containing the relevant service configuration TOML files, authenticates your instance with our private Docker container registry, and adds a docker-compose.yml.
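For orientation, the generated docker-compose.yml will typically resemble something like the following. This is a purely illustrative sketch: the actual service names, image URLs, and volume layout are produced by proto_scaffold.py and will differ. Only the backend port (8080), the backend/config/ directory, and the "platform" Postgres database are taken from this document.

```yaml
# Hypothetical sketch of a generated docker-compose.yml — not the real file.
services:
  backend:
    image: registry.example.com/proto/backend:latest   # hypothetical image name
    ports:
      - "8080:8080"                    # backend services listen on port 8080
    volumes:
      - ./backend/config:/app/config   # service configuration TOML files
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: platform            # database used by "psql -d platform"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```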
Configuration
In the newly created directory, you'll find a subdirectory named backend/config/. This contains all the configurable parameters for the backend platform. While many parameters might not require adjustments, the following are mandatory:
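For illustration only, a service TOML file in backend/config/ might resemble the sketch below. Every parameter name here is an assumption, not the actual mandatory list; consult the generated files for the real keys. The sketch only reflects facts stated elsewhere in this document (the backend port 8080 and the SendGrid and OpenAI/Azure OpenAI keys listed under External Services).

```toml
# backend/config/platform.toml — hypothetical example, keys are assumed
[server]
host = "0.0.0.0"
port = 8080                        # services listen on port 8080 in the container

[email]
sendgrid_api_key = "SG.xxxxxxxx"   # SendGrid API key (see External Services)

[llm]
provider = "azure_openai"          # "openai" or "azure_openai"
api_key = "your-api-key"
```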
Example Nginx configuration
The backend platform runs its services on port 8080 of the containers, and each service should be proxied under the api/ prefix. The web app and the webchat library are served from / and /webchat, respectively.
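Based on that layout, a minimal server block might look like the following. This is a sketch, not the definitive configuration: the server name, file paths, and the assumption that the backend containers are reachable on localhost:8080 all depend on your installation directory and docker-compose.yml. A production deployment would also terminate TLS on port 443.

```nginx
server {
    listen 80;
    server_name proto.example.com;   # hypothetical hostname

    # Web app: pre-built static files served from /
    root /opt/proto/webapp;          # assumed installation path
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Webchat library: pre-built static files served from /webchat
    location /webchat/ {
        alias /opt/proto/webchat/;   # assumed installation path
    }

    # Backend platform: proxy the api/ prefix to the containers on port 8080
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```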
Managing platform state
Start the platform: run docker compose up -d in backend/
Stop the platform: run docker compose down in backend/
Update the platform from remote: python proto_scaffold.py update
Connect directly to the database: psql -U postgres -d platform