# On-Premise & Hybrid Hosting

{% hint style="info" %}
On-premise and hybrid hosting are available with the Enterprise Max add-on.
{% endhint %}

## **Hosting Options** <a href="#hosting-options" id="hosting-options"></a>

Get in touch with your Proto account representative to discuss which hosting solution is right for you and the manual deployment procedure for each.

* **On-premise hosting** means the database content (e.g. AI configurations, conversations, and profiles of registered people), bucket content (files & attachments), and the Proto AICX platform (web app) are all hosted independently by you.
* **Hybrid hosting** means all database and bucket content is hosted independently by you, while hosting and provision of the Proto AICX platform (web app) is maintained by us.

***

## **System requirements** <a href="#system-requirements" id="system-requirements"></a>

#### Software <a href="#software" id="software"></a>

* **OS:** Ubuntu 22.04.3 LTS
* **Web Server:** Nginx 1.14.0 or later
* **Docker:** 24.0.0 or later
* **Docker Compose:** 2.23.0 or later
* **AWS CLI:** 2.13.30 or later
* **Python:** 3.10 or later
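
To confirm these versions on the host, each dependency reports its own (a minimal sketch, assuming the binaries are already on your `PATH`):

```
lsb_release -ds          # Ubuntu release
nginx -v                 # Nginx version (printed to stderr)
docker --version
docker compose version
aws --version
python3 --version
```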

#### Hardware <a href="#hardware" id="hardware"></a>

* **Memory:** 32 GB RAM minimum
* **Disk Space:** 20 GB required for a safe installation.\
  An additional 500 GB for database storage is recommended.
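
The standard Ubuntu tools report both figures, as a quick pre-installation check:

```
free -h     # total RAM; should show at least 32 GB
df -h /     # free space on the installation volume
```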

#### **Network** <a href="#network" id="network"></a>

* Inbound traffic allowed on ports 80 and 443 (HTTP/HTTPS)
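
If the host uses `ufw` (Ubuntu's default firewall frontend), the ports can be opened as follows; adapt the commands to whichever firewall you actually run:

```
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw reload
```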

#### External Services <a href="#external-services" id="external-services"></a>

* SendGrid API key
* OpenAI or Azure OpenAI API key
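
Both keys can be sanity-checked with a direct API call before installation (a sketch; `SENDGRID_API_KEY` and `OPENAI_API_KEY` are placeholders for your own keys):

```
# SendGrid: a 200 response means the key is valid
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  https://api.sendgrid.com/v3/scopes

# OpenAI: a 200 response means the key is valid
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
```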

***

## ChatGPT data privacy <a href="#gpt-data-privacy" id="gpt-data-privacy"></a>

If you have an AI agent with a ChatGPT LLM enabled, we recommend using Azure's OpenAI service instead of OpenAI directly. The benefits of Azure's OpenAI service include:

* Faster response times
* 99.9% uptime SLA
* Stronger data privacy

From Microsoft's legal resource for [data, privacy, and security for Azure OpenAI Service](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy):

> Your prompts (inputs) and completions (outputs), your embeddings, and your training data:
>
> * are NOT available to other customers.
> * are NOT available to OpenAI.
> * are NOT used to improve OpenAI models.
> * are NOT used to improve any Microsoft or 3rd party products or services.
> * are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).
> * Your fine-tuned Azure OpenAI models are available exclusively for your use.
>
> The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).

***

## Installation <a href="#installation" id="installation"></a>

{% hint style="info" %}
These installation steps are specific to on-premise hosting and are for reference only.\
Proto personnel will provide direct guidance on your host migration process.
{% endhint %}

There are three components that must be served by your local server:

1. The backend platform
2. The Proto AICX platform (web app)
3. The webchat library

These three components can be served directly from Nginx. As components (2) and (3) are pre-built, they only need to be served as static files from Nginx; no further configuration of the components themselves is required.

You will be provided with the following files to scaffold the on-premise deployment:

* A Python file: *proto\_scaffold.py*
* A JSON file: *proto\_credentials.json*

To scaffold:

1. Place both files in your desired installation directory.
2. Run `python proto_scaffold.py init`

The command will *not* start Proto. It will create a folder containing the relevant service configuration TOML files, authenticate your instance with our private Docker container registry, and add a *docker-compose.yml*.
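
After `init` completes, the installation directory will look roughly like this (illustrative only; actual file names may differ):

```
.
├── proto_scaffold.py
├── proto_credentials.json
└── backend/
    ├── docker-compose.yml
    └── config/          # per-service TOML configuration files
```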

***

## Configuration <a href="#configuration" id="configuration"></a>

In the newly created directory, you'll find a subdirectory named *backend/config/*. This contains all the configurable parameters for the backend platform. While many parameters might not require adjustments, the following are mandatory:

```
[app]
base_url = "https://{YOUR_URL}"
base_api_url = "https://{YOUR_URL}/api"

[cors]
allow_origins = ['https://{YOUR_URL}']

[sendgrid]
api_key = "{YOUR_SENDGRID_API_KEY}"
from_domain = "{YOUR_DESIRED_EMAIL_FROM}"
inbound_parse_key = "{YOUR_SENDGRID_INBOUND_URL_KEY_PARAM}"

[openai]
api_key = "{YOUR_OPENAI_API_KEY}"

[api]
session_middleware_secret_key = "{YOUR_RANDOM_SECRET_KEY_1}"
secret_key = "{YOUR_RANDOM_SECRET_KEY_2}"
algorithm = "HS256"
access_token_expire_minutes = 3600
frontend_url = "https://{YOUR_URL}"

...
```
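
The two secret keys should be long, random strings. One way to generate them (any cryptographically secure generator works):

```
# Run once per key; paste each result into the config
openssl rand -hex 32
```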

### Nginx example <a href="#example-nginx-configuration" id="example-nginx-configuration"></a>

The backend platform runs its services on port 8080 inside their containers. Each HTTP service is proxied under the *api/* prefix, while the websocket message router is exposed at */msgrouter_ws/*. The web app and the webchat library are served from */* and */webchat*, respectively.

```
...

location / {
    # Serve the web app
    root /home/ubuntu/proto/frontend;
    index index.html;
    try_files $uri $uri/ /index.html;
}

location /webchat {
    # Serve the webchat library; the root is the parent directory, so the
    # /webchat URI prefix maps onto the webchat folder itself
    root /home/ubuntu/proto;
    index index.html;
    try_files $uri $uri/ /webchat/index.html;
}

location /api/platform/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Protocol $scheme;
    
    proxy_pass http://platform:8080/;
}

location /api/chatbot/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Protocol $scheme;
    
    proxy_pass http://chatbot:8080/;
}

location /msgrouter_ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    proxy_pass http://msgrouter_ws:9000;
}

...
```
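
After editing the configuration, it can be validated and applied without downtime (standard Nginx tooling, assuming Nginx runs under systemd):

```
sudo nginx -t                  # syntax check
sudo systemctl reload nginx    # apply without dropping connections
```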

### Managing platform state <a href="#managing-platform-state" id="managing-platform-state"></a>

* **Start the platform:** in *backend/* run `docker compose up -d`
* **Stop the platform:** in *backend/* run `docker compose down`
* **Update the platform from remote:** in the installation directory, run `python proto_scaffold.py update`
* **Connect directly to the database:** `psql -U postgres -d platform`
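
To verify that the containers came up and to inspect their output (a sketch; `platform` is used as an example service name, matching the Nginx configuration above):

```
cd backend
docker compose ps                  # container status
docker compose logs -f platform    # follow the logs of one service
```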
