# Installation

## Using Docker

Install Docker Desktop: <https://www.docker.com/products/docker-desktop>

{% hint style="warning" %}
You must allocate a minimum of 4 GB of memory to Docker.
{% endhint %}

![](https://754235390-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MZ64BhrvkgPMyL9F3tk%2F-MbDVP8Bm5H3-i09hdwi%2F-MbDWwOAo7GQWWPAk4Fe%2Fimage.png?alt=media\&token=f420bc8d-3ecb-4bb8-8542-af38552c421a)

Download [cloudio\_demo.zip](https://drive.google.com/file/d/1Bhut2FAxQHjdzaYQvVPAxhagjoSxUntl/view?usp=sharing) and unzip it:

```bash
unzip cloudio_demo.zip
cd cloudio_demo
./start.sh
```

Launch CloudIO Apps in any modern browser at <http://localhost>

Sign in with username `demo` and password `demo`.

![Docker Containers](https://754235390-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MZ64BhrvkgPMyL9F3tk%2F-MbDXFT1Ylo88A0UyRNw%2F-MbDXXWJqfPzzrcYkr5l%2Fimage.png?alt=media\&token=601d3993-5ca8-449d-8ed4-dba54df21a2d)

### docker-compose.yml

```yaml
version: "3.1"

services:
  traefik:
    image: traefik:2.3
    container_name: cloudio-traefik
    command:
      - --providers.file.directory=/storage/config
      - --providers.file.watch=true
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --providers.docker.constraints=Label(`traefik.constraint-label-stack`,`cloudio`)
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - cloudio-config:/storage/config:ro
      - cloudio-certificates:/storage/certificates:ro
    depends_on:
      - cloudio
    networks:
      - gateway
      - cloudio
  mysql:
    build: mysql
    container_name: cloudio-mysql
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 20000
        hard: 40000
    ports:
      - 3306:3306
    logging:
      driver: json-file
    networks:
      - cloudio
    volumes:
      - cloudio-mysql:/var/lib/mysql:rw
  zookeeper:
    image: "bitnami/zookeeper:3.5.7"
    container_name: cloudio-zookeeper
    restart: unless-stopped
    logging:
      driver: json-file
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_LOG_LEVEL=WARN
    networks:
      - cloudio
    volumes:
      - cloudio-zookeeper:/bitnami/zookeeper:rw
    depends_on:
      - mysql
  kafka:
    image: "bitnami/kafka:2.7.0"
    container_name: cloudio-kafka
    restart: unless-stopped
    logging:
      driver: json-file
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    networks:
      - cloudio
    volumes:
      - cloudio-kafka:/bitnami/kafka:rw
  redis:
    image: redis:6
    container_name: cloudio-redis
    restart: unless-stopped
    networks:
      - cloudio
    volumes:
      - cloudio-redis:/data:rw
    depends_on:
      - kafka
  cloudio:
    build: cloudio
    container_name: cloudio
    restart: unless-stopped
    logging:
      driver: json-file
    networks:
      - cloudio
    labels:
      - traefik.enable=true
      - traefik.constraint-label-stack=cloudio
      - traefik.http.routers.cloudio.rule=PathPrefix(`/`)
      - traefik.http.routers.cloudio-secure.rule=PathPrefix(`/`)
      - traefik.http.routers.cloudio-secure.tls=true
    volumes:
      - cloudio-config:/storage/config:rw
      - cloudio-certificates:/storage/certificates:rw
    depends_on:
      - kafka
      - mysql
      - redis

networks:
  gateway:
  cloudio:

volumes:
  cloudio-mysql:
  cloudio-redis:
  cloudio-certificates:
  cloudio-config:
  cloudio-kafka:
  cloudio-zookeeper:
```

## Manual Installation

### Install Apache Kafka or use Confluent Cloud

{% hint style="info" %}
Refer to the following links to download and install Apache Kafka 2.7.1:

* <https://www.apache.org/dyn/closer.cgi?path=/kafka/2.7.1/kafka_2.13-2.7.1.tgz>
* <https://kafka.apache.org/quickstart>
* <https://www.confluent.io/confluent-cloud/pricing>
{% endhint %}

### Install MySQL or use any Cloud Service for MySQL

{% hint style="info" %}
Refer to the following link for MySQL installation <https://dev.mysql.com/doc/mysql-installer/en/>
{% endhint %}

### Install Redis or use any Cloud Service for Redis

{% hint style="info" %}
Refer to the following link to install Redis <https://redis.io/topics/quickstart>
{% endhint %}

### Install Load Balancer/Reverse Proxy or a related service from your Cloud provider

{% hint style="info" %}
You can use either a hardware or a software load balancer for load balancing, reverse proxying, and SSL termination, e.g. NGINX or Apache. Refer to the following link to install NGINX: <https://www.nginx.com/resources/wiki/start/topics/tutorials/install/>
{% endhint %}

{% hint style="success" %}
Configure your load balancer/reverse proxy to redirect the incoming HTTPS & WSS requests to the host(s)/port(s) on which the CloudIO Platform is configured to run.
{% endhint %}

#### Example NGINX Configuration with a single instance of the platform running on localhost:3090

```nginx
   ...

    upstream wsbackend {
        server localhost:3090;
    }
    
    server {
        listen 80 default_server;
        server_name subdomain.example.com;
        return 301 https://$server_name;
    }
    
    server {
        listen       443 ssl http2 default_server;
        server_name  subdomain.example.com;

        ssl_certificate "/cloudio/ssl/bundle.crt";
        ssl_certificate_key "/cloudio/ssl/star_example_com.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location /ws/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://wsbackend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }

        location / {
            proxy_pass http://localhost:3090;
            proxy_set_header   Host               $host;
            proxy_set_header   X-Real-IP          $remote_addr;
            proxy_set_header   X-Forwarded-Proto  $scheme;
            proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
            proxy_connect_timeout       600;
            proxy_send_timeout          600;
            proxy_read_timeout          600;
            send_timeout                600;
        }
    }
    
    ...
```
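Note that the `location /ws/` block above references `$connection_upgrade`, which NGINX does not define by default; it is conventionally declared with a `map` in the `http` context (e.g. next to the `upstream` block). A minimal definition:

```nginx
# Required by the WebSocket location above; without this map NGINX fails to
# start with "unknown variable connection_upgrade".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

This maps requests that carry an `Upgrade: websocket` header to `Connection: upgrade`, and plain requests to `Connection: close`.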

{% hint style="warning" %}
Make sure to use either a trusted network or SSL/TLS-enabled connections for Redis, Kafka & MySQL.
{% endhint %}

### Install CloudIO Platform

Once you obtain a license from CloudIO, follow the instructions to download cloudio-platform.zip, unzip it to a directory (e.g. /mnt/cloudio), and update the .env file with appropriate values for the following environment variables.

### **Security Environment Variables**

<table><thead><tr><th width="263">Variable Name</th><th width="291">Description</th><th width="137">Mandatory</th><th width="176">Applicable Values</th><th>Default Value</th></tr></thead><tbody><tr><td>API</td><td>Set it to true to enable the API service (UI backend)</td><td>Yes</td><td>true, false</td><td></td></tr><tr><td>SCHEDULER</td><td>Set it to true to enable the scheduler service</td><td>Yes</td><td>true, false</td><td></td></tr><tr><td>WORKFLOW</td><td>Set it to true to enable the multi-node workflow service</td><td>Yes</td><td>true, false</td><td></td></tr><tr><td>IO_ENV</td><td>Deployment environment</td><td>Yes</td><td>development, test, production</td><td>development</td></tr><tr><td>JWT_SECRET</td><td>Used to encode/decode JWT tokens</td><td>Yes</td><td></td><td>secret</td></tr><tr><td>ARGON_SECRET</td><td>Used for password hashing</td><td>Yes</td><td></td><td>secret</td></tr><tr><td>ARGON_ITERATIONS</td><td>Number of iterations used for password hashing</td><td>No</td><td></td><td></td></tr><tr><td>ARGON_MEMORY_SIZE</td><td>Amount of memory used for password hashing</td><td>No</td><td></td><td></td></tr><tr><td>INSTANCE_ID</td><td>A unique name for this instance</td><td>Yes</td><td></td><td>cloudio_node1</td></tr><tr><td>HOST</td><td>The IP address and port combination on which the web server listens for incoming connections. You can run multiple instances on the same host with different ports and/or on multiple hosts depending on the load. A single instance can scale up to a million requests per 20 minutes.</td><td>Yes</td><td></td><td>127.0.0.1:3090</td></tr><tr><td>API_RATELIMIT</td><td>Number of API calls allowed per IP address per hour</td><td>Yes</td><td></td><td>1000</td></tr><tr><td>AUTH_RATELIMIT</td><td>Number of sign-in API calls allowed per IP address per hour</td><td>Yes</td><td></td><td>12</td></tr><tr><td>STATUS_RATELIMIT</td><td>Number of status API calls allowed per IP address per hour</td><td>Yes</td><td></td><td>60</td></tr><tr><td>TMP_DIR</td><td>Temp directory path</td><td>Yes</td><td></td><td>tmp</td></tr><tr><td>ENCRYPTED_ARGS</td><td>Set it to N to encrypt a few env variables (JWT_SECRET, ARGON_SECRET, DATABASE_URL, READONLY_DATABASE_URL, REDIS_URL, SMTP_PASSWORD, SASL_PASSWORD, ADMIN_PASSWORD, DB_PKCS12_PASSWORD)</td><td>No</td><td>Y, N</td><td>N</td></tr><tr><td>ADMIN_EMAIL</td><td>On first-time installation, the platform creates an admin user with this email address</td><td>No</td><td></td><td></td></tr><tr><td>ADMIN_PASSWORD</td><td>On first-time installation, the platform creates an admin user with this password</td><td>No</td><td></td><td></td></tr><tr><td>JS_DIR</td><td>Directory for the JS libraries</td><td>Yes</td><td></td><td>js</td></tr><tr><td>THUMBNAILS_DIR</td><td>Directory for storing thumbnails</td><td>Yes</td><td></td><td>thumbnails</td></tr><tr><td>MULTI_TENANT</td><td>To enable a multi-tenant setup</td><td>Yes</td><td></td><td>N</td></tr><tr><td>X_FRAME_OPTION</td><td>Set a value to add an X-Frame-Options header to server responses</td><td></td><td>DENY, SAMEORIGIN, _</td><td>DENY</td></tr><tr><td>MFA</td><td>Set it to EMAIL to enable MFA for sign-in (sends a code to the user's email while signing in)</td><td>Yes</td><td>EMAIL, OFF</td><td>OFF</td></tr><tr><td>MD_DB_TYPE</td><td>Metadata database type</td><td>Yes</td><td>mysql, postgres, oracle</td><td>mysql</td></tr><tr><td>LIVE_INTERVAL_SECONDS</td><td>Interval in seconds at which live updates are sent to clients</td><td>Yes</td><td></td><td>15</td></tr><tr><td>SQL_TIMEOUT_SECONDS</td><td>Timeout in seconds for SQL queries</td><td>Yes</td><td></td><td>120</td></tr><tr><td>MAX_CONCURRENT_REQUESTS</td><td>Maximum number of concurrent client requests allowed on the server</td><td>Yes</td><td></td><td>40</td></tr><tr><td>SCHEDULER_SLEEP_SECONDS</td><td>Sleep time in seconds before running schedulers</td><td>Yes</td><td></td><td>60</td></tr><tr><td>PUBLIC_TINY_URL</td><td>Set it to true to allow public users to create a shared URL</td><td>Yes</td><td></td><td></td></tr><tr><td>AGENT</td><td>To enable the Agent setup</td><td>No</td><td></td><td></td></tr></tbody></table>
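To make `JWT_SECRET`'s role concrete, the sketch below signs and verifies an HS256 token using only the Python standard library. This is a generic JWT illustration, not the platform's actual token layout; it simply shows why changing `JWT_SECRET` invalidates every previously issued token.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as used by JWT
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "admin"}, "secret")  # "secret" is the documented default
assert verify_jwt(token, "secret")            # valid with the same secret
assert not verify_jwt(token, "rotated")       # any other secret rejects the token
```

Because verification recomputes the HMAC with the configured secret, a server restarted with a different `JWT_SECRET` rejects all existing sessions.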

### Database **Environment Variables**

<table><thead><tr><th width="264">Variable Name</th><th width="293">Description</th><th width="136">Mandatory</th><th width="176">Applicable Values</th><th width="100">Default Value</th></tr></thead><tbody><tr><td>DATABASE_URL</td><td>Database URL for connecting to the metadata database</td><td>Yes</td><td></td><td></td></tr><tr><td>READONLY_DATABASE_URL</td><td>Used for running ad hoc queries from SQL Worksheet</td><td>Yes</td><td>Same as DATABASE_URL with a read-only database user</td><td></td></tr><tr><td>ROOT_DATABASE_URL</td><td>Same as DATABASE_URL with the root database user</td><td>No</td><td>Same as DATABASE_URL with the root database user</td><td></td></tr><tr><td>DB_ROOT_CERT_PATH</td><td>CA cert path</td><td>No</td><td></td><td></td></tr><tr><td>DB_PKCS12_PATH</td><td>Private key in PKCS12 format</td><td>No</td><td></td><td></td></tr><tr><td>DB_PKCS12_PASSWORD</td><td>Private key password, if any</td><td>No</td><td></td><td></td></tr><tr><td>DB_ACCEPT_INVALID_CERTS</td><td>To accept invalid certs (self-signed certs)</td><td>No</td><td>true, false</td><td></td></tr><tr><td>DB_SKIP_DOMAIN_VALIDATION</td><td>To skip domain validation</td><td>No</td><td>true, false</td><td></td></tr><tr><td>ALLOW_SQL_WORKSHEET_UPDATES</td><td>Whether or not to allow ad hoc updates via SQL Worksheet. Set this to N in Production &#x26; UAT instances.</td><td>Yes</td><td>Y, N</td><td>N</td></tr></tbody></table>
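For reference, these values are typically standard database connection URLs. The exact scheme the platform accepts is not documented here, so treat the following as hypothetical examples only (in production they are then encrypted, as described under Encrypting Environment Variables):

```bash
# Hypothetical examples -- confirm the exact URL format with CloudIO
DATABASE_URL="mysql://cloudio_app:app_password@db.internal:3306/cloudio"
READONLY_DATABASE_URL="mysql://cloudio_ro:ro_password@db.internal:3306/cloudio"
```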

### Azure **Environment Variables**

<table><thead><tr><th width="265">Variable Name</th><th width="293">Description</th><th width="138">Mandatory</th><th width="170">Applicable Values</th><th>Default Value</th></tr></thead><tbody><tr><td>AZURE_CLIENT_SECRET</td><td>Azure Key Vault account client secret</td><td>No</td><td></td><td></td></tr><tr><td>AZURE_CLIENT_ID</td><td>Azure Key Vault account client ID</td><td>No</td><td></td><td></td></tr><tr><td>AZURE_TENANT_ID</td><td>Azure Key Vault account tenant ID</td><td>No</td><td></td><td></td></tr><tr><td>AZURE_KEY_VAULT_URL</td><td>Azure Key Vault URL</td><td>No</td><td></td><td></td></tr><tr><td>AZURE_STORAGE_ACCOUNT</td><td>Azure storage account</td><td>No</td><td></td><td></td></tr><tr><td>AZURE_STORAGE_MASTER_KEY</td><td>Azure storage account master key</td><td>No</td><td></td><td></td></tr></tbody></table>

### Redis **Environment Variables**

<table><thead><tr><th width="266">Variable Name</th><th width="295">Description</th><th width="136">Mandatory</th><th width="179">Applicable Values</th><th>Default Value</th></tr></thead><tbody><tr><td>REDIS_PREFIX</td><td>To assign a prefix value for keys stored in Redis</td><td>No</td><td>dev</td><td>dev</td></tr><tr><td>REDIS_URL</td><td>URL of the Redis server</td><td>No</td><td></td><td></td></tr></tbody></table>

### Kafka **Environment Variables**

<table><thead><tr><th width="266">Variable Name</th><th width="317">Description</th><th width="134">Mandatory</th><th width="272">Sample Values</th><th>Default Value</th></tr></thead><tbody><tr><td>KAFKA_PREFIX</td><td>To assign a prefix value for topic names before creating them in Kafka</td><td>No</td><td></td><td></td></tr><tr><td>BOOTSTRAP_SERVERS</td><td>Kafka bootstrap server URL. If using a cloud instance from Confluent, then provide appropriate values for the additional variables SECURITY_PROTOCOL, SASL_MECHANISMS, SASL_USERNAME &#x26; SASL_PASSWORD provided by Confluent Cloud when creating a new Kafka cluster</td><td>No</td><td><p># Local Kafka</p><p>BOOTSTRAP_SERVERS=localhost:9092</p><p></p><p># Kafka on Confluent Cloud</p><p>BOOTSTRAP_SERVERS=p...5.us-west-2.aws.confluent.cloud:9092 SECURITY_PROTOCOL=SASL_SSL SASL_MECHANISMS=PLAIN SASL_USERNAME=SR4C...OP4DIA SASL_PASSWORD=j4StZg8Kg7m...B5Kgant9A</p></td><td></td></tr><tr><td>SECURITY_PROTOCOL</td><td></td><td>No</td><td></td><td></td></tr><tr><td>SASL_MECHANISMS</td><td></td><td>No</td><td></td><td></td></tr><tr><td>SASL_USERNAME</td><td></td><td>No</td><td></td><td></td></tr><tr><td>SASL_PASSWORD</td><td></td><td>No</td><td></td><td></td></tr></tbody></table>

### Log **Environment Variables**

<table><thead><tr><th width="265">Variable Name</th><th width="322">Description</th><th width="133">Mandatory</th><th width="172">Applicable Values</th><th>Default Value</th></tr></thead><tbody><tr><td>LOG_OUTPUT</td><td>Log to a file or to the console</td><td>Yes</td><td>console, file</td><td>file</td></tr><tr><td>LOG_SQLS</td><td>Set it to true to log the SQL queries and their params</td><td>Yes</td><td>true, false</td><td>false</td></tr><tr><td>LOG_VIEWER_KEY</td><td>Key to access logs without a session</td><td>Yes</td><td></td><td>viG_D6Zo6mtXDAt_3Z</td></tr><tr><td>ENABLE_LOG_VIEWER_USING_KEY</td><td>Set it to true to access the logs without a session using a unique key</td><td>Yes</td><td>true, false</td><td>true</td></tr></tbody></table>

### Email **Environment Variables**

<table><thead><tr><th width="262">Variable Name</th><th width="326">Description</th><th width="133">Mandatory</th><th width="174">Applicable Values</th><th>Default Value</th></tr></thead><tbody><tr><td>EMAIL_PROVIDER</td><td>Email provider to use</td><td>Yes</td><td>GMAIL, SMTP</td><td></td></tr><tr><td>SMTP_HOST</td><td>SMTP host name to be used for sending email alerts</td><td>Yes</td><td></td><td></td></tr><tr><td>SMTP_PORT</td><td>SMTP port number</td><td>No</td><td></td><td></td></tr><tr><td>SMTP_USE_TLS</td><td>To enable SMTPS</td><td>No</td><td>true, false</td><td></td></tr><tr><td>SMTP_USERNAME</td><td>SMTP username</td><td>Yes, if SMTP</td><td></td><td></td></tr><tr><td>SMTP_PASSWORD</td><td>SMTP password</td><td>Yes, if SMTP</td><td></td><td></td></tr><tr><td>GMAIL_CREDENTIAL_FILE_PATH</td><td>Gmail service account credentials path</td><td>Yes, if GMAIL</td><td></td><td></td></tr><tr><td>SMTP_FROM</td><td>From email address to be used for outbound emails</td><td>Yes</td><td></td><td></td></tr></tbody></table>

### Sample .env

{% code title=".env" %}

```bash
# CloudIO Services
API=true
SCHEDULER=true
WORKFLOW=true

# Redis
# REDIS_URL="rediss://:redis_password@localhost:6379/#insecure"
# REDIS_URL="redis://localhost:6379/"
REDIS_URL="1233b5c090a64...iI="

# Secrets
JWT_SECRET="589b75fc4506...Km3KN2p8A=="
ARGON_SECRET="ebc8b30629d84...LLChImG5034="

# CloudIO Server
DEFAULT_SUBDOMAIN=cloudio
IO_ENV=development

# CloudIO Server on HTTPS
# IO_ENV=production

# Log
LOG=io_common=debug,cloudio=trace,warn
BACKTRACE=full
LOG_OUTPUT=file # console
RUSTFLAGS="-Zinstrument-coverage"

# MySQL Database
DATABASE_URL="fd56327978faab...WdMuuwp54F78//ESzpfHefhlw=="
READONLY_DATABASE_URL="fd56327975fZK...BbFu3tLfHefhlw=="
DB_ACCEPT_INVALID_CERTS=true

# Local Kafka
BOOTSTRAP_SERVERS=localhost:9092

# Kafka on Confluent Cloud
#BOOTSTRAP_SERVERS=p...5.us-west-2.aws.confluent.cloud:9092
#SECURITY_PROTOCOL=SASL_SSL
#SASL_MECHANISMS=PLAIN
#SASL_USERNAME=SR4C...OP4DIA
#SASL_PASSWORD=j4StZg8Kg7m...B5Kgant9A

# On Mac (Optional)
# SSL_CA_LOCATION=/etc/ssl/cert.pem

# On Linux
# SSL_CA_LOCATION=probe

INSTANCE_ID=dev_node

# gmail
SMTP_HOST=smtp.gmail.com
SMTP_USERNAME=noreply@example.com
SMTP_PASSWORD=5e848...t1Go=
SMTP_FROM=noreply@example.com

HOST=127.0.0.1:3090

API_RATELIMIT=1000

ADMIN_PASSWORD="a8e79...1QWTA=="
ADMIN_EMAIL=admin...@example.com

ORG=cloudio
APP=cloudio
SECRET="super strong secret xyz##$^#%3245"

TMP_DIR=tmp

ALLOW_SQL_WORKSHEET_UPDATES=Y # N in UAT/PROD

```

{% endcode %}

### Running the Application

Change directory to where cloudio-platform.zip is extracted and run **`./start.sh`** from the command line. The platform installs all necessary database objects and creates the required Kafka topics at startup.

### Running it for the first time

When you start the server for the very first time, all the necessary tables are created and populated with initial seed data. The platform also creates the initial `admin` user with full privileges. You must set the following environment variables (only for the first-time startup):

| Environment Variable | Description                                                                                                 |
| -------------------- | ----------------------------------------------------------------------------------------------------------- |
| ADMIN\_EMAIL         | Admin user's email address. This needs to be a valid email, otherwise you cannot reset/change the password. |
| ADMIN\_PASSWORD      | Password to be used for the newly created `admin` user                                                      |
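In the `.env` file, that looks like the following (the email is a placeholder, and the password should be the encrypted value produced as described under Encrypting Environment Variables):

```bash
# First-time startup only -- seeds the initial admin user
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD="a8e79...1QWTA=="  # encrypted value, not the plaintext password
```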

### Encrypting Environment Variables

{% hint style="info" %}
AES 256 with IV is used for encryption/decryption
{% endhint %}

{% hint style="info" %}
You must set the environment variable `SECRET` to a super secure key. Once set, you must not change the value, as it may be used to encrypt your application data. We will provide a CLI option to change the SECRET, which will automate the process of re-encrypting the data with the new SECRET.
{% endhint %}

The following environment variables must be encrypted before starting the server. You can use the `encrypt` sub-command (see the example below) to encrypt all the required values.

> Sample command to encrypt the REDIS\_URL environment value

```bash
./cloudio encrypt --value "redis://localhost:6379/"

Output:
-------

        Done ✨ You can use any of the following values
---------------------------------------------------------------
1233b5c090a64afa8032524e0c1698a4ZaLIIMOa9m/1OFpH0aFV12...=
190f684d4f2240a19bb1b86e8b58f41cApLuUI1m6isQVz413JcfAc...87I=
6950494897d14e27bd4983ba44fde2bdP0CPTF+6HCbbx5DQZ...B3tg=
---------------------------------------------------------------

// .env
// REDIS_URL="1233b5c090a64afa8032524e0c16...XdXEj3Cq/jkiI="
```
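Note that the command prints three different values for the same input: encryption with a random IV produces a fresh ciphertext on every run, and any one of them decrypts back to the original value. The toy sketch below (plain Python with an HMAC-based keystream, explicitly *not* the platform's AES-256 implementation) illustrates that property:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Derive a pseudorandom stream from key + IV (toy stand-in for a real cipher)
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, iv + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV per call -> distinct ciphertexts
    ks = keystream(key, iv, len(plaintext))
    return iv + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]
    ks = keystream(key, iv, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

key = hashlib.sha256(b"super strong secret").digest()  # 256-bit key
c1 = encrypt(key, b"redis://localhost:6379/")
c2 = encrypt(key, b"redis://localhost:6379/")
assert c1 != c2  # same plaintext, different IV -> different ciphertext
assert decrypt(key, c1) == decrypt(key, c2) == b"redis://localhost:6379/"
```

This is why any of the printed values can be pasted into `.env`: each embeds its own IV and decrypts to the same plaintext.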

| Environment Variable to be encrypted |
| ------------------------------------ |
| JWT\_SECRET                          |
| ARGON\_SECRET                        |
| DATABASE\_URL                        |
| READONLY\_DATABASE\_URL              |
| REDIS\_URL                           |
| SMTP\_PASSWORD                       |
| SASL\_PASSWORD                       |
| ADMIN\_PASSWORD                      |
| DB\_PKCS12\_PASSWORD                 |

## High Volume Usage

If the server has to serve more than a million requests per hour, you must set up a scalable cluster for Kafka & Redis. The database must be scaled according to usage, and multiple platform instances must run in parallel to support the load.

## Backups

Make sure to set up regular backups for MySQL.
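A minimal approach, assuming a self-managed MySQL instance and hypothetical paths, is a nightly `mysqldump` from cron; managed database services usually offer automated snapshots instead:

```bash
# Hypothetical crontab entry: nightly logical backup at 02:00.
# --single-transaction takes a consistent snapshot without locking InnoDB tables.
# "cloudio" is an assumed schema name -- use your actual metadata database.
0 2 * * * mysqldump --single-transaction --routines cloudio | gzip > /backups/cloudio-$(date +\%F).sql.gz
```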

## Single Node Deployment

For a simple deployment, you can disable Kafka, Blob Storage, and Redis.

Use Cases for Single Node Deployment

* Development Instances
* Trial Instances
* Production Instances with less than 3000 users and when scaling/high availability is not necessary

#### Environment Variables Setup for Single Node Installation

```properties
# Setting WORKFLOW to false will disable multi-node workers
WORKFLOW=false

# Comment out REDIS_URL to disable Redis usage
# REDIS_URL

# Comment out BOOTSTRAP_SERVERS to disable Kafka usage
# BOOTSTRAP_SERVERS
```

## Multi-Node Deployment without Kafka

#### Environment Variables Setup for Multi-Node Installation without Kafka

```properties
# Keep the multi-node workflow service enabled
WORKFLOW=true

# Comment out REDIS_URL to disable Redis usage unless you need data
# caching in your applications logic
# REDIS_URL

# Comment out BOOTSTRAP_SERVERS to disable Kafka usage
# BOOTSTRAP_SERVERS

# Setting ENABLE_CLUSTER to true will allow multiple nodes
# to be up and running, forming a cluster
ENABLE_CLUSTER=true

# The leader node will listen on port 3030 on the private IP address
CLUSTER_HOST=:3030

# Set the hostname to "fargate" when deploying on AWS Fargate.
# The private IP is fetched from the Fargate metadata API during startup.
# CLUSTER_HOST=fargate:3030
```
