Installation

Using Docker

Install Docker Desktop: https://www.docker.com/products/docker-desktop

Allocate a minimum of 4 GB of memory to Docker.

Download cloudio_demo.zip and unzip it:

unzip cloudio_demo.zip
cd cloudio_demo
./start.sh

Launch CloudIO Apps using any modern browser at http://localhost

Sign in as demo / demo.

docker-compose.yml

version: "3.1"

services:
  traefik:
    image: traefik:2.3
    container_name: cloudio-traefik
    command:
      - --providers.file.directory=/storage/config
      - --providers.file.watch=true
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --providers.docker.constraints=Label(`traefik.constraint-label-stack`,`cloudio`)
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - cloudio-config:/storage/config:ro
      - cloudio-certificates:/storage/certificates:ro
    depends_on:
      - cloudio
    networks:
      - gateway
      - cloudio
  mysql:
    build: mysql
    container_name: cloudio-mysql
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 20000
        hard: 40000
    ports:
      - 3306:3306
    logging:
      driver: json-file
    networks:
      - cloudio
    volumes:
      - cloudio-mysql:/var/lib/mysql:rw
  zookeeper:
    image: "bitnami/zookeeper:3.5.7"
    container_name: cloudio-zookeeper
    restart: unless-stopped
    logging:
      driver: json-file
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_LOG_LEVEL=WARN
    networks:
      - cloudio
    volumes:
      - cloudio-zookeeper:/bitnami/zookeeper:rw
    depends_on:
      - mysql
  kafka:
    image: "bitnami/kafka:2.7.0"
    container_name: cloudio-kafka
    restart: unless-stopped
    logging:
      driver: json-file
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
    networks:
      - cloudio
    volumes:
      - cloudio-kafka:/bitnami/kafka:rw
  redis:
    image: redis:6
    container_name: cloudio-redis
    restart: unless-stopped
    networks:
      - cloudio
    volumes:
      - cloudio-redis:/data:rw
    depends_on:
      - kafka
  cloudio:
    build: cloudio
    container_name: cloudio
    restart: unless-stopped
    logging:
      driver: json-file
    networks:
      - cloudio
    labels:
      - traefik.enable=true
      - traefik.constraint-label-stack=cloudio
      - traefik.http.routers.cloudio.rule=PathPrefix(`/`)
      - traefik.http.routers.cloudio-secure.rule=PathPrefix(`/`)
      - traefik.http.routers.cloudio-secure.tls=true
    volumes:
      - cloudio-config:/storage/config:rw
      - cloudio-certificates:/storage/certificates:rw
    depends_on:
      - kafka
      - mysql
      - redis

networks:
  gateway:
  cloudio:

volumes:
  cloudio-mysql:
  cloudio-redis:
  cloudio-certificates:
  cloudio-config:
  cloudio-kafka:
  cloudio-zookeeper:
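
The start.sh script presumably wraps Docker Compose (an assumption based on the compose file above); if so, the stack can also be managed directly with the standard Compose commands:

cd cloudio_demo

# Build and start all services in the background
docker compose up -d --build

# Follow the platform logs
docker compose logs -f cloudio

# Stop the stack (add -v to also delete the named volumes and their data)
docker compose down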

Manual Installation

Install Apache Kafka or use Confluent Cloud

Install MySQL or use any Cloud Service for MySQL

Refer to the following link for MySQL installation https://dev.mysql.com/doc/mysql-installer/en/

Install Redis or use any Cloud Service for Redis

Refer to the following link to install Redis https://redis.io/topics/quickstart

You can use either a hardware or a software load balancer (e.g., NGINX, Apache) for load balancing, reverse proxying, and SSL termination. Refer to the following link to install NGINX: https://www.nginx.com/resources/wiki/start/topics/tutorials/install/

Configure your load balancer/reverse proxy to redirect the incoming HTTPS & WSS requests to the host(s)/port(s) on which the CloudIO Platform is configured to run.

Example NGINX configuration with a single instance of the platform running on localhost:3090:

   ...

    # The $connection_upgrade variable used below must be defined in the
    # http context; without this map block, nginx will fail to start.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream wsbackend {
        server localhost:3090;
    }
    
    server {
        listen 80 default_server;
        server_name subdomain.example.com;
        return 301 https://$server_name$request_uri;
    }
    
    server {
        listen       443 ssl http2 default_server;
        server_name  subdomain.example.com;

        ssl_certificate "/cloudio/ssl/bundle.crt";
        ssl_certificate_key "/cloudio/ssl/star_example_com.key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location /ws/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://wsbackend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }

        location / {
            proxy_pass http://localhost:3090;
            proxy_set_header   Host               $host;
            proxy_set_header   X-Real-IP          $remote_addr;
            proxy_set_header   X-Forwarded-Proto  $scheme;
            proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
            proxy_connect_timeout       600;
            proxy_send_timeout          600;
            proxy_read_timeout          600;
            send_timeout                600;
        }
    }
    
    ...

Make sure either to use a trusted network or to enable SSL/TLS for Redis, Kafka & MySQL.
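
For reference, TLS-enabled connection settings might look like the following in .env (illustrative values only: the rediss:// scheme and SECURITY_PROTOCOL=SASL_SSL appear in the sample .env later in this guide, while the host names and file paths here are hypothetical):

# TLS-enabled Redis (note the rediss:// scheme)
REDIS_URL="rediss://:password@redis.internal:6379/"

# TLS to Kafka (Confluent Cloud style)
SECURITY_PROTOCOL=SASL_SSL

# TLS to MySQL with a private CA certificate
DB_ROOT_CERT_PATH=/cloudio/ssl/rds-ca.pem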

Install CloudIO Platform

Once you obtain a license from CloudIO, follow the instructions to download cloudio-platform.zip, unzip it to a directory (e.g. /mnt/cloudio), and update the .env file with appropriate values for the following environment variables.
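
For example (using the /mnt/cloudio directory mentioned above):

mkdir -p /mnt/cloudio
unzip cloudio-platform.zip -d /mnt/cloudio
cd /mnt/cloudio

# Review and update the environment file before the first start
vi .env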

Security Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| API | Set it to true to enable the API service (UI backend) | Yes | true, false | |
| SCHEDULER | Set it to true to enable the scheduler service | Yes | true, false | |
| WORKFLOW | Set it to true to enable the multi-node workflow service | Yes | true, false | |
| IO_ENV | Deployment environment | Yes | development, test, production | development |
| JWT_SECRET | Used to encode/decode JWT tokens | Yes | secret | |
| ARGON_SECRET | Used for password hashing | Yes | secret | |
| ARGON_ITERATIONS | Number of iterations used when generating password hashes | No | | |
| ARGON_MEMORY_SIZE | Amount of memory used when generating password hashes | No | | |
| INSTANCE_ID | A unique name for this instance | Yes | cloudio_node1 | |
| HOST | The IP address and port combination on which the web server listens for incoming connections. You can run multiple instances on the same host with different ports and/or on multiple hosts depending on the load. A single instance can scale up to a million requests per 20 minutes. | Yes | 127.0.0.1:3090 | |
| API_RATELIMIT | Number of API calls allowed per IP address per hour | Yes | | 1000 |
| AUTH_RATELIMIT | Number of sign-in API calls allowed per IP address per hour | Yes | | 12 |
| STATUS_RATELIMIT | Number of status API calls allowed per IP address per hour | Yes | | 60 |
| TMP_DIR | Temp directory path | Yes | | tmp |
| ENCRYPTED_ARGS | Indicates whether the sensitive env variables (JWT_SECRET, ARGON_SECRET, DATABASE_URL, READONLY_DATABASE_URL, REDIS_URL, SMTP_PASSWORD, SASL_PASSWORD, ADMIN_PASSWORD, DB_PKCS12_PASSWORD) are stored encrypted | No | Y, N | N |
| ADMIN_EMAIL | On first-time installation, the platform creates an admin user with this email address | No | | |
| ADMIN_PASSWORD | On first-time installation, the platform creates an admin user with this password | No | | |
| JS_DIR | Directory for the JS libraries | Yes | | js |
| THUMBNAILS_DIR | Directory for storing thumbnails | Yes | | thumbnails |
| MULTI_TENANT | Enables the multi-tenant setup | Yes | | N |
| X_FRAME_OPTION | Set a value to add an X-Frame-Options header to server responses | | DENY, SAMEORIGIN, _ | DENY |
| MFA | Set it to EMAIL to enable MFA for sign-in (a code is emailed during sign-in) | Yes | EMAIL, OFF | OFF |
| MD_DB_TYPE | Metadata database type | Yes | mysql, postgres, oracle | mysql |
| LIVE_INTERVAL_SECONDS | Interval in seconds at which live updates are sent to clients | Yes | | 15 |
| SQL_TIMEOUT_SECONDS | Timeout in seconds for SQL queries | Yes | | 120 |
| MAX_CONCURRENT_REQUESTS | Maximum number of concurrent client requests allowed on the server | Yes | | 40 |
| SCHEDULER_SLEEP_SECONDS | Sleep time between scheduler runs, in seconds | Yes | | 60 |
| PUBLIC_TINY_URL | Set it to true to allow public users to create a shared URL | Yes | | |
| AGENT | Enables the Agent setup | No | | |

Database Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| DATABASE_URL | Database URL for connecting to the metadata database | Yes | | |
| READONLY_DATABASE_URL | Used for running ad hoc queries from the SQL Worksheet | Yes | Same as DATABASE_URL, with a read-only database user | |
| ROOT_DATABASE_URL | Same as DATABASE_URL, with a root database user | No | | Same as DATABASE_URL with a root database user |
| DB_ROOT_CERT_PATH | CA certificate path | No | | |
| DB_PKCS12_PATH | Private key in PKCS#12 format | No | | |
| DB_PKCS12_PASSWORD | Private key password, if any | No | | |
| DB_ACCEPT_INVALID_CERTS | Accept invalid (self-signed) certificates | No | true, false | |
| DB_SKIP_DOMAIN_VALIDATION | Skip domain validation | No | true, false | |
| ALLOW_SQL_WORKSHEET_UPDATES | Whether or not to allow ad hoc updates via the SQL Worksheet. Set this to N in Production & UAT instances. | Yes | Y, N | N |
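
This guide never shows a plaintext DATABASE_URL (the sample .env below stores it encrypted), so the exact URL format here is an assumption; a conventional MySQL connection URL has the following shape, but confirm the exact format with CloudIO:

# Hypothetical plaintext form, to be encrypted before use
# (see "Encrypting Environment Variables" below)
DATABASE_URL="mysql://cloudio_user:password@db.internal:3306/cloudio"
READONLY_DATABASE_URL="mysql://cloudio_ro:password@db.internal:3306/cloudio"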

Azure Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| AZURE_CLIENT_SECRET | Azure Key Vault account client secret | No | | |
| AZURE_CLIENT_ID | Azure Key Vault account client ID | No | | |
| AZURE_TENANT_ID | Azure Key Vault account tenant ID | No | | |
| AZURE_KEY_VAULT_URL | Azure Key Vault URL | No | | |
| AZURE_STORAGE_ACCOUNT | Azure storage account | No | | |
| AZURE_STORAGE_MASTER_KEY | Azure storage account master key | No | | |

Redis Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| REDIS_PREFIX | Prefix for keys stored in Redis | No | dev | dev |
| REDIS_URL | URL of the Redis server | No | | |

Kafka Environment Variables

| Variable Name | Description | Mandatory | Sample Values | Default Value |
| --- | --- | --- | --- | --- |
| KAFKA_PREFIX | Prefix applied to topic names before they are created in Kafka | No | | |
| BOOTSTRAP_SERVERS | Kafka bootstrap server URL. If using a cloud instance from Confluent, also provide appropriate values for the additional variables SECURITY_PROTOCOL, SASL_MECHANISMS, SASL_USERNAME & SASL_PASSWORD supplied by Confluent Cloud when creating a new Kafka cluster | No | Local Kafka: BOOTSTRAP_SERVERS=localhost:9092. Confluent Cloud: BOOTSTRAP_SERVERS=p...5.us-west-2.aws.confluent.cloud:9092, SECURITY_PROTOCOL=SASL_SSL, SASL_MECHANISMS=PLAIN, SASL_USERNAME=SR4C...OP4DIA, SASL_PASSWORD=j4StZg8Kg7m...B5Kgant9A | |
| SECURITY_PROTOCOL | See BOOTSTRAP_SERVERS | No | | |
| SASL_MECHANISMS | See BOOTSTRAP_SERVERS | No | | |
| SASL_USERNAME | See BOOTSTRAP_SERVERS | No | | |
| SASL_PASSWORD | See BOOTSTRAP_SERVERS | No | | |

Log Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| LOG_OUTPUT | Write logs to a file or to the console | Yes | console, file | file |
| LOG_SQLS | Set it to true to log SQL queries and their parameters | Yes | true, false | false |
| LOG_VIEWER_KEY | Key for accessing the logs without a session | Yes | viG_D6Zo6mtXDAt_3Z | |
| ENABLE_LOG_VIEWER_USING_KEY | Set it to true to allow access to the logs without a session, using a unique key | Yes | true, false | true |

Email Environment Variables

| Variable Name | Description | Mandatory | Applicable Values | Default Value |
| --- | --- | --- | --- | --- |
| EMAIL_PROVIDER | Which email provider to use | Yes | GMAIL, SMTP | |
| SMTP_HOST | SMTP host name to be used for sending email alerts | Yes | | |
| SMTP_PORT | SMTP port number | No | | |
| SMTP_USE_TLS | Enables SMTPS | No | true, false | |
| SMTP_USERNAME | SMTP username | Yes, if using SMTP | | |
| SMTP_PASSWORD | SMTP password | Yes, if using SMTP | | |
| GMAIL_CREDENTIAL_FILE_PATH | GMAIL service account credentials path | Yes, if using GMAIL | | |
| SMTP_FROM | From email address to be used for outbound emails | Yes | | |

Sample .env

.env
# CloudIO Services
API=true
SCHEDULER=true
WORKFLOW=true

# Redis
# REDIS_URL="rediss://:redis_password@localhost:6379/#insecure"
# REDIS_URL="redis://localhost:6379/"
REDIS_URL="1233b5c090a64...iI="

# Secrets
JWT_SECRET="589b75fc4506...Km3KN2p8A=="
ARGON_SECRET="ebc8b30629d84...LLChImG5034="

# CloudIO Server
DEFAULT_SUBDOMAIN=cloudio
IO_ENV=development

# CloudIO Server on HTTPS
# IO_ENV=production

# Log
LOG=io_common=debug,cloudio=trace,warn
BACKTRACE=full
LOG_OUTPUT=file # console
RUSTFLAGS="-Zinstrument-coverage"

# MySQL Database
DATABASE_URL="fd56327978faab...WdMuuwp54F78//ESzpfHefhlw=="
READONLY_DATABASE_URL="fd56327975fZK...BbFu3tLfHefhlw=="
DB_ACCEPT_INVALID_CERTS=true

# Local Kafka
BOOTSTRAP_SERVERS=localhost:9092

# Kafka on Confluent Cloud
#BOOTSTRAP_SERVERS=p...5.us-west-2.aws.confluent.cloud:9092
#SECURITY_PROTOCOL=SASL_SSL
#SASL_MECHANISMS=PLAIN
#SASL_USERNAME=SR4C...OP4DIA
#SASL_PASSWORD=j4StZg8Kg7m...B5Kgant9A

# On Mac (Optional)
# SSL_CA_LOCATION=/etc/ssl/cert.pem

# On Linux
# SSL_CA_LOCATION=probe

INSTANCE_ID=dev_node

# gmail
SMTP_HOST=smtp.gmail.com
SMTP_USERNAME=noreply@example.com
SMTP_PASSWORD=5e848...t1Go=
SMTP_FROM=noreply@example.com

HOST=127.0.0.1:3090

API_RATELIMIT=1000

ADMIN_PASSWORD="a8e79...1QWTA=="
ADMIN_EMAIL=admin...@example.com

ORG=cloudio
APP=cloudio
SECRET="super strong secret xyz##$^#%3245"

TMP_DIR=tmp

ALLOW_SQL_WORKSHEET_UPDATES=Y # N in UAT/PROD

Running the Application

Change directory to where cloudio-platform.zip was extracted and run ./start.sh from the command line. At startup, the platform installs all necessary database objects and creates the required Kafka topics as needed.
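
For example, assuming the platform was extracted to /mnt/cloudio as described above:

cd /mnt/cloudio
./start.sh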

Running it for the first time

When you start the server for the very first time, all the necessary tables are created and populated with initial seed data. The platform also creates the initial admin user with full privileges. You must set the following environment variables (only for the first-time startup):

| Environment Variable | Description |
| --- | --- |
| ADMIN_EMAIL | Admin user's email address. This must be a valid email address; otherwise you cannot reset/change the password. |
| ADMIN_PASSWORD | Password to be used for the newly created admin user |
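
In .env this might look as follows (the example.com address mirrors the sample .env below; note that ADMIN_PASSWORD is one of the variables that must be stored encrypted):

# First-time startup only
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD="a8e79...1QWTA=="  # encrypted value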

Encrypting Environment Variables

AES-256 with an IV is used for encryption/decryption.

You must set the environment variable SECRET to a super-secure key. Once set, you must not change the value, as it may be used to encrypt your application data. We will provide a CLI option to change the SECRET, which will automate re-encrypting the data with the new SECRET.

The following environment variables must be encrypted before starting the server. You can use the encrypt sub-command (see the example below) to encrypt all the required values.

Sample command to encrypt the REDIS_URL environment value

./cloudio encrypt --value "redis://localhost:6379/"

Output:
-------

        Done  You can use any of the following values
---------------------------------------------------------------
1233b5c090a64afa8032524e0c1698a4ZaLIIMOa9m/1OFpH0aFV12...=
190f684d4f2240a19bb1b86e8b58f41cApLuUI1m6isQVz413JcfAc...87I=
6950494897d14e27bd4983ba44fde2bdP0CPTF+6HCbbx5DQZ...B3tg=
---------------------------------------------------------------

# .env
# REDIS_URL="1233b5c090a64afa8032524e0c16...XdXEj3Cq/jkiI="

Environment variables to be encrypted:

  • JWT_SECRET

  • ARGON_SECRET

  • DATABASE_URL

  • READONLY_DATABASE_URL

  • REDIS_URL

  • SMTP_PASSWORD

  • SASL_PASSWORD

  • ADMIN_PASSWORD

  • DB_PKCS12_PASSWORD

High Volume Usage

If the server has to serve more than a million requests per hour, you must set up a scalable cluster for Kafka & Redis. The database must be scaled up according to the usage, and multiple platform instances must be run in parallel to support the load.
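
For example, two instances on the same host could use separate .env files (the ports and instance names below are illustrative), with each instance then added as a server line in the NGINX upstream block shown earlier:

# .env for instance 1
HOST=127.0.0.1:3090
INSTANCE_ID=cloudio_node1

# .env for instance 2
HOST=127.0.0.1:3091
INSTANCE_ID=cloudio_node2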

Backups

Make sure to set up regular backups for MySQL.
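
For example, a nightly logical backup with mysqldump might look like this (database name, credentials, and paths are hypothetical; adjust retention to your needs and consider snapshot-based backups for large databases):

# Run nightly at 02:30 via crontab -e:
# 30 2 * * * /usr/local/bin/backup-cloudio.sh

# backup-cloudio.sh
mysqldump --single-transaction --routines --triggers \
  -h db.internal -u backup_user -p'secret' cloudio \
  | gzip > /backups/cloudio-$(date +%F).sql.gz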

Single Node Deployment

For simple deployment, you can disable Kafka, Blob Storage & Redis.

Use Cases for Single Node Deployment

  • Development Instances

  • Trial Instances

  • Production Instances with less than 3000 users and when scaling/high availability is not necessary

Environment Variables Setup for Single Node Installation

# Setting WORKFLOW to false will disable multi-node workers
WORKFLOW=false

# Comment out REDIS_URL to disable Redis usage
# REDIS_URL

# Comment out BOOTSTRAP_SERVERS to disable Kafka usage
# BOOTSTRAP_SERVERS

Multi-Node Deployment without Kafka

Environment Variables Setup for Multi-Node Installation without Kafka

# Setting WORKFLOW to true enables the multi-node workflow workers
WORKFLOW=true

# Comment out REDIS_URL to disable Redis usage unless you need data
# caching in your application logic
# REDIS_URL

# Comment out BOOTSTRAP_SERVERS to disable Kafka usage
# BOOTSTRAP_SERVERS

# Setting ENABLE_CLUSTER to true will allow multiple nodes
# to be up and running, forming a cluster
ENABLE_CLUSTER=true

# The leader node will listen to port 3030 on the private IP address
CLUSTER_HOST=:3030

# Alternatively, set the host to "fargate" when deploying on AWS Fargate;
# the private IP is fetched using the Fargate metadata API during startup
# CLUSTER_HOST=fargate:3030
