Configurations

Configurations of LambdaStack

Different configuration options for LambdaStack. There is a minimal and a full option when calling lambdastack init. ALL yml configurations are managed inside the /schema directory at the LambdaStack GitHub repo. They are broken down into cloud providers and on-premise (any).

The following are the breakdowns:

  • Any - On-premise, or cloud providers not listed below (Oracle, for example)
  • AWS - Amazon Web Services
  • Azure - Microsoft Azure
  • Common - Anything common among the providers
  • GCP - Google Cloud Platform

Each is covered in the sections below.
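
For example, a minimal configuration is generated by default, and a fuller variant can be requested at init time (a sketch; the --full flag name is an assumption, verify with lambdastack init --help):

lambdastack init -p any -n my-cluster        # minimal configuration
lambdastack init -p any -n my-cluster --full # full configuration (assumed flag)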

1 - Any

Minimal and Full configuration options

1.1 - Minimal

Minimal configuration options

Minimal Configuration

This option is mainly for on-premise solutions. However, it can be used in a generic way for other clouds like Oracle Cloud, etc.

There are a number of changes to be made so that it can fit your on-premise or non-standard cloud provider environment.

  1. prefix: staging - (optional) Change this to something else like production if you like
  2. name: operations - (optional) Change the user name to anything you like
  3. key_path: lambdastack-operations - (optional) Change the SSH key pair name if you like
  4. hostname: ... - (optional/required) Keep the default hostname for each host below or change it to match your environment
  5. ip: ... - (optional/required) Keep the default 192.168.100.x IP addresses for each host below or change them to match your environment
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: any
name: "default"
build_path: "build/path" # This gets dynamically built
specification:
  name: lambdastack
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  admin_user:
    name: operations # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  components:
    repository:
      count: 1
      machines:
        - default-repository
    kubernetes_master:
      count: 1
      machines:
        - default-k8s-master1
    kubernetes_node:
      count: 2
      machines:
        - default-k8s-node1
        - default-k8s-node2
    logging:
      count: 1
      machines:
        - default-logging
    monitoring:
      count: 1
      machines:
        - default-monitoring
    kafka:
      count: 2
      machines:
        - default-kafka1
        - default-kafka2
    postgresql:
      count: 1
      machines:
        - default-postgresql
    load_balancer:
      count: 1
      machines:
        - default-loadbalancer
    rabbitmq:
      count: 1
      machines:
        - default-rabbitmq
---
kind: infrastructure/machine
provider: any
name: default-repository
specification:
  hostname: repository # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.112 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-k8s-master1
specification:
  hostname: master1 # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.101 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-k8s-node1
specification:
  hostname: node1 # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.102 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-k8s-node2
specification:
  hostname: node2 # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.103 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-logging
specification:
  hostname: elk # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.105 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-monitoring
specification:
  hostname: prometheus # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.106 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-kafka1
specification:
  hostname: kafka1 # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.107 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-kafka2
specification:
  hostname: kafka2 # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.108 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-postgresql
specification:
  hostname: postgresql # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.109 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-loadbalancer
specification:
  hostname: loadbalancer # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.110 # YOUR-MACHINE-IP
---
kind: infrastructure/machine
provider: any
name: default-rabbitmq
specification:
  hostname: rabbitmq # YOUR-MACHINE-HOSTNAME
  ip: 192.168.100.111 # YOUR-MACHINE-IP
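
If you do not yet have the SSH key pair referenced by key_path above, you can generate one with standard OpenSSH tooling; a minimal sketch (the file name just has to match key_path):

ssh-keygen -t rsa -b 4096 -C "operations" -f lambdastack-operations
# Creates lambdastack-operations (private key) and lambdastack-operations.pub (public key)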

1.2 - Full

Full configuration options

Full Configuration

This option is mainly for on-premise solutions. However, it can be used in a generic way for other clouds like Oracle Cloud, etc.

There are a number of changes to be made so that it can fit your on-premise or non-standard cloud provider environment.

  1. prefix: staging - (optional) Change this to something else like production if you like
  2. name: operations - (optional) Change the user name to anything you like
  3. key_path: lambdastack-operations - (optional) Change the SSH key pair name if you like
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: any
name: "default"
build_path: "build/path" # This gets dynamically built
specification:
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  name: lambdastack
  admin_user:
    name: operations # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  components:
    kubernetes_master:
      count: 1
      machine: kubernetes-master-machine
      configuration: default
    kubernetes_node:
      count: 2
      machine: kubernetes-node-machine
      configuration: default
    logging:
      count: 1
      machine: logging-machine
      configuration: default
    monitoring:
      count: 1
      machine: monitoring-machine
      configuration: default
    kafka:
      count: 2
      machine: kafka-machine
      configuration: default
    postgresql:
      count: 0
      machine: postgresql-machine
      configuration: default
    load_balancer:
      count: 1
      machine: load-balancer-machine
      configuration: default
    rabbitmq:
      count: 0
      machine: rabbitmq-machine
      configuration: default
    ignite:
      count: 0
      machine: ignite-machine
      configuration: default
    opendistro_for_elasticsearch:
      count: 0
      machine: logging-machine
      configuration: default
    repository:
      count: 1
      machine: repository-machine
      configuration: default
    single_machine:
      count: 0
      machine: single-machine
      configuration: default
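
Once the component counts and machine definitions match your environment, the cluster is built the same way as for the cloud providers below:

lambdastack apply -f build/<whatever you name your cluster>/<whatever you name your cluster>.yml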

2 - AWS

Minimal and Full configuration options

2.1 - Minimal

Minimal configuration

As of v1.3.4, LambdaStack requires you to change the following attributes in either the minimal or full configuration YAML. Beginning in v2.0, you will have the option to pass in these parameters to override whatever is present in the yaml file. v2.0 is in active development.

All but the last two options are defaults. The last two, the AWS key and AWS secret, are required.

Attributes to change for the minimal configuration after you run `lambdastack init -p aws -n <your-cluster-name>`:

  • prefix: staging - staging is the default prefix. You can use whatever you like (e.g., production). This value can help group your AWS clusters in the same region for easier maintenance
  • name: ubuntu - This attribute is under specification.admin_user.name. For Ubuntu on AWS the default user name is ubuntu; for RedHat we default to operations
  • key_path: lambdastack-operations - This is the default SSH key file name, i.e., the name of your SSH public and private key pair. In this example, the private key file would be named lambdastack-operations and the public key lambdastack-operations.pub
  • use_public_ips: True - This is the default public IP value. Important: by default this attribute tells AWS to build your cluster with a public IP interface. We also build a private (non-public) interface using private IPs for internal communication between the nodes. Enabling public IPs simply gives you easy access to the cluster so you can SSH into it using the name attribute value from above. This is NOT RECOMMENDED, certainly not in production nor as a general rule. You should have a VPN or Direct Connect with a route to the cluster instead
  • region: us-east-1 - This is the default region setting. This means that your cluster and storage will be created in AWS' us-east-1 region. Important - if you want to change this value in any way, you should use the full configuration and then change ALL references to region in the yaml file. If you do not, you may end up with services in regions you don't want, which may create problems for you
  • key: XXXXXXXXXX - This is very important. This, along with secret, is used to access your AWS account programmatically, which LambdaStack needs. It lives at specification.cloud.credentials.key and can be found under Security Credentials in your AWS account menu
  • secret: XXXXXXXXXXXXX - This is very important. This, along with key, is used to access your AWS account programmatically, which LambdaStack needs. It lives at specification.cloud.credentials.secret and can be found under Security Credentials in your AWS account menu. It can only be seen at the time you create it, so use the download option and save the file somewhere safe. DO NOT save the file in your source code repo!

Now that you have made your changes to the .yml, run lambdastack apply -f build/<whatever you name your cluster>/<whatever you name your cluster>.yml to begin building the LambdaStack cluster. The apply option generates a final manifest.yml that is then used by Terraform, Ansible, and the LambdaStack Python code. The manifest.yml combines the values from below plus ALL of the yaml configuration files for each service.
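
Putting the whole workflow together, a sketch using my-cluster as a placeholder name:

# 1. Generate the minimal AWS configuration
lambdastack init -p aws -n my-cluster

# 2. Edit build/my-cluster/my-cluster.yml (admin user, key, region, AWS key/secret)

# 3. Build the cluster; this also generates the final manifest.yml
lambdastack apply -f build/my-cluster/my-cluster.yml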

---
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: aws
name: "default"
build_path: "build/path"  # This gets dynamically built
specification:
  name: lambdastack
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  admin_user:
    name: ubuntu # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  cloud:
    k8s_as_cloud_service: False
    use_public_ips: True # When not using public IPs you have to provide connectivity via private IPs (VPN)
    region: us-east-1
    credentials:
      key: XXXXXXXXXX # AWS Access Key
      secret: XXXXXXXXX # AWS Secret Access Key
    default_os_image: default
  components:
    repository:
      count: 1
    kubernetes_master:
      count: 1
    kubernetes_node:
      count: 2
    logging:
      count: 1
    monitoring:
      count: 1
    kafka:
      count: 2
    postgresql:
      count: 1
    load_balancer:
      count: 1
    rabbitmq:
      count: 1
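
With use_public_ips: True you can then SSH straight to a node using the admin user and key from the configuration above (the IP is whatever AWS assigned):

ssh -i lambdastack-operations ubuntu@<node-public-ip>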

2.2 - Full

Full configuration

As of v1.3.4, LambdaStack requires you to change the following attributes in either the minimal or full configuration YAML. Beginning in v2.0, you will have the option to pass in these parameters to override whatever is present in the yaml file. v2.0 is in active development.

All but the last two options are defaults. The last two, the AWS key and AWS secret, are required.

Attributes to change for the full configuration after you run `lambdastack init -p aws -n <your-cluster-name>`:

  • prefix: staging - staging is the default prefix. You can use whatever you like (e.g., production). This value can help group your AWS clusters in the same region for easier maintenance
  • name: ubuntu - This attribute is under specification.admin_user.name. For Ubuntu on AWS the default user name is ubuntu; for RedHat we default to operations
  • key_path: lambdastack-operations - This is the default SSH key file name, i.e., the name of your SSH public and private key pair. In this example, the private key file would be named lambdastack-operations and the public key lambdastack-operations.pub
  • use_public_ips: True - This is the default public IP value. Important: by default this attribute tells AWS to build your cluster with a public IP interface. We also build a private (non-public) interface using private IPs for internal communication between the nodes. Enabling public IPs simply gives you easy access to the cluster so you can SSH into it using the name attribute value from above. This is NOT RECOMMENDED, certainly not in production nor as a general rule. You should have a VPN or Direct Connect with a route to the cluster instead
  • region: us-east-1 - This is the default region setting. This means that your cluster and storage will be created in AWS' us-east-1 region. Important - if you want to change this value in any way, you should use the full configuration and then change ALL references to region in the yaml file. If you do not, you may end up with services in regions you don't want, which may create problems for you
  • key: XXXXXXXXXX - This is very important. This, along with secret, is used to access your AWS account programmatically, which LambdaStack needs. It lives at specification.cloud.credentials.key and can be found under Security Credentials in your AWS account menu
  • secret: XXXXXXXXXXXXX - This is very important. This, along with key, is used to access your AWS account programmatically, which LambdaStack needs. It lives at specification.cloud.credentials.secret and can be found under Security Credentials in your AWS account menu. It can only be seen at the time you create it, so use the download option and save the file somewhere safe. DO NOT save the file in your source code repo!

Now that you have made your changes to the .yml, run lambdastack apply -f build/<whatever you name your cluster>/<whatever you name your cluster>.yml to begin building the LambdaStack cluster. The apply option generates a final manifest.yml that is then used by Terraform, Ansible, and the LambdaStack Python code. The manifest.yml combines the values from below plus ALL of the yaml configuration files for each service.
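
For example, moving the cluster to us-west-2 means changing the region attribute and every availability_zone under each component's subnets; a hedged fragment of the edits (zone names are assumptions, pick zones valid for your account):

  cloud:
    region: us-west-2
  components:
    kubernetes_master:
      subnets:
        - availability_zone: us-west-2a
          address_pool: 10.1.1.0/24
        - availability_zone: us-west-2b
          address_pool: 10.1.2.0/24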

---
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: aws
name: "default"
build_path: "build/path"  # This gets dynamically built
specification:
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  name: lambdastack
  admin_user:
    name: ubuntu # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  cloud:
    k8s_as_cloud_service: False
    vnet_address_pool: 10.1.0.0/20
    region: us-east-1
    use_public_ips: True # When not using public IPs you have to provide connectivity via private IPs (VPN)
    credentials:
      key: XXXXXXXXXXX # AWS Access Key
      secret: XXXXXXXXXXXX # AWS Secret Access Key
    network:
      use_network_security_groups: True
    default_os_image: default
  components:
    kubernetes_master:
      count: 1
      machine: kubernetes-master-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.1.0/24
        - availability_zone: us-east-1b
          address_pool: 10.1.2.0/24
    kubernetes_node:
      count: 2
      machine: kubernetes-node-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.1.0/24
        - availability_zone: us-east-1b
          address_pool: 10.1.2.0/24
    logging:
      count: 1
      machine: logging-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.3.0/24
    monitoring:
      count: 1
      machine: monitoring-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.4.0/24
    kafka:
      count: 2
      machine: kafka-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.5.0/24
    postgresql:
      count: 0
      machine: postgresql-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.6.0/24
    load_balancer:
      count: 1
      machine: load-balancer-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.7.0/24
    rabbitmq:
      count: 0
      machine: rabbitmq-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.8.0/24
    ignite:
      count: 0
      machine: ignite-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.9.0/24
    opendistro_for_elasticsearch:
      count: 0
      machine: logging-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.10.0/24
    repository:
      count: 1
      machine: repository-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.11.0/24
    single_machine:
      count: 0
      machine: single-machine
      configuration: default
      subnets:
        - availability_zone: us-east-1a
          address_pool: 10.1.1.0/24
        - availability_zone: us-east-1b
          address_pool: 10.1.2.0/24
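
Note on addressing: vnet_address_pool: 10.1.0.0/20 covers 10.1.0.0 through 10.1.15.255 (4096 addresses), so every component address_pool must be a /24 carved out of that range. The defaults above use 10.1.1.0/24 through 10.1.11.0/24, leaving 10.1.12.0/24 through 10.1.15.0/24 free for additional components.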

3 - Azure

Minimal and Full configuration options

3.1 - Minimal

Minimal configuration

As of v1.3.4, LambdaStack requires you to change the following attributes in either the minimal or full configuration YAML. Beginning in v2.0, you will have the option to pass in these parameters to override whatever is present in the yaml file. v2.0 is in active development.

All options are defaults. Azure will automatically require you to log in to the Azure portal before you can run LambdaStack for Azure, unless you use the service principal option in the full configuration. With the full configuration you can specify your subscription name and service principal so that the process is machine-to-machine and requires no interaction.
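
If you have the Azure CLI installed you can authenticate up front with a standard interactive login (an assumption about your tooling; LambdaStack will otherwise prompt you to log in when it runs):

az login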

Attributes to change for the minimal configuration after you run `lambdastack init -p azure -n <your-cluster-name>`:

  • prefix: staging - staging is the default prefix. You can use whatever you like (e.g., production). This value can help group your Azure clusters in the same region for easier maintenance
  • name: operations - This attribute is under specification.admin_user.name; operations is the default
  • key_path: lambdastack-operations - This is the default SSH key file name, i.e., the name of your SSH public and private key pair. In this example, the private key file would be named lambdastack-operations and the public key lambdastack-operations.pub
  • use_public_ips: True - This is the default public IP value. Important: by default this attribute tells Azure to build your cluster with a public IP interface. We also build a private (non-public) interface using private IPs for internal communication between the nodes. Enabling public IPs simply gives you easy access to the cluster so you can SSH into it using the name attribute value from above. This is NOT RECOMMENDED, certainly not in production nor as a general rule. You should have a VPN or direct connect with a route to the cluster instead
  • region: East US - This is the default region setting. This means that your cluster and storage will be created in Azure's East US region. Important - if you want to change this value in any way, you should use the full configuration and then change ALL references to region in the yaml file. If you do not, you may end up with services in regions you don't want, which may create problems for you

Now that you have made your changes to the .yml, run lambdastack apply -f build/<whatever you name your cluster>/<whatever you name your cluster>.yml to begin building the LambdaStack cluster. The apply option generates a final manifest.yml that is then used by Terraform, Ansible, and the LambdaStack Python code. The manifest.yml combines the values from below plus ALL of the yaml configuration files for each service.

---
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: azure
name: "default"
build_path: "build/path"  # This gets dynamically built
specification:
  name: lambdastack
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  admin_user:
    name: operations # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  cloud:
    k8s_as_cloud_service: False
    use_public_ips: True # When not using public IPs you have to provide connectivity via private IPs (VPN)
    region: East US
    default_os_image: default
  components:
    repository:
      count: 1
    kubernetes_master:
      count: 1
    kubernetes_node:
      count: 2
    logging:
      count: 1
    monitoring:
      count: 1
    kafka:
      count: 2
    postgresql:
      count: 1
    load_balancer:
      count: 1
    rabbitmq:
      count: 1

3.2 - Full

Full configuration

As of v1.3.4, LambdaStack requires you to change the following attributes in either the minimal or full configuration YAML. Beginning in v2.0, you will have the option to pass in these parameters to override whatever is present in the yaml file. v2.0 is in active development.

All but the last two options are defaults. The last two, subscription_name and use_service_principal, are required.

Attributes to change for the full configuration after you run `lambdastack init -p azure -n <your-cluster-name>`:

  • prefix: staging - staging is the default prefix. You can use whatever you like (e.g., production). This value can help group your Azure clusters in the same region for easier maintenance
  • name: operations - This attribute is under specification.admin_user.name
  • key_path: lambdastack-operations - This is the default SSH key file name, i.e., the name of your SSH public and private key pair. In this example, the private key file would be named lambdastack-operations and the public key lambdastack-operations.pub
  • use_public_ips: True - This is the default public IP value. Important: by default this attribute tells Azure to build your cluster with a public IP interface. We also build a private (non-public) interface using private IPs for internal communication between the nodes. Enabling public IPs simply gives you easy access to the cluster so you can SSH into it using the name attribute value from above. This is NOT RECOMMENDED, certainly not in production nor as a general rule. You should have a VPN or direct connect with a route to the cluster instead
  • region: East US - This is the default region setting. This means that your cluster and storage will be created in Azure's East US region. Important - if you want to change this value in any way, you should use the full configuration and then change ALL references to region in the yaml file. If you do not, you may end up with services in regions you don't want, which may create problems for you
  • subscription_name: <whatever the sub name is> - This is very important. This, along with use_service_principal, is used to access your Azure subscription programmatically, which LambdaStack needs. It lives at specification.cloud.subscription_name and can be found under your Azure account settings
  • use_service_principal: True - This is very important. This, along with subscription_name, is used to access your Azure subscription programmatically, which LambdaStack needs. It lives at specification.cloud.use_service_principal.

Now that you have made your changes to the .yml, run lambdastack apply -f build/<whatever you name your cluster>/<whatever you name your cluster>.yml to begin building the LambdaStack cluster. The apply option generates a final manifest.yml that is then used by Terraform, Ansible, and the LambdaStack Python code. The manifest.yml combines the values from below plus ALL of the yaml configuration files for each service.
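
To run machine-to-machine as described above, set the two attributes in the cloud section of the configuration below (the subscription name is a placeholder):

  cloud:
    subscription_name: <YOUR-SUB-NAME>
    use_service_principal: True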

---
kind: lambdastack-cluster
title: "LambdaStack Cluster Config"
provider: azure
name: "default"
build_path: "build/path"  # This gets dynamically built
specification:
  prefix: staging  # Can be anything you want that helps quickly identify the cluster
  name: lambdastack
  admin_user:
    name: operations # YOUR-ADMIN-USERNAME
    key_path: lambdastack-operations # YOUR-SSH-KEY-FILE-NAME
    path: "/shared/build/<name of cluster>/keys/ssh/lambdastack-operations" # Will get dynamically created
  cloud:
    k8s_as_cloud_service: False
    subscription_name: <YOUR-SUB-NAME>
    vnet_address_pool: 10.1.0.0/20
    use_public_ips: True # When not using public IPs you have to provide connectivity via private IPs (VPN)
    use_service_principal: False
    region: East US
    network:
      use_network_security_groups: True
    default_os_image: default
  components:
    kubernetes_master:
      count: 1
      machine: kubernetes-master-machine
      configuration: default
      subnets:
        - address_pool: 10.1.1.0/24
    kubernetes_node:
      count: 2
      machine: kubernetes-node-machine
      configuration: default
      subnets:
        - address_pool: 10.1.1.0/24
    logging:
      count: 1
      machine: logging-machine
      configuration: default
      subnets:
        - address_pool: 10.1.3.0/24
    monitoring:
      count: 1
      machine: monitoring-machine
      configuration: default
      subnets:
        - address_pool: 10.1.4.0/24
    kafka:
      count: 2
      machine: kafka-machine
      configuration: default
      subnets:
        - address_pool: 10.1.5.0/24
    postgresql:
      count: 0
      machine: postgresql-machine
      configuration: default
      subnets:
        - address_pool: 10.1.6.0/24
    load_balancer:
      count: 1
      machine: load-balancer-machine
      configuration: default
      subnets:
        - address_pool: 10.1.7.0/24
    rabbitmq:
      count: 0
      machine: rabbitmq-machine
      configuration: default
      subnets:
        - address_pool: 10.1.8.0/24
    ignite:
      count: 0
      machine: ignite-machine
      configuration: default
      subnets:
        - address_pool: 10.1.9.0/24
    opendistro_for_elasticsearch:
      count: 0
      machine: logging-machine
      configuration: default
      subnets:
        - address_pool: 10.1.10.0/24
    repository:
      count: 1
      machine: repository-machine
      configuration: default
      subnets:
        - address_pool: 10.1.11.0/24
    single_machine:
      count: 0
      machine: single-machine
      configuration: default
      subnets:
        - address_pool: 10.1.1.0/24

4 - Common

Minimal and Full configuration options

ALL yaml configuration options listed in this section are for the internal use of LambdaStack only

4.1 - Cluster Applications

Applications that run in the LambdaStack cluster. These are not applications that application developers create.

The content of the applications.yml file is listed for reference only

---
kind: configuration/applications
title: "Kubernetes Applications Config"
name: default
specification:
  applications:

## --- ignite ---

  - name: ignite-stateless
    enabled: false
    image_path: "lambdastack/ignite:2.9.1" # it will be part of the image path: {{local_repository}}/{{image_path}}
    use_local_image_registry: true
    namespace: ignite
    service:
      rest_nodeport: 32300
      sql_nodeport: 32301
      thinclients_nodeport: 32302
    replicas: 1
    enabled_plugins:
    - ignite-kubernetes # required to work on K8s
    - ignite-rest-http

# Abstract these configs to separate default files and add
# the ability to add custom application roles.

## --- rabbitmq ---

  - name: rabbitmq
    enabled: false
    image_path: rabbitmq:3.8.9
    use_local_image_registry: true
    #image_pull_secret_name: regcred # optional
    service:
      name: rabbitmq-cluster
      port: 30672
      management_port: 31672
      replicas: 2
      namespace: queue
    rabbitmq:
      #amqp_port: 5672 #optional - default 5672
      plugins: # optional list of RabbitMQ plugins
        - rabbitmq_management
        - rabbitmq_management_agent
      policies: # optional list of RabbitMQ policies
        - name: ha-policy2
          pattern: ".*"
          definitions:
            ha-mode: all
      custom_configurations: #optional list of RabbitMQ configurations (new format -> https://www.rabbitmq.com/configure.html)
        - name: vm_memory_high_watermark.relative
          value: 0.5
      cluster:
        #is_clustered: true #redundant for an in-Kubernetes installation, it will always be clustered
        #cookie: "cookieSetFromDataYaml" #optional - default value will be random generated string

## --- auth-service ---

  - name: auth-service # requires PostgreSQL to be installed in cluster
    enabled: false
    image_path: lambdastack/keycloak:14.0.0
    use_local_image_registry: true
    #image_pull_secret_name: regcred
    service:
      name: as-testauthdb
      port: 30104
      replicas: 2
      namespace: namespace-for-auth
      admin_user: auth-service-username
      admin_password: PASSWORD_TO_CHANGE
    database:
      name: auth-database-name
      #port: "5432" # leave it when default
      user: auth-db-user
      password: PASSWORD_TO_CHANGE

## --- pgpool ---

  - name: pgpool # this service requires PostgreSQL to be installed in cluster
    enabled: false
    image:
      path: bitnami/pgpool:4.2.4
      debug: false # ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
    use_local_image_registry: true
    namespace: postgres-pool
    service:
      name: pgpool
      port: 5432
    replicas: 3
    pod_spec:
      affinity:
        podAntiAffinity: # prefer to schedule replicas on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - pgpool
                topologyKey: kubernetes.io/hostname
      nodeSelector: {}
      tolerations: {}
    resources: # Adjust to your configuration, see https://www.pgpool.net/docs/41/en/html/resource-requiremente.html
      limits:
        # cpu: 900m # Set according to your env
        memory: 310Mi
      requests:
        cpu: 250m # Adjust to your env, increase if possible
        memory: 310Mi
    pgpool:
      # https://github.com/bitnami/bitnami-docker-pgpool#configuration + https://github.com/bitnami/bitnami-docker-pgpool#environment-variables
      env:
        PGPOOL_BACKEND_NODES: autoconfigured # you can use custom value like '0:pg-node-1:5432,1:pg-node-2:5432'
        # Postgres users
        PGPOOL_POSTGRES_USERNAME: ls_pgpool_postgres_admin # with SUPERUSER role to use connection slots reserved for superusers for K8s liveness probes, also for user synchronization
        PGPOOL_SR_CHECK_USER: ls_pgpool_sr_check # with pg_monitor role, for streaming replication checks and health checks
        # ---
        PGPOOL_ADMIN_USERNAME: ls_pgpool_admin # Pgpool administrator (local pcp user)
        PGPOOL_ENABLE_LOAD_BALANCING: true # set to 'false' if there is no replication
        PGPOOL_MAX_POOL: 4
        PGPOOL_CHILD_LIFE_TIME: 300 # Default value, read before you change: https://www.pgpool.net/docs/42/en/html/runtime-config-connection-pooling.html
        PGPOOL_POSTGRES_PASSWORD_FILE: /opt/bitnami/pgpool/secrets/pgpool_postgres_password
        PGPOOL_SR_CHECK_PASSWORD_FILE: /opt/bitnami/pgpool/secrets/pgpool_sr_check_password
        PGPOOL_ADMIN_PASSWORD_FILE: /opt/bitnami/pgpool/secrets/pgpool_admin_password
      secrets:
        pgpool_postgres_password: PASSWORD_TO_CHANGE
        pgpool_sr_check_password: PASSWORD_TO_CHANGE
        pgpool_admin_password: PASSWORD_TO_CHANGE
      # https://www.pgpool.net/docs/41/en/html/runtime-config.html
      pgpool_conf_content_to_append: |
        #------------------------------------------------------------------------------
        # CUSTOM SETTINGS (appended by LambdaStack to override defaults)
        #------------------------------------------------------------------------------
        # num_init_children = 32
        connection_life_time = 900
        reserved_connections = 1        
      # https://www.pgpool.net/docs/42/en/html/runtime-config-connection.html
      pool_hba_conf: autoconfigured

## --- pgbouncer ---

  - name: pgbouncer
    enabled: false
    image_path: bitnami/pgbouncer:1.16.0
    init_image_path: bitnami/pgpool:4.2.4
    use_local_image_registry: true
    namespace: postgres-pool
    service:
      name: pgbouncer
      port: 5432
    replicas: 2
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 128Mi
    pgbouncer:
      env:
        DB_HOST: pgpool.postgres-pool.svc.cluster.local
        DB_LISTEN_PORT: 5432
        MAX_CLIENT_CONN: 150
        DEFAULT_POOL_SIZE: 25
        RESERVE_POOL_SIZE: 25
        POOL_MODE: session
        CLIENT_IDLE_TIMEOUT: 0

## --- istio ---

  - name: istio
    enabled: false
    use_local_image_registry: true
    namespaces:
      operator: istio-operator # namespace where operator will be deployed
      watched: # list of namespaces which operator will watch
        - istio-system
      istio: istio-system # namespace where istio control plane will be deployed
    istio_spec:
      profile: default # Check all possibilities https://istio.io/latest/docs/setup/additional-setup/config-profiles/
      name: istiocontrolplane
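
All applications ship disabled. To deploy one, set enabled: true and replace every PASSWORD_TO_CHANGE before running apply; for example, a fragment enabling rabbitmq:

  - name: rabbitmq
    enabled: true
    image_path: rabbitmq:3.8.9
    use_local_image_registry: true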

4.2 - Backups

Cluster backup options

The content of the backup.yml file is listed for reference only

---
kind: configuration/backup
title: Backup Config
name: default
specification:
  components:
    load_balancer:
      enabled: false
    logging:
      enabled: false
    monitoring:
      enabled: false
    postgresql:
      enabled: false
    rabbitmq:
      enabled: false
# Kubernetes recovery is not supported by LambdaStack at this point.
# You may create backup by enabling this below, but recovery should be done manually according to Kubernetes documentation.
    kubernetes:
      enabled: false
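
For example, to back up PostgreSQL and the monitoring stack, flip the corresponding flags:

  components:
    postgresql:
      enabled: true
    monitoring:
      enabled: true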

4.3 - ElasticSearch-Curator

ElasticSearch options

The content of the elasticsearch-curator.yml file is listed for reference only

---
kind: configuration/elasticsearch-curator
title: Elasticsearch Curator
name: default
specification:
  delete_indices_cron_jobs:
    - description: Delete indices older than N days
      cron:
        hour: 1
        minute: 0
        enabled: true
      filter_list:
        - filtertype: age
          unit_count: 30
          unit: days
          source: creation_date
          direction: older
    - description: Delete the oldest indices to not consume more than N gigabytes of disk space
      cron:
        minute: 30
        enabled: true
      filter_list:
        - filtertype: space
          disk_space: 20
          use_age: True
          source: creation_date

4.4 - Feature-Mapping

Feature mapping options

The content of the feature-mapping.yml file is listed for reference only

---
kind: configuration/feature-mapping
title: "Feature mapping to roles"
name: default
specification:
  available_roles:
    - name: repository
      enabled: true
    - name: firewall
      enabled: true
    - name: image-registry
      enabled: true
    - name: kubernetes-master
      enabled: true
    - name: kubernetes-node
      enabled: true
    - name: helm
      enabled: true
    - name: logging
      enabled: true
    - name: opendistro-for-elasticsearch
      enabled: true
    - name: elasticsearch-curator
      enabled: true
    - name: kibana
      enabled: true
    - name: filebeat
      enabled: true
    - name: logstash
      enabled: true
    - name: prometheus
      enabled: true
    - name: grafana
      enabled: true
    - name: node-exporter
      enabled: true
    - name: jmx-exporter
      enabled: true
    - name: zookeeper
      enabled: true
    - name: kafka
      enabled: true
    - name: rabbitmq
      enabled: true
    - name: kafka-exporter
      enabled: true
    - name: postgresql
      enabled: true
    - name: postgres-exporter
      enabled: true
    - name: haproxy
      enabled: true
    - name: haproxy-exporter
      enabled: true
    - name: vault
      enabled: true
    - name: applications
      enabled: true
    - name: ignite
      enabled: true

  roles_mapping:
    kafka:
      - zookeeper
      - jmx-exporter
      - kafka
      - kafka-exporter
      - node-exporter
      - filebeat
      - firewall
    rabbitmq:
      - rabbitmq
      - node-exporter
      - filebeat
      - firewall
    logging:
      - logging
      - kibana
      - node-exporter
      - filebeat
      - firewall
    load_balancer:
      - haproxy
      - haproxy-exporter
      - node-exporter
      - filebeat
      - firewall
    monitoring:
      - prometheus
      - grafana
      - node-exporter
      - filebeat
      - firewall
    postgresql:
      - postgresql
      - postgres-exporter
      - node-exporter
      - filebeat
      - firewall
    custom:
      - repository
      - image-registry
      - kubernetes-master
      - node-exporter
      - filebeat
      - rabbitmq
      - postgresql
      - prometheus
      - grafana
      - node-exporter
      - logging
      - firewall
    single_machine:
      - repository
      - image-registry
      - kubernetes-master
      - helm
      - applications
      - rabbitmq
      - postgresql
      - firewall
      - vault
    kubernetes_master:
      - kubernetes-master
      - helm
      - applications
      - node-exporter
      - filebeat
      - firewall
      - vault
    kubernetes_node:
      - kubernetes-node
      - node-exporter
      - filebeat
      - firewall
    ignite:
      - ignite
      - node-exporter
      - filebeat
      - firewall
    opendistro_for_elasticsearch:
      - opendistro-for-elasticsearch
      - node-exporter
      - filebeat
      - firewall
    repository:
      - repository
      - image-registry
      - firewall
      - filebeat
      - node-exporter
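
Provisioning a component installs every role mapped to it under roles_mapping. To skip a role cluster-wide, disable it under available_roles instead; for example, to skip the OS-level firewall role on all hosts (a sketch; verify the implications for your environment):

  available_roles:
    - name: firewall
      enabled: false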

4.5 - Filebeat

Filebeat options

The content of the filebeat.yml file is listed for reference only

---
kind: configuration/filebeat
title: Filebeat
name: default
specification:
  kibana:
    dashboards:
      index: filebeat-*
      enabled: auto
  disable_helm_chart: false
  postgresql_input:
    multiline:
      pattern: >-
                '^\d{4}-\d{2}-\d{2} '
      negate: true
      match: after
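
The multiline settings fold PostgreSQL continuation lines into one event: a line starts a new event only if it matches ^\d{4}-\d{2}-\d{2} (a leading date); with negate: true and match: after, non-matching lines are appended to the preceding event. Illustrative input:

2021-10-01 12:00:01 UTC [1234]: ERROR:  syntax error at or near "selec"
        at character 1

The indented second line does not match the pattern, so it is merged into the ERROR event above it.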

4.6 - Firewall

Firewall options

The content of the firewall.yml file is listed for reference only

---
kind: configuration/firewall
title: OS level firewall
name: default
specification:
  Debian:                         # On RHEL on Azure firewalld is already in VM image (pre-installed)
    install_firewalld: false      # false to avoid random issue "No route to host" even when firewalld service is disabled
  firewall_service_enabled: false # for all inventory hosts
  apply_configuration: false      # if false only service state is managed
  managed_zone_name: LambdaStack
  rules:
    applications:
      enabled: false
      ports:
        - 30104/tcp       # auth-service
        - 30672/tcp       # rabbitmq-amqp
        - 31672/tcp       # rabbitmq-http (management)
        - 32300-32302/tcp # ignite
    common: # for all inventory hosts
      enabled: true
      ports:
        - 22/tcp
    grafana:
      enabled: true
      ports:
        - 3000/tcp
    haproxy:
      enabled: true
      ports:
        - 443/tcp
        - 9000/tcp # stats
    haproxy_exporter:
      enabled: true
      ports:
        - 9101/tcp
    ignite:
      enabled: true
      ports:
        - 8080/tcp  # REST API
        - 10800/tcp # thin client connection
        - 11211/tcp # JDBC
        - 47100/tcp # local communication
        - 47500/tcp # local discovery
    image_registry:
      enabled: true
      ports:
        - 5000/tcp
    jmx_exporter:
      enabled: true
      ports:
        - 7071/tcp # Kafka
        - 7072/tcp # ZooKeeper
    kafka:
      enabled: true
      ports:
        - 9092/tcp
      # - 9093/tcp # encrypted communication (if TLS/SSL is enabled)
    kafka_exporter:
      enabled: true
      ports:
        - 9308/tcp
    kibana:
      enabled: true
      ports:
        - 5601/tcp
    kubernetes_master:
      enabled: true
      ports:
        - 6443/tcp      # API server
        - 2379-2380/tcp # etcd server client API
        - 8472/udp      # flannel (vxlan backend)
        - 10250/tcp     # Kubelet API
        - 10251/tcp     # kube-scheduler
        - 10252/tcp     # kube-controller-manager
    kubernetes_node:
      enabled: true
      ports:
        - 8472/udp  # flannel (vxlan backend)
        - 10250/tcp # Kubelet API
    logging:
      enabled: true
      ports:
        - 9200/tcp
    node_exporter:
      enabled: true
      ports:
        - 9100/tcp
    opendistro_for_elasticsearch:
      enabled: true
      ports:
        - 9200/tcp
    postgresql:
      enabled: true
      ports:
        - 5432/tcp
        - 6432/tcp #PGBouncer
    prometheus:
      enabled: true
      ports:
        - 9090/tcp
        - 9093/tcp # Alertmanager
    rabbitmq:
      enabled: true
      ports:
        - 4369/tcp    # peer discovery service used by RabbitMQ nodes and CLI tools
        # - 5671/tcp  # encrypted communication (if TLS/SSL is enabled)
        - 5672/tcp    # AMQP
        # - 15672/tcp # HTTP API clients, management UI and rabbitmqadmin (only if the management plugin is enabled)
        - 25672/tcp   # distribution server
    zookeeper:
      enabled: true
      ports:
        - 2181/tcp # client connections
        - 2888/tcp # peers communication
        - 3888/tcp # leader election
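
If you enable any of the Kubernetes applications from section 4.1, open their NodePorts here as well; for example:

    applications:
      enabled: true
      ports:
        - 30104/tcp       # auth-service
        - 30672/tcp       # rabbitmq-amqp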

4.7 - Grafana

Grafana options

The content of the grafana.yml file is listed for reference only

---
kind: configuration/grafana
title: "Grafana"
name: default
specification:
  grafana_logs_dir: "/var/log/grafana"
  grafana_data_dir: "/var/lib/grafana"
  grafana_address: "0.0.0.0"
  grafana_port: 3000

  # Should the provisioning be kept synced. If true, previous provisioned objects will be removed if not referenced anymore.
  grafana_provisioning_synced: false
  # External Grafana address. Variable maps to "root_url" in grafana server section
  grafana_url: "https://0.0.0.0:3000"

  # Additional options for grafana "server" section
  # This section WILL omit options for: http_addr, http_port, domain, and root_url, as those settings are set by variables listed before
  grafana_server:
    protocol: https
    enforce_domain: false
    socket: ""
    cert_key: "/etc/grafana/ssl/grafana_key.key"
    cert_file: "/etc/grafana/ssl/grafana_cert.pem"
    enable_gzip: false
    static_root_path: public
    router_logging: false

  # Variables correspond to ones in grafana.ini configuration file
  # Security
  grafana_security:
    admin_user: admin
    admin_password: PASSWORD_TO_CHANGE
  #  secret_key: ""
  #  login_remember_days: 7
  #  cookie_username: grafana_user
  #  cookie_remember_name: grafana_remember
  #  disable_gravatar: true
  #  data_source_proxy_whitelist:

  # Database setup
  grafana_database:
    type: sqlite3
  #  host: 127.0.0.1:3306
  #  name: grafana
  #  user: root
  #  password: ""
  #  url: ""
  #  ssl_mode: disable
  #  path: grafana.db
  #  max_idle_conn: 2
  #  max_open_conn: ""
  #  log_queries: ""

  # Default dashboards predefined and available in online & offline mode
  grafana_external_dashboards: []
    #   # Kubernetes cluster monitoring (via Prometheus)
    # - dashboard_id: '315'
    #   datasource: 'Prometheus'
    #   # Node Exporter Server Metrics
    # - dashboard_id: '405'
    #   datasource: 'Prometheus'
    #   # Postgres Overview
    # - dashboard_id: '455'
    #   datasource: 'Prometheus'
    #   # Node Exporter Full
    # - dashboard_id: '1860'
    #   datasource: 'Prometheus'
    #   # RabbitMQ Monitoring
    # - dashboard_id: '4279'
    #   datasource: 'Prometheus'
    #   # Kubernetes Cluster
    # - dashboard_id: '7249'
    #   datasource: 'Prometheus'
    #   # Kafka Exporter Overview
    # - dashboard_id: '7589'
    #   datasource: 'Prometheus'
    #   # PostgreSQL Database
    # - dashboard_id: '9628'
    #   datasource: 'Prometheus'
    #   # RabbitMQ cluster monitoring (via Prometheus)
    # - dashboard_id: '10991'
    #   datasource: 'Prometheus'
    #   # 1 Node Exporter for Prometheus Dashboard EN v20201010
    # - dashboard_id: '11074'
    #   datasource: 'Prometheus'

  # Get dashboards from https://grafana.com/dashboards. Only for online mode
  grafana_online_dashboards: []
    # - dashboard_id: '4271'
    #   revision_id: '3'
    #   datasource: 'Prometheus'
    # - dashboard_id: '1860'
    #   revision_id: '4'
    #   datasource: 'Prometheus'
    # - dashboard_id: '358'
    #   revision_id: '1'
    #   datasource: 'Prometheus'

  # Deployer local folder with dashboard definitions in .json format
  grafana_dashboards_dir: "dashboards" # Replace with your dashboard directory if you have dashboards to include

  # User management and registration
  grafana_welcome_email_on_sign_up: false
  grafana_users:
    allow_sign_up: false
    # allow_org_create: true
    # auto_assign_org: true
    auto_assign_org_role: Viewer
    # login_hint: "email or username"
    default_theme: dark
    # external_manage_link_url: ""
    # external_manage_link_name: ""
    # external_manage_info: ""

  # grafana authentication mechanisms
  grafana_auth: {}
  #  disable_login_form: false
  #  disable_signout_menu: false
  #  anonymous:
  #    org_name: "Main Organization"
  #    org_role: Viewer
  #  ldap:
  #    config_file: "/etc/grafana/ldap.toml"
  #    allow_sign_up: false
  #  basic:
  #    enabled: true

  grafana_ldap: {}
  #  verbose_logging: false
  #  servers:
  #    host: 127.0.0.1
  #    port: 389 # 636 for SSL
  #    use_ssl: false
  #    start_tls: false
  #    ssl_skip_verify: false
  #    root_ca_cert: /path/to/certificate.crt
  #    bind_dn: "cn=admin,dc=grafana,dc=org"
  #    bind_password: grafana
  #    search_filter: "(cn=%s)" # "(sAMAccountName=%s)" on AD
  #    search_base_dns:
  #      - "dc=grafana,dc=org"
  #    group_search_filter: "(&(objectClass=posixGroup)(memberUid=%s))"
  #    group_search_base_dns:
  #      - "ou=groups,dc=grafana,dc=org"
  #    attributes:
  #      name: givenName
  #      surname: sn
  #      username: sAMAccountName
  #      member_of: memberOf
  #      email: mail
  #  group_mappings:
  #    - name: Main Org.
  #      id: 1
  #      groups:
  #        - group_dn: "cn=admins,ou=groups,dc=grafana,dc=org"
  #          org_role: Admin
  #        - group_dn: "cn=editors,ou=groups,dc=grafana,dc=org"
  #          org_role: Editor
  #        - group_dn: "*"
  #          org_role: Viewer
  #    - name: Alternative Org
  #      id: 2
  #      groups:
  #        - group_dn: "cn=alternative_admins,ou=groups,dc=grafana,dc=org"
  #          org_role: Admin

  grafana_session: {}
  #  provider: file
  #  provider_config: "sessions"

  grafana_analytics: {}
  #  reporting_enabled: true
  #  google_analytics_ua_id: ""

  # Set this for mail notifications
  grafana_smtp: {}
  #  host:
  #  user:
  #  password:
  #  from_address:

  # Enable grafana alerting mechanism
  grafana_alerting:
    execute_alerts: true
  #  error_or_timeout: 'alerting'
  #  nodata_or_nullvalues: 'no_data'
  #  concurrent_render_limit: 5

  # Grafana logging configuration
  grafana_log: {}
  # mode: 'console file'
  # level: info

  # Internal grafana metrics system
  grafana_metrics: {}
  #  interval_seconds: 10
  #  graphite:
  #    address: "localhost:2003"
  #    prefix: "prod.grafana.%(instance_name)s"

  # Distributed tracing options
  grafana_tracing: {}
  #  address: "localhost:6831"
  #  always_included_tag: "tag1:value1,tag2:value2"
  #  sampler_type: const
  #  sampler_param: 1

  grafana_snapshots: {}
  #  external_enabled: true
  #  external_snapshot_url: "https://snapshots-origin.raintank.io"
  #  external_snapshot_name: "Publish to snapshot.raintank.io"
  #  snapshot_remove_expired: true
  #  snapshot_TTL_days: 90

  # External image store
  grafana_image_storage: {}
  #  provider: gcs
  #  key_file:
  #  bucket:
  #  path:


  #######
  # Plugins from https://grafana.com/plugins
  grafana_plugins: []
  #  - raintank-worldping-app
  #


  # Alert notification channels to configure
  grafana_alert_notifications: []
  #   - name: "Email Alert"
  #     type: "email"
  #     isDefault: true
  #     settings:
  #       addresses: "example@example.com"

  # Datasources to configure
  grafana_datasources:
    - name: "Prometheus"
      type: "prometheus"
      access: "proxy"
      url: "http://localhost:9090"
      basicAuth: false
      basicAuthUser: ""
      basicAuthPassword: ""
      isDefault: true
      editable: true
      jsonData:
        tlsAuth: false
        tlsAuthWithCACert: false
        tlsSkipVerify: true

  # API keys to configure
  grafana_api_keys: []
  #  - name: "admin"
  #    role: "Admin"
  #  - name: "viewer"
  #    role: "Viewer"
  #  - name: "editor"
  #    role: "Editor"

  # Logging options to configure
  grafana_logging:
    log_rotate: true
    daily_rotate: true
    max_days: 7
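
For example, to have LambdaStack pull the Node Exporter Full dashboard from grafana.com in online mode, populate grafana_online_dashboards using the commented IDs above:

  grafana_online_dashboards:
    - dashboard_id: '1860'
      revision_id: '4'
      datasource: 'Prometheus'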

4.8 - HAProxy-Exporter

HAProxy-Exporter options

The content of the haproxy-exporter.yml file is listed for reference only

---
kind: configuration/haproxy-exporter
title: "HAProxy exporter"
name: default
specification:
  description: "Service that runs HAProxy Exporter"

  web_listen_port: "9101"

  config_for_prometheus: # configuration that will be written to Prometheus to allow scraping metrics from this exporter
    exporter_listen_port: "9101"
    prometheus_config_dir: /etc/prometheus
    file_sd_labels:
      - label: "job"
        value: "haproxy-exporter"

4.9 - HAProxy

HAProxy options

The content of the haproxy.yml file is listed for reference only

---
kind: configuration/haproxy
title: "HAProxy"
name: default
specification:
  logs_max_days: 60
  self_signed_certificate_name: self-signed-fullchain.pem
  self_signed_private_key_name: self-signed-privkey.pem
  self_signed_concatenated_cert_name: self-signed-test.tld.pem
  haproxy_log_path: "/var/log/haproxy.log"

  stats:
    enable: true
    bind_address: 127.0.0.1:9000
    uri: "/haproxy?stats"
    user: operations
    password: your-haproxy-stats-pwd
  frontend:
    - name: https_front
      port: 443
      https: true
      backend:
      - http_back1
  backend: # example backend config below
    - name: http_back1
      server_groups:
      - kubernetes_node
      # servers: # Definition of the servers that host the application.
      # - name: "node1"
      #   address: "lambdastack-vm1.domain.com"
      port: 30104
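
If the kubernetes_node server group does not fit your topology, you can pin explicit backend servers instead, following the commented fields above (host names are placeholders):

  backend:
    - name: http_back1
      servers:
        - name: "node1"
          address: "lambdastack-vm1.domain.com"
        - name: "node2"
          address: "lambdastack-vm2.domain.com"
      port: 30104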

4.10 - Helm-Charts

Helm-Charts options

The content of the helm-charts.yml file is listed for reference only

---
kind: configuration/helm-charts
title: "Helm charts"
name: default
specification:
  apache_lsrepo_path: "/var/www/html/lsrepo"

4.11 - Helm

Helm options - Internal for LambdaStack

The content of the helm.yml file is listed for reference only

---
kind: configuration/helm
title: "Helm"
name: default
specification:
  apache_lsrepo_path: "/var/www/html/lsrepo"

4.12 - Apache Ignite

Ignite caching options

The content of the ignite.yml file is listed for reference only

---
kind: configuration/ignite
title: "Apache Ignite stateful installation"
name: default
specification:
  enabled_plugins:
  - ignite-rest-http
  config: |
    <?xml version="1.0" encoding="UTF-8"?>

    <!--
      Licensed to the Apache Software Foundation (ASF) under one or more
      contributor license agreements.  See the NOTICE file distributed with
      this work for additional information regarding copyright ownership.
      The ASF licenses this file to You under the Apache License, Version 2.0
      (the "License"); you may not use this file except in compliance with
      the License.  You may obtain a copy of the License at
          http://www.apache.org/licenses/LICENSE-2.0
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License.
    -->

    <beans xmlns="http://www.springframework.org/schema/beans"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
          http://www.springframework.org/schema/beans
          http://www.springframework.org/schema/beans/spring-beans.xsd">

        <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
          <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
              <!-- Set the page size to 4 KB -->
              <property name="pageSize" value="#{4 * 1024}"/>
              <!--
              Sets a path to the root directory where data and indexes are
              to be persisted. It's assumed the directory is on a separated SSD.
              -->
              <property name="storagePath" value="/var/lib/ignite/persistence"/>

              <!--
                  Sets a path to the directory where WAL is stored.
                  It's assumed the directory is on a separated HDD.
              -->
              <property name="walPath" value="/wal"/>

              <!--
                  Sets a path to the directory where WAL archive is stored.
                  The directory is on the same HDD as the WAL.
              -->
              <property name="walArchivePath" value="/wal/archive"/>
            </bean>
          </property>

          <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
              <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                  <property name="addresses">
    IP_LIST_PLACEHOLDER
                  </property>
                </bean>
              </property>
              <property name="localPort" value="47500"/>
              <!-- Limit number of potentially used ports from 100 to 10 -->
              <property name="localPortRange" value="10"/>
            </bean>
          </property>

          <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
              <property name="localPort" value="47100"/>
              <!-- Limit number of potentially used ports from 100 to 10 -->
              <property name="localPortRange" value="10"/>
            </bean>
          </property>

          <property name="clientConnectorConfiguration">
            <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
              <property name="port" value="10800"/>
              <!-- Limit number of potentially used ports from 100 to 10 -->
              <property name="portRange" value="10"/>
            </bean>
          </property>

          <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
              <property name="port" value="11211"/>
              <!-- Limit number of potentially used ports from 100 to 10 -->
              <property name="portRange" value="10"/>
            </bean>
          </property>

        </bean>
    </beans>    
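
The IP_LIST_PLACEHOLDER above is substituted at deployment time with the discovery address list. For illustration only (the addresses below are hypothetical), the rendered property typically looks like the following, with each entry covering the configured local port range:

                  <list>
                    <value>192.168.100.101:47500..47509</value>
                    <value>192.168.100.102:47500..47509</value>
                  </list>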

4.13 - Image-Registry

Image Registry options

The content of the image-registry.yml file is listed for reference only

---
kind: configuration/image-registry
title: "LambdaStack image registry"
name: default
specification:
  description: "Local registry with Docker images"
  registry_image:
    name: "registry:2"
    file_name: registry-2.tar
  images_to_load:
    x86_64:
      generic:
        - name: "lambdastack/keycloak:14.0.0"
          file_name: keycloak-14.0.0.tar
        - name: "rabbitmq:3.8.9"
          file_name: rabbitmq-3.8.9.tar
        - name: "lambdastack/ignite:2.9.1"
          file_name: ignite-2.9.1.tar
        - name: "kubernetesui/dashboard:v2.3.1"
          file_name: dashboard-v2.3.1.tar
        - name: "kubernetesui/metrics-scraper:v1.0.7"
          file_name: metrics-scraper-v1.0.7.tar
        - name: "vault:1.7.0"
          file_name: vault-1.7.0.tar
        - name: "hashicorp/vault-k8s:0.10.0"
          file_name: vault-k8s-0.10.0.tar
        - name: "istio/proxyv2:1.8.1"
          file_name: proxyv2-1.8.1.tar
        - name: "istio/pilot:1.8.1"
          file_name: pilot-1.8.1.tar
        - name: "istio/operator:1.8.1"
          file_name: operator-1.8.1.tar
        # postgres
        - name: bitnami/pgpool:4.2.4
          file_name: pgpool-4.2.4.tar
        - name: bitnami/pgbouncer:1.16.0
          file_name: pgbouncer-1.16.0.tar
      current:
        - name: "haproxy:2.2.2-alpine"
          file_name: haproxy-2.2.2-alpine.tar
        # K8s v1.20.12 - LambdaStack 1.3 (transitional version)
        # https://github.com/kubernetes/kubernetes/blob/v1.20.12/build/dependencies.yaml
        - name: "k8s.gcr.io/kube-apiserver:v1.20.12"
          file_name: kube-apiserver-v1.20.12.tar
        - name: "k8s.gcr.io/kube-controller-manager:v1.20.12"
          file_name: kube-controller-manager-v1.20.12.tar
        - name: "k8s.gcr.io/kube-proxy:v1.20.12"
          file_name: kube-proxy-v1.20.12.tar
        - name: "k8s.gcr.io/kube-scheduler:v1.20.12"
          file_name: kube-scheduler-v1.20.12.tar
        - name: "k8s.gcr.io/coredns:1.7.0"
          file_name: coredns-1.7.0.tar
        - name: "k8s.gcr.io/etcd:3.4.13-0"
          file_name: etcd-3.4.13-0.tar
        - name: "k8s.gcr.io/pause:3.2"
          file_name: pause-3.2.tar
        # flannel
        - name: "quay.io/coreos/flannel:v0.14.0-amd64"
          file_name: flannel-v0.14.0-amd64.tar
        - name: "quay.io/coreos/flannel:v0.14.0"
          file_name: flannel-v0.14.0.tar
        # canal & calico
        - name: "calico/cni:v3.20.2"
          file_name: cni-v3.20.2.tar
        - name: "calico/kube-controllers:v3.20.2"
          file_name: kube-controllers-v3.20.2.tar
        - name: "calico/node:v3.20.2"
          file_name: node-v3.20.2.tar
        - name: "calico/pod2daemon-flexvol:v3.20.2"
          file_name: pod2daemon-flexvol-v3.20.2.tar
      legacy:
        # K8s v1.18.6 - LambdaStack 0.7.1 - 1.2
        - name: "k8s.gcr.io/kube-apiserver:v1.18.6"
          file_name: kube-apiserver-v1.18.6.tar
        - name: "k8s.gcr.io/kube-controller-manager:v1.18.6"
          file_name: kube-controller-manager-v1.18.6.tar
        - name: "k8s.gcr.io/kube-proxy:v1.18.6"
          file_name: kube-proxy-v1.18.6.tar
        - name: "k8s.gcr.io/kube-scheduler:v1.18.6"
          file_name: kube-scheduler-v1.18.6.tar
        - name: "k8s.gcr.io/coredns:1.6.7"
          file_name: coredns-1.6.7.tar
        - name: "k8s.gcr.io/etcd:3.4.3-0"
          file_name: etcd-3.4.3-0.tar
        # flannel
        - name: "quay.io/coreos/flannel:v0.12.0-amd64"
          file_name: flannel-v0.12.0-amd64.tar
        - name: "quay.io/coreos/flannel:v0.12.0"
          file_name: flannel-v0.12.0.tar
        # canal & calico
        - name: "calico/cni:v3.15.0"
          file_name: cni-v3.15.0.tar
        - name: "calico/kube-controllers:v3.15.0"
          file_name: kube-controllers-v3.15.0.tar
        - name: "calico/node:v3.15.0"
          file_name: node-v3.15.0.tar
        - name: "calico/pod2daemon-flexvol:v3.15.0"
          file_name: pod2daemon-flexvol-v3.15.0.tar
    aarch64:
      generic:
        - name: "lambdastack/keycloak:14.0.0"
          file_name: keycloak-14.0.0.tar
        - name: "rabbitmq:3.8.9"
          file_name: rabbitmq-3.8.9.tar
        - name: "lambdastack/ignite:2.9.1"
          file_name: ignite-2.9.1.tar
        - name: "kubernetesui/dashboard:v2.3.1"
          file_name: dashboard-v2.3.1.tar
        - name: "kubernetesui/metrics-scraper:v1.0.7"
          file_name: metrics-scraper-v1.0.7.tar
        - name: "vault:1.7.0"
          file_name: vault-1.7.0.tar
        - name: "hashicorp/vault-k8s:0.10.0"
          file_name: vault-k8s-0.10.0.tar
      current:
        - name: "haproxy:2.2.2-alpine"
          file_name: haproxy-2.2.2-alpine.tar
        # K8s v1.20.12 - LambdaStack 1.3 (transitional version)
        - name: "k8s.gcr.io/kube-apiserver:v1.20.12"
          file_name: kube-apiserver-v1.20.12.tar
        - name: "k8s.gcr.io/kube-controller-manager:v1.20.12"
          file_name: kube-controller-manager-v1.20.12.tar
        - name: "k8s.gcr.io/kube-proxy:v1.20.12"
          file_name: kube-proxy-v1.20.12.tar
        - name: "k8s.gcr.io/kube-scheduler:v1.20.12"
          file_name: kube-scheduler-v1.20.12.tar
        - name: "k8s.gcr.io/coredns:1.7.0"
          file_name: coredns-1.7.0.tar
        - name: "k8s.gcr.io/etcd:3.4.13-0"
          file_name: etcd-3.4.13-0.tar
        - name: "k8s.gcr.io/pause:3.2"
          file_name: pause-3.2.tar
        # flannel
        - name: "quay.io/coreos/flannel:v0.14.0-arm64"
          file_name: flannel-v0.14.0-arm64.tar
        - name: "quay.io/coreos/flannel:v0.14.0"
          file_name: flannel-v0.14.0.tar
        # canal & calico
        - name: "calico/cni:v3.20.2"
          file_name: cni-v3.20.2.tar
        - name: "calico/kube-controllers:v3.20.2"
          file_name: kube-controllers-v3.20.2.tar
        - name: "calico/node:v3.20.2"
          file_name: node-v3.20.2.tar
        - name: "calico/pod2daemon-flexvol:v3.20.2"
          file_name: pod2daemon-flexvol-v3.20.2.tar
      legacy:
        # K8s v1.18.6 - LambdaStack 0.7.1 - 1.2
        - name: "k8s.gcr.io/kube-apiserver:v1.18.6"
          file_name: kube-apiserver-v1.18.6.tar
        - name: "k8s.gcr.io/kube-controller-manager:v1.18.6"
          file_name: kube-controller-manager-v1.18.6.tar
        - name: "k8s.gcr.io/kube-proxy:v1.18.6"
          file_name: kube-proxy-v1.18.6.tar
        - name: "k8s.gcr.io/kube-scheduler:v1.18.6"
          file_name: kube-scheduler-v1.18.6.tar
        - name: "k8s.gcr.io/coredns:1.6.7"
          file_name: coredns-1.6.7.tar
        - name: "k8s.gcr.io/etcd:3.4.3-0"
          file_name: etcd-3.4.3-0.tar
        # flannel
        - name: "quay.io/coreos/flannel:v0.12.0-arm64"
          file_name: flannel-v0.12.0-arm64.tar
        - name: "quay.io/coreos/flannel:v0.12.0"
          file_name: flannel-v0.12.0.tar
        # canal & calico
        - name: "calico/cni:v3.15.0"
          file_name: cni-v3.15.0.tar
        - name: "calico/kube-controllers:v3.15.0"
          file_name: kube-controllers-v3.15.0.tar
        - name: "calico/node:v3.15.0"
          file_name: node-v3.15.0.tar
        - name: "calico/pod2daemon-flexvol:v3.15.0"
          file_name: pod2daemon-flexvol-v3.15.0.tar
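
To mirror an additional image through this local registry, append an entry under the matching architecture and group. A minimal sketch (the image name and tarball below are hypothetical):

  images_to_load:
    x86_64:
      generic:
        - name: "myorg/myapp:1.0.0"
          file_name: myapp-1.0.0.tar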

4.14 - JMX-Exporter

JMX-Exporter options

The content of the jmx-exporter.yml file is listed for reference only

---
kind: configuration/jmx-exporter
title: "JMX exporter"
name: default
specification:
  file_name: "jmx_prometheus_javaagent-0.14.0.jar"
  jmx_path: /opt/jmx-exporter/jmx_prometheus_javaagent.jar # Changing this also requires changing the same variable in the Kafka and Zookeeper configs. # TODO: have Zookeeper and Kafka use this variable
  jmx_jars_directory: /opt/jmx-exporter/jars
  jmx_exporter_user: jmx-exporter
  jmx_exporter_group: jmx-exporter
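
For context (this line is not part of the file), an agent installed via this configuration is typically attached to a JVM with a -javaagent flag combining jmx_path, a listen port, and a per-service config file; the Kafka values listed further below would yield something like:

KAFKA_OPTS="-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent.jar=7071:/opt/kafka/config/jmx-kafka.config.yml"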

4.15 - Kafka-Exporter

Kafka-exporter options

The content of the kafka-exporter.yml file is listed for reference only

---
kind: configuration/kafka-exporter
title: "Kafka exporter"
name: default
specification:
  description: "Service that runs Kafka Exporter"

  web_listen_port: "9308"
  config_flags:
    - "--web.listen-address=:9308" # Address to listen on for web interface and telemetry.
    - '--web.telemetry-path=/metrics' # Path under which to expose metrics.
    - '--log.level=info'
    - '--topic.filter=.*' # Regex that determines which topics to collect.
    - '--group.filter=.*' # Regex that determines which consumer groups to collect.
    #- '--tls.insecure-skip-tls-verify' # If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
    - '--kafka.version=2.0.0'
    #- '--sasl.enabled' # Connect using SASL/PLAIN.
    #- '--sasl.handshake' # Only set this to false if using a non-Kafka SASL proxy
    #- '--sasl.username=""'
    #- '--sasl.password=""'
    #- '--tls.enabled' # Connect using TLS
    #- '--tls.ca-file=""' # The optional certificate authority file for TLS client authentication
    #- '--tls.cert-file=""' # The optional certificate file for client authentication
    #- '--tls.key-file=""' # The optional key file for client authentication

  config_for_prometheus: # configuration that will be written to Prometheus to allow scraping metrics from this exporter
    exporter_listen_port: "9308"
    prometheus_config_dir: /etc/prometheus
    file_sd_labels:
      - label: "job"
        value: "kafka-exporter"

4.16 - Kafka

Kafka options

The content of the kafka.yml file is listed for reference only

---
kind: configuration/kafka
title: "Kafka"
name: default
specification:
  kafka_var:
    enabled: True
    admin: kafka
    admin_pwd: LambdaStack
    # javax_net_debug: all # uncomment to activate debugging, other debug options: https://colinpaice.blog/2020/04/05/using-java-djavax-net-debug-to-examine-data-flows-including-tls/
    security:
      ssl:
        enabled: False
        port: 9093
        server:
          local_cert_download_path: kafka-certs
          keystore_location: /var/private/ssl/kafka.server.keystore.jks
          truststore_location: /var/private/ssl/kafka.server.truststore.jks
          cert_validity: 365
          passwords:
            keystore: PasswordToChange
            truststore: PasswordToChange
            key: PasswordToChange
        endpoint_identification_algorithm: HTTPS
        client_auth: required
      encrypt_at_rest: False
      inter_broker_protocol: PLAINTEXT
      authorization:
        enabled: False
        authorizer_class_name: kafka.security.auth.SimpleAclAuthorizer
        allow_everyone_if_no_acl_found: False
        super_users:
          - tester01
          - tester02
        users:
          - name: test_user
            topic: test_topic
      authentication:
        enabled: False
        authentication_method: certificates
        sasl_mechanism_inter_broker_protocol:
        sasl_enabled_mechanisms: PLAIN
    sha: "b28e81705e30528f1abb6766e22dfe9dae50b1e1e93330c880928ff7a08e6b38ee71cbfc96ec14369b2dfd24293938702cab422173c8e01955a9d1746ae43f98"
    port: 9092
    min_insync_replicas: 1 # Minimum number of in-sync replicas required to acknowledge a write
    default_replication_factor: 1 # Default replication factor for automatically created topics
    offsets_topic_replication_factor: 1 # Replication factor for the offsets topic (consider a higher value for HA)
    num_recovery_threads_per_data_dir: 1 # Number of recovery threads per data dir
    num_replica_fetchers: 1 # Number of replica fetcher threads
    replica_fetch_max_bytes: 1048576
    replica_socket_receive_buffer_bytes: 65536
    partitions: 8 # Default partitions per topic; 100 x brokers x replicas is reasonable for a sizable cluster, small clusters can use fewer
    log_retention_hours: 168 # The minimum age of a log file to be eligible for deletion due to age
    log_retention_bytes: -1 # -1 is no size limit only a time limit (log_retention_hours). This limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.
    offset_retention_minutes: 10080 # Offsets older than this retention period will be discarded
    heap_opts: "-Xmx2G -Xms2G"
    opts: "-Djavax.net.debug=all"
    jmx_opts:
    max_incremental_fetch_session_cache_slots: 1000
    controlled_shutdown_enable: true
    group: kafka
    user: kafka
    conf_dir: /opt/kafka/config
    data_dir: /var/lib/kafka
    log_dir: /var/log/kafka
    socket_settings:
      network_threads: 3 # The number of threads handling network requests
      io_threads: 8 # The number of threads doing disk I/O
      send_buffer_bytes: 102400 # The send buffer (SO_SNDBUF) used by the socket server
      receive_buffer_bytes: 102400 # The receive buffer (SO_RCVBUF) used by the socket server      
      request_max_bytes: 104857600 # The maximum size of a request that the socket server will accept (protection against OOM)
  zookeeper_set_acl: false
  zookeeper_hosts: "{{ groups['zookeeper']|join(':2181,') }}:2181"
  jmx_exporter_user: jmx-exporter
  jmx_exporter_group: jmx-exporter
  prometheus_jmx_path: /opt/jmx-exporter/jmx_prometheus_javaagent.jar
  prometheus_jmx_exporter_web_listen_port: 7071
  prometheus_jmx_config: /opt/kafka/config/jmx-kafka.config.yml
  prometheus_config_dir: /etc/prometheus
  prometheus_kafka_jmx_file_sd_labels:
    "job": "jmx-kafka"

4.17 - Kibana

Kibana options

The content of the kibana.yml file is listed for reference only

---
kind: configuration/kibana
title: "Kibana"
name: default
specification:
  kibana_log_dir: /var/log/kibana

4.18 - Kubernetes-ControlPlane

Kubernetes ControlPlane (aka kubernetes-master) options

The content of the kubernetes-master.yml file is listed for reference only

---
kind: configuration/kubernetes-master
title: Kubernetes Control Plane Config
name: default
specification:
  version: 1.20.12
  cni_version: 0.8.7
  cluster_name: "kubernetes-lambdastack"
  allow_pods_on_master: False
  storage:
    name: lambdastack-cluster-volume # name of the Kubernetes resource
    path: / # directory path in mounted storage
    enable: True
    capacity: 50 # GB
    data: {} #AUTOMATED - data specific to cloud provider
  advanced: # modify only if you are sure what each value means
    api_server_args: # https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
      profiling: false
      enable-admission-plugins: "AlwaysPullImages,DenyEscalatingExec,NamespaceLifecycle,ServiceAccount,NodeRestriction"
      audit-log-path: "/var/log/apiserver/audit.log"
      audit-log-maxbackup: 10
      audit-log-maxsize: 200
      secure-port: 6443
    controller_manager_args: # https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
      profiling: false
      terminated-pod-gc-threshold: 200
    scheduler_args:  # https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/
      profiling: false
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      plugin: flannel # valid options: calico, flannel, canal (calico is not supported on Azure - use canal there)
    imageRepository: k8s.gcr.io
    certificates:
      expiration_days: 365 # values greater than 24855 are not recommended
      renew: false
    etcd_args:
      encrypted: true
    kubeconfig:
      local:
        api_server:
          # change if you want a custom hostname (you can use jinja2/ansible expressions here, for example "{{ groups.kubernetes_master[0] }}")
          hostname: 127.0.0.1
          # change if you want a custom port
          port: 6443
#  image_registry_secrets:
#  - email: email@domain.com
#    name: secretname
#    namespace: default
#    password: docker-registry-pwd
#    server_url: docker-registry-url
#    username: docker-registry-user
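
As the comment on kubeconfig.local.api_server.hostname notes, jinja2/ansible expressions are accepted there; a sketch that points the generated kubeconfig at the first control-plane host instead of the loopback address:

  advanced:
    kubeconfig:
      local:
        api_server:
          hostname: "{{ groups.kubernetes_master[0] }}"
          port: 6443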

4.19 - Kubernetes Nodes

Kubernetes Nodes options

The content of the kubernetes-nodes.yml file is listed for reference only

---
kind: configuration/kubernetes-node
title: Kubernetes Node Config
name: default
specification:
  version: 1.20.12
  cni_version: 0.8.7
  node_labels: "node-type=lambdastack"

4.20 - Logging

Logging options

The content of the logging.yml file is listed for reference only

---
kind: configuration/logging
title: Logging Config
name: default
specification:
  cluster_name: LambdaStackElastic
  admin_password: PASSWORD_TO_CHANGE
  kibanaserver_password: PASSWORD_TO_CHANGE
  kibanaserver_user_active: true
  logstash_password: PASSWORD_TO_CHANGE
  logstash_user_active: true
  demo_users_to_remove:
  - kibanaro
  - readall
  - snapshotrestore
  paths:
    data: /var/lib/elasticsearch
    repo: /var/lib/elasticsearch-snapshots
    logs: /var/log/elasticsearch
  jvm_options:
    Xmx: 1g # see https://www.elastic.co/guide/en/elasticsearch/reference/7.9/heap-size.html
  opendistro_security:
    ssl:
      transport:
        enforce_hostname_verification: true

4.21 - Logstash

Logstash options

The content of the logstash.yml file is listed for reference only

---
kind: configuration/logstash
title: "Logstash"
name: default
specification: {}

4.22 - Node-Exporter

Node-exporter options

The content of the node-exporter.yml file is listed for reference only

---
kind: configuration/node-exporter
title: "Node exporter"
name: default
specification:
  disable_helm_chart: false
  helm_chart_values:
    service:
      port: 9100
      targetPort: 9100 
  files:
    node_exporter_helm_chart_file_name: node-exporter-1.1.2.tgz
  enabled_collectors:
    - conntrack
    - diskstats
    - entropy
    - filefd
    - filesystem
    - loadavg
    - mdadm
    - meminfo
    - netdev
    - netstat
    - sockstat
    - stat
    - textfile
    - time
    - uname
    - vmstat
    - systemd

  config_flags:
    - "--web.listen-address=:9100"
    - '--log.level=info'
    - '--collector.diskstats.ignored-devices=^(ram|loop|fd)\d+$'
    - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)'
    - '--collector.netdev.device-blacklist="^$"'
    - '--collector.textfile.directory="/var/lib/prometheus/node-exporter"'
    - '--collector.systemd.unit-whitelist="(kafka\.service|zookeeper\.service)"'

  web_listen_port: "9100"
  web_listen_address: ""

  config_for_prometheus: # configuration that will be written to Prometheus to allow scraping metrics from this exporter
    exporter_listen_port: "9100"
    prometheus_config_dir: /etc/prometheus
    file_sd_labels:
      - label: "job"
        value: "node"

4.23 - Opendistro-for-ElasticSearch

Opendistro options

The content of the opendistro-for-elasticsearch.yml file is listed for reference only

---
kind: configuration/opendistro-for-elasticsearch
title: Open Distro for Elasticsearch Config
name: default
specification:
  cluster_name: LambdaStackElastic
  clustered: true
  admin_password: PASSWORD_TO_CHANGE
  kibanaserver_password: PASSWORD_TO_CHANGE
  kibanaserver_user_active: false
  logstash_password: PASSWORD_TO_CHANGE
  logstash_user_active: false
  demo_users_to_remove:
  - kibanaro
  - readall
  - snapshotrestore
  - logstash
  - kibanaserver
  paths:
    data: /var/lib/elasticsearch
    repo: /var/lib/elasticsearch-snapshots
    logs: /var/log/elasticsearch
  jvm_options:
    Xmx: 1g # see https://www.elastic.co/guide/en/elasticsearch/reference/7.9/heap-size.html
  opendistro_security:
    ssl:
      transport:
        enforce_hostname_verification: true

4.24 - Postgres-Exporter

Postgres-Exporter options

The content of the postgres-exporter.yml file is listed for reference only

---
kind: configuration/postgres-exporter
title: Postgres exporter
name: default
specification:
  config_flags:
  - --log.level=info
  - --extend.query-path=/opt/postgres_exporter/queries.yaml
  - --auto-discover-databases
  # Please see optional flags: https://github.com/prometheus-community/postgres_exporter/tree/v0.9.0#flags
  config_for_prometheus:
    exporter_listen_port: '9187'
    prometheus_config_dir: /etc/prometheus
    file_sd_labels:
    - label: "job"
      value: "postgres-exporter"

4.25 - PostgreSQL

PostgreSQL options

The content of the postgresql.yml file is listed for reference only

---
kind: configuration/postgresql
title: PostgreSQL
name: default
specification:
  config_file:
    parameter_groups:
      - name: CONNECTIONS AND AUTHENTICATION
        subgroups:
          - name: Connection Settings
            parameters:
              - name: listen_addresses
                value: "'*'"
                comment: listen on all addresses
          - name: Security and Authentication
            parameters:
              - name: ssl
                value: 'off'
                comment: to have the default value also on Ubuntu
      - name: RESOURCE USAGE (except WAL)
        subgroups:
          - name: Kernel Resource Usage
            parameters:
              - name: shared_preload_libraries
                value: AUTOCONFIGURED
                comment: set by automation
      - name: ERROR REPORTING AND LOGGING
        subgroups:
          - name: Where to Log
            parameters:
              - name: log_directory
                value: "'/var/log/postgresql'"
                comment: to have standard location for Filebeat and logrotate
              - name: log_filename
                value: "'postgresql.log'"
                comment: to use logrotate with common configuration
      - name: WRITE AHEAD LOG
        subgroups:
          - name: Settings
            parameters:
              - name: wal_level
                value: replica
                when: replication
          # Changes to archive_mode require a full PostgreSQL server restart,
          # while archive_command changes can be applied via a normal configuration reload.
          # See https://repmgr.org/docs/repmgr.html#CONFIGURATION-POSTGRESQL
          - name: Archiving
            parameters:
              - name: archive_mode
                value: 'on'
                when: replication
              - name: archive_command
                value: "'/bin/true'"
                when: replication
      - name: REPLICATION
        subgroups:
          - name: Sending Server(s)
            parameters:
              - name: max_wal_senders
                value: 10
                comment: maximum number of simultaneously running WAL sender processes
                when: replication
              - name: wal_keep_size
                value: 500
                comment: the size of WAL files held for standby servers (MB)
                when: replication
          - name: Standby Servers # ignored on master server
            parameters:
              - name: hot_standby
                value: 'on'
                comment: must be 'on' for repmgr needs, ignored on primary but recommended in case primary becomes standby
                when: replication
  extensions:
    pgaudit:
      enabled: false
      shared_preload_libraries:
        - pgaudit
      config_file_parameters:
        log_connections: 'off'
        log_disconnections: 'off'
        log_statement: 'none'
        log_line_prefix: "'%m [%p] %q%u@%d,host=%h '"
        # pgaudit specific, see https://github.com/pgaudit/pgaudit/tree/REL_13_STABLE#settings
        pgaudit.log: "'write, function, role, ddl, misc_set'"
        pgaudit.log_catalog: 'off # to reduce overhead of logging' # default is 'on'
        # the following first 2 parameters are set to values that make it easier to access audit log per table
        # change their values to the opposite if you need to reduce overhead of logging
        pgaudit.log_relation: 'on # separate log entry for each relation' # default is 'off'
        pgaudit.log_statement_once: 'off' # same as default
        pgaudit.log_parameter: 'on' # default is 'off'
    pgbouncer:
      enabled: false
    replication:
      enabled: false
      replication_user_name: ls_repmgr
      replication_user_password: PASSWORD_TO_CHANGE
      privileged_user_name: ls_repmgr_admin
      privileged_user_password: PASSWORD_TO_CHANGE
      repmgr_database: ls_repmgr
      shared_preload_libraries:
        - repmgr
  logrotate:
    pgbouncer:
      period: weekly
      rotations: 5
    # Configuration partly based on /etc/logrotate.d/postgresql-common provided by 'postgresql-common' package from Ubuntu repo.
      # PostgreSQL from Ubuntu repo:
        # By default 'logging_collector' is disabled, so 'log_directory' parameter is ignored.
        # Default log path is /var/log/postgresql/postgresql-$version-$cluster.log.
      # PostgreSQL from SCL repo (RHEL):
        # By default 'logging_collector' is enabled and there are up to 7 daily-named files (e.g. postgresql-Wed.log),
        # so rotation is provided by the built-in logging facility overwriting them.
    # To have similar configuration for both distros (with logrotate), 'log_filename' parameter is modified.
    postgresql: |-
      /var/log/postgresql/postgresql*.log {
          maxsize 10M
          daily
          rotate 6
          copytruncate
          # delaycompress is for Filebeat
          delaycompress
          compress
          notifempty
          missingok
          su root root
          nomail
          # to have multiple unique filenames per day when dateext option is set
          dateformat -%Y%m%dH%H
      }      
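
To enable repmgr-managed streaming replication, flip the extension on and supply real secrets; a minimal sketch (the passwords are placeholders you must replace):

  extensions:
    replication:
      enabled: true
      replication_user_password: YOUR-REPLICATION-PASSWORD
      privileged_user_password: YOUR-ADMIN-PASSWORD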

4.26 - Prometheus

Prometheus options

The content of the prometheus.yml file is listed for reference only

---
kind: configuration/prometheus
title: "Prometheus"
name: default
specification:
  config_directory: "/etc/prometheus"
  storage:
    data_directory: "/var/lib/prometheus"
  config_flags:                                                            # Parameters that Prometheus service will be started with.
    - "--config.file=/etc/prometheus/prometheus.yml"                       # Directory should be the same as "config_directory"
    - "--storage.tsdb.path=/var/lib/prometheus"                            # Directory should be the same as "storage.data_directory"
    - "--storage.tsdb.retention.time=180d"                                 # Data retention time for metrics
    - "--storage.tsdb.retention.size=20GB"                                 # Data retention size for metrics
    - "--web.console.libraries=/etc/prometheus/console_libraries"          # Directory should be the same as "config_directory"
    - "--web.console.templates=/etc/prometheus/consoles"                   # Directory should be the same as "config_directory"
    - "--web.listen-address=0.0.0.0:9090"                                  # Address that Prometheus console will be available
    - "--web.enable-admin-api"                                             # Enables administrative HTTP API
  metrics_path: "/metrics"
  scrape_interval : "15s"
  scrape_timeout: "10s"
  evaluation_interval: "10s"
  remote_write: []
  remote_read: []
  alertmanager:
    enable: false # To make Alertmanager work, you have to enable it and define receivers and routes
    alert_rules:
      common: true
      container: false
      kafka: false
      node: false
      postgresql: false
      prometheus: false
    # config: # Configuration for Alertmanager, it will be passed to Alertmanager service.
    #   # Full list of configuration fields https://prometheus.io/docs/alerting/configuration/
    #   global:
    #     resolve_timeout: 5m
    #     smtp_from: "alert@test.com"
    #     smtp_smarthost: "smtp-url:smtp-port"
    #     smtp_auth_username: "your-smtp-user@domain.com"
    #     smtp_auth_password: "your-smtp-password"
    #     smtp_require_tls: True
    #   route:
    #     group_by: ['alertname']
    #     group_wait: 10s
    #     group_interval: 10s
    #     repeat_interval: 1h
    #     receiver: 'email' # Default receiver, change if another is set to default
    #     routes: # Example routes, names need to match 'name' field of receiver
    #       - match_re:
    #           severity: critical
    #         receiver: opsgenie
    #         continue: true
    #       - match_re:
    #           severity: critical
    #         receiver: pagerduty
    #         continue: true
    #       - match_re:
    #           severity: info|warning|critical
    #         receiver: slack
    #         continue: true
    #       - match_re:
    #           severity: warning|critical
    #         receiver: email
    #   receivers: # example configuration for receivers # api_url: https://prometheus.io/docs/alerting/configuration/#receiver
    #     - name: 'email'
    #       email_configs:
    #         - to: "test@domain.com"
    #     - name: 'slack'
    #       slack_configs:
    #         - api_url: "your-slack-integration-url"
    #     - name: 'pagerduty'
    #       pagerduty_configs:
    #         - service_key: "your-pagerduty-service-key"
    #     - name: 'opsgenie'
    #       opsgenie_config:
    #         api_key: <secret> | default = global.opsgenie_api_key
    #         api_url: <string> | default = global.opsgenie_api_url
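
Assembling the commented fragments above into a working minimum, an email-only Alertmanager setup might look like this (the SMTP host, credentials, and recipient are placeholders):

  alertmanager:
    enable: true
    alert_rules:
      common: true
      node: true
    config:
      global:
        smtp_from: "alert@test.com"
        smtp_smarthost: "smtp-url:smtp-port"
      route:
        receiver: 'email'
      receivers:
        - name: 'email'
          email_configs:
            - to: "test@domain.com"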

4.27 - RabbitMQ

RabbitMQ options

The content of the rabbitmq.yml file is listed for reference only

---
kind: configuration/rabbitmq
title: "RabbitMQ"
name: default
specification:
  rabbitmq_user: rabbitmq
  rabbitmq_group: rabbitmq
  stop_service: false

  logrotate_period: weekly
  logrotate_number: 10
  ulimit_open_files: 65535

  amqp_port: 5672
  rabbitmq_use_longname: AUTOCONFIGURED # true/false/AUTOCONFIGURED
  rabbitmq_policies: []
  rabbitmq_plugins: []
  custom_configurations: []
  cluster:
    is_clustered: false
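
The empty lists above are the usual customization points; a sketch that enables the standard RabbitMQ management plugin and clustering (the plugin name is upstream RabbitMQ's, not specific to this file):

  rabbitmq_plugins:
    - rabbitmq_management
  cluster:
    is_clustered: true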

4.28 - Recovery

Recovery options

The content of the recovery.yml file is listed for reference only

---
kind: configuration/recovery
title: Recovery Config
name: default
specification:
  components:
    load_balancer:
      enabled: false
      snapshot_name: latest
    logging:
      enabled: false
      snapshot_name: latest
    monitoring:
      enabled: false
      snapshot_name: latest
    postgresql:
      enabled: false
      snapshot_name: latest
    rabbitmq:
      enabled: false
      snapshot_name: latest
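
To restore a single component, enable only its entry; snapshot_name: latest selects the newest snapshot, or a specific snapshot can be pinned by name (the timestamped label below is hypothetical):

    postgresql:
      enabled: true
      snapshot_name: 20211001-020000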

4.29 - Repository

Repository options

The content of the repository.yml file is listed for reference only

---
kind: configuration/repository
title: "LambdaStack requirements repository"
name: default
specification:
  description: "Local repository of binaries required to install LambdaStack"
  download_done_flag_expire_minutes: 120
  apache_lsrepo_path: "/var/www/html/lsrepo"
  teardown:
    disable_http_server: true # whether to stop and disable Apache HTTP Server service
    remove:
      files: false
      helm_charts: false
      images: false
      packages: false

4.30 - Shared-Config

Shared-Config options

The content of the shared-config.yml file is listed for reference only

---
kind: configuration/shared-config
title: "Shared configuration that will be visible to all roles"
name: default
specification:
  custom_repository_url: '' # leave it empty to use local repository or provide url to your repo
  custom_image_registry_address: '' # leave it empty to use local registry or provide address of your registry (hostname:port). This registry will be used to populate K8s control plane and should contain all required images.
  download_directory: /tmp # directory where files and images will be stored just before installing/loading
  vault_location: '' # if empty "BUILD DIRECTORY/vault" will be used
  vault_tmp_file_location: SET_BY_AUTOMATION
  use_ha_control_plane: False
  promote_to_ha: False
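
For an air-gapped environment that already hosts its own mirrors, the two custom fields would be filled in; a sketch with hypothetical addresses:

  custom_repository_url: 'http://10.0.0.5/lsrepo'
  custom_image_registry_address: '10.0.0.5:5000'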

4.31 - Vault

Hashicorp Vault options

The content of the vault.yml file is listed for reference only

---
kind: configuration/vault
title: Vault Config
name: default
specification:
  vault_enabled: false
  vault_system_user: vault
  vault_system_group: vault
  enable_vault_audit_logs: false
  enable_vault_ui: false
  vault_script_autounseal: true
  vault_script_autoconfiguration: true
  tls_disable: false
  kubernetes_integration: true
  kubernetes_configuration: true
  kubernetes_namespace: default
  enable_vault_kubernetes_authentication: true
  app_secret_path: devwebapp
  revoke_root_token: false
  secret_mount_path: secret
  vault_token_cleanup: true
  vault_install_dir: /opt/vault
  vault_log_level: info
  override_existing_vault_users: false
  certificate_name: fullchain.pem
  private_key_name: privkey.pem
  selfsigned_certificate:
    country: US
    state: state
    city: city
    company: company
    common_name: "*"
  vault_tls_valid_days: 365
  vault_users:
    - name: admin
      policy: admin
    - name: provisioner
      policy: provisioner
  files:
    vault_helm_chart_file_name: v0.11.0.tar.gz
  vault_helm_chart_values:
    injector:
      image:
        repository: "{{ image_registry_address }}/hashicorp/vault-k8s"
      agentImage:
        repository: "{{ image_registry_address }}/vault"
    server:
      image:
        repository: "{{ image_registry_address }}/vault"
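
Vault is disabled by default; the smallest useful override turns it on together with the UI and audit logs, using flags already defined above:

  vault_enabled: true
  enable_vault_ui: true
  enable_vault_audit_logs: true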

4.32 - Zookeeper

Zookeeper options

The content of the zookeeper.yml file is listed for reference only

---
kind: configuration/zookeeper
title: "Zookeeper"
name: default
specification:
  static_config_file:
    # This block is injected into $ZOOCFGDIR/zoo.cfg
    configurable_block: |
      # Limits the number of concurrent connections (at the socket level) that a single client, identified by IP address,
      # may make to a single member of the ZooKeeper ensemble. This is used to prevent certain classes of DoS attacks,
      # including file descriptor exhaustion. The default is 60. Setting this to 0 removes the limit.
      maxClientCnxns=0

      # --- AdminServer configuration ---

      # By default the AdminServer is enabled. Disabling it will cause automated test failures.
      admin.enableServer=true

      # The address the embedded Jetty server listens on. Defaults to 0.0.0.0.
      admin.serverAddress=127.0.0.1

      # The port the embedded Jetty server listens on. Defaults to 8080.
      admin.serverPort=8008      
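
Because configurable_block is injected verbatim into zoo.cfg, any standard ZooKeeper setting can be appended to it; for example (upstream ZooKeeper autopurge options, not defaults of this file):

      autopurge.snapRetainCount=3
      autopurge.purgeInterval=1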

5 - GCP

Minimal and Full configuration options

WIP - Coming Soon!