Customize Pigsty with Configuration

Express your infra and clusters with declarative config files

Besides using the configuration wizard to auto-generate configs, you can write Pigsty config files from scratch. This tutorial guides you through building a complex inventory step by step.

If you define everything in the inventory upfront, a single deploy.yml playbook run completes all deployment—but it hides the details.

This doc breaks down all modules and playbooks, showing how to incrementally build from a simple config to a complete deployment.


Minimal Configuration

The simplest valid config only defines the admin_ip variable—the IP address of the node where Pigsty is installed (admin node):

all: { vars: { admin_ip: 10.10.10.10 } }
# Set region: china to use mirrors
all: { vars: { admin_ip: 10.10.10.10, region: china } }

This config deploys nothing, but running ./deploy.yml generates a self-signed CA in files/pki/ca for issuing certificates.
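For example, with this minimal config you could run the playbook and check that the CA was created. The file names below follow Pigsty's default PKI layout and are an assumption to verify:

./deploy.yml          # Deploys nothing with this config, but generates the self-signed CA
ls files/pki/ca       # Expect ca.key and ca.crt (assumed default file names)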

For convenience, you can also set region to specify which region’s software mirrors to use (default, china, europe).


Add Nodes

Pigsty’s NODE module manages cluster nodes: any IP address that appears in the inventory is brought under Pigsty management, with the NODE module installed on it.

all:  # Remember to replace 10.10.10.10 with your actual IP
  children: { nodes: { hosts: { 10.10.10.10: {} } } }
  vars:
    admin_ip: 10.10.10.10                   # Current node IP
    region: default                         # Default repos (set to china to use mirrors)
    node_repo_modules: node,pgsql,infra     # Add node, pgsql, infra repos

We added two global parameters: node_repo_modules specifies repos to add; region specifies which region’s mirrors to use.

These parameters enable the node to use correct repositories and install required packages. The NODE module offers many customization options: node names, DNS, repos, packages, NTP, kernel params, tuning templates, monitoring, log collection, etc. Even without changes, the defaults are sufficient.
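For illustration, here is a hedged sketch of a node group using a few of these knobs; parameter names such as nodename, node_tune, and node_timezone follow the NODE module docs, and the values are placeholders:

nodes:
  hosts:
    10.10.10.10: { nodename: node-1 }   # Override the node's name (placeholder value)
  vars:
    node_tune: oltp                     # Apply the OLTP tuning template
    node_timezone: UTC                  # Set the node timezone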

Run deploy.yml (or, more precisely, node.yml) to bring the defined node under Pigsty management.
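For example, using Ansible's standard -l limit flag to target just the nodes group:

./node.yml -l nodes   # Apply the NODE module to hosts in the nodes group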

| ID | NODE        | INFRA | ETCD | PGSQL | Description |
|----|-------------|-------|------|-------|-------------|
| 1  | 10.10.10.10 | -     | -    | -     | Add node    |

Add Infrastructure

A full-featured RDS cloud database service needs infrastructure support: monitoring (metrics/log collection, alerting, visualization), NTP, DNS, and other foundational services.

Define a special group infra to deploy the INFRA module:

all:  # Simply changed group name from nodes -> infra and added infra_seq
  children: { infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } } }
  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra

We also assigned an identity parameter, infra_seq, to distinguish nodes in multi-node HA INFRA deployments.

Run infra.yml to install the INFRA and NODE modules on 10.10.10.10:

./infra.yml   # Install INFRA module on infra group (includes NODE module)

The NODE module is implicitly included for any IP that appears in the inventory, and the playbook is idempotent: re-running it has no side effects.

After completion, you’ll have complete observability infrastructure and node monitoring, but PostgreSQL database service is not yet deployed.

If your goal is just to set up this monitoring system (Grafana + VictoriaMetrics), you’re done! The infra template is designed for exactly this. Everything in Pigsty is modular: you can deploy only the monitoring infrastructure without databases, or the other way around, run HA PostgreSQL clusters without infra (Slim Install).
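As a quick sanity check, you could probe Grafana's health endpoint on the infra node; port 3000 is Grafana's conventional port and assumed to be Pigsty's default here:

curl -s http://10.10.10.10:3000/api/health   # Grafana's built-in health check endpoint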

| ID | NODE        | INFRA   | ETCD | PGSQL | Description        |
|----|-------------|---------|------|-------|--------------------|
| 1  | 10.10.10.10 | infra-1 | -    | -     | Add infrastructure |

Deploy Database Cluster

To provide PostgreSQL service, install the PGSQL module and its dependency ETCD. This takes just two lines of config:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } } # Add pg cluster
  vars: { admin_ip: 10.10.10.10, region: default, node_repo_modules: node,pgsql,infra }

We added two new groups: etcd and pg-meta, defining a single-node etcd cluster and a single-node PostgreSQL cluster.

Use ./deploy.yml to redeploy everything, or incrementally deploy:

./etcd.yml  -l etcd      # Install ETCD module on etcd group
./pgsql.yml -l pg-meta   # Install PGSQL module on pg-meta group

PGSQL depends on ETCD for HA consensus, so install ETCD first. After completion, you have a working PostgreSQL service!

| ID | NODE        | INFRA   | ETCD   | PGSQL     | Description                     |
|----|-------------|---------|--------|-----------|---------------------------------|
| 1  | 10.10.10.10 | infra-1 | etcd-1 | pg-meta-1 | Add etcd and PostgreSQL cluster |

We used node.yml, infra.yml, etcd.yml, and pgsql.yml to deploy all four core modules on a single machine.


Define Databases and Users

In Pigsty, you can customize PostgreSQL cluster internals like databases and users through the inventory:

all:
  children:
    # Other groups and variables hidden for brevity
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:       # Define database users
          - { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user  }
        pg_databases:   # Define business databases
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [vector] }

• pg_users: defines a new user dbuser_meta with password DBUser.Meta
• pg_databases: defines a new database meta with the Pigsty CMDB baseline schema (optional) and the vector extension

Pigsty offers rich customization parameters covering all aspects of databases and users. If you define these parameters upfront, they’re automatically created during ./pgsql.yml execution. For existing clusters, you can incrementally create or modify users and databases:

bin/pgsql-user pg-meta dbuser_meta      # Ensure user dbuser_meta exists in pg-meta
bin/pgsql-db   pg-meta meta             # Ensure database meta exists in pg-meta
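To verify, you could connect with the credentials defined above. This sketch assumes a direct connection on PostgreSQL's default port 5432 (adjust if you changed the port):

psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta -c '\dx'   # List installed extensions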

Configure PG Version and Extensions

You can install different major versions of PostgreSQL and choose from up to 440 extensions. Let’s remove the current default PG 18 cluster and install PG 17 instead:

./pgsql-rm.yml -l pg-meta   # Remove old pg-meta cluster (it's PG 18)

We can customize parameters to install and enable common extensions by default: timescaledb, postgis, and pgvector:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_version: 17   # Specify PG version as 17
        pg_extensions: [ timescaledb, postgis, pgvector ]      # Install these extensions
        pg_libs: 'timescaledb,pg_stat_statements,auto_explain'  # Preload these extension libraries
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [vector, postgis, timescaledb ] } ]
        pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user } ]

  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra
Then rerun the PGSQL playbook to recreate the cluster with the new version and extensions:

./pgsql.yml -l pg-meta   # Install PG 17 and extensions, recreate pg-meta cluster

Add More Nodes

Add more nodes to bring them under Pigsty management: deploy monitoring, configure repos, install packages, and so on. Define the new node IPs in the inventory first (see the pg-test cluster in the next section), then add them with the helper script:

# Add entire cluster at once, or add nodes individually
bin/node-add pg-test

bin/node-add 10.10.10.11
bin/node-add 10.10.10.12
bin/node-add 10.10.10.13

Deploy HA PostgreSQL Cluster

Now deploy a new database cluster pg-test on the three newly added nodes, using a three-node HA architecture:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica  }
        10.10.10.13: { pg_seq: 3, pg_role: replica  }
      vars: { pg_cluster: pg-test }
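Then create the cluster with the PGSQL playbook, limited to the new group, just as with pg-meta:

./pgsql.yml -l pg-test   # Install PGSQL module on pg-test group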

Deploy Redis Cluster

Pigsty provides optional Redis support, typically used as a caching layer in front of PostgreSQL.
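Redis clusters are defined in the inventory like any other module; below is a minimal single-node sketch, where redis_cluster, redis_node, and redis_instances are the Redis identity parameters and the password is a hypothetical placeholder:

redis-meta:
  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: {} } } }
  vars: { redis_cluster: redis-meta ,redis_password: 'redis.meta' }

Once defined, create the clusters with the helper scripts: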

bin/redis-add redis-ms
bin/redis-add redis-meta
bin/redis-add redis-test

Redis HA requires cluster mode or sentinel mode. See Redis Configuration.


Deploy MinIO Cluster

Pigsty offers optional support for MinIO, an open-source, S3-compatible object storage service that can serve as a backup repository for PostgreSQL.
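The playbook below assumes a minio group is already defined in the inventory; a minimal single-node sketch using the MinIO identity parameters minio_seq and minio_cluster:

minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }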

./minio.yml -l minio

Serious production MinIO deployments typically require at least 4 nodes with 4 disks each (4N/16D).
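For reference, such a multi-node layout could look like the sketch below; the node IPs are hypothetical, and the brace-expansion form of minio_data follows MinIO's multi-drive convention (verify against the MinIO module docs):

minio:
  hosts:
    10.10.10.11: { minio_seq: 1 }
    10.10.10.12: { minio_seq: 2 }
    10.10.10.13: { minio_seq: 3 }
    10.10.10.14: { minio_seq: 4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'   # Four drives per node, 16 drives total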


Deploy Docker Module

If you want to run containerized tools for managing PostgreSQL, or software built on top of PostgreSQL, install the DOCKER module:

./docker.yml -l infra

Use pre-made application templates to launch common software with one click, such as pgAdmin, the GUI tool for PostgreSQL management:

./app.yml    -l infra -e app=pgadmin

You can even self-host enterprise-grade Supabase with Pigsty, using external HA PostgreSQL clusters as the foundation and running stateless components in containers.

