
Get Started

Deploy Pigsty single-node version on your laptop/cloud server, access DB and Web UI

Pigsty uses a scalable architecture design, suitable for both large-scale production environments and single-node development/demo environments. This guide focuses on the latter.

If you intend to learn about Pigsty, you can start with the Quick Start single-node deployment. A Linux virtual machine with 1C/2G is sufficient to run Pigsty.

You can use a Linux MiniPC, free/discounted virtual machines provided by cloud providers, Windows WSL, or create a virtual machine on your own laptop for Pigsty deployment. Pigsty provides out-of-the-box Vagrant templates and Terraform templates to help you provision Linux VMs with one click locally or in the cloud.

(Figure: pigsty-arch — Pigsty architecture overview)

The single-node version of Pigsty includes all core features: 440+ PG extensions, self-contained Grafana/Victoria monitoring, IaC provisioning capabilities, and local PITR point-in-time recovery. If you have external object storage (for PostgreSQL PITR backups), even a single-node environment can provide a reasonable degree of data durability for scenarios like demos, personal websites, and small services. However, a single node cannot achieve high availability—automatic failover requires at least 3 nodes.

If you want to install Pigsty in an environment without internet connection, please refer to the Offline Install mode. If you only need the PostgreSQL database itself, please refer to the Slim Install mode. If you are ready to start serious multi-node production deployment, please refer to the Deployment Guide.


Quick Start

Prepare a node with compatible Linux system, and execute as an admin user with passwordless ssh and sudo privileges:

curl -fsSL https://repo.pigsty.cc/get | bash  # Install Pigsty and dependencies
cd ~/pigsty; ./configure -g                   # Generate config (use default single-node config template, -g parameter generates random passwords)
./deploy.yml                                  # Execute deployment playbook to complete deployment

Yes, it’s that simple. You can use pre-configured templates to bring up Pigsty with one click without understanding any details.

Next, you can explore the Graphical User Interface, access PostgreSQL database services; or perform configuration customization and execute playbooks to deploy more clusters.

1 - Single-Node Installation

Get started with Pigsty—complete single-node install on a fresh Linux host!

This is the Pigsty single-node install guide. For multi-node HA prod deployment, refer to the Deployment docs.

Pigsty single-node installation consists of three steps: Install, Configure, and Deploy.


Summary

Prepare a node with compatible OS, and run as an admin user with nopass ssh and sudo:

curl -fsSL https://repo.pigsty.io/get | bash   # install Pigsty (default mirror)
curl -fsSL https://repo.pigsty.cc/get | bash   # or use the China CDN mirror

This command runs the install script, downloads and extracts Pigsty source to your home directory and installs dependencies. Then complete Configure and Deploy:

cd ~/pigsty      # Enter Pigsty directory
./configure -g   # Generate config file (optional, skip if you know how to configure)
./deploy.yml     # Execute deployment playbook based on generated config

After installation, access the Web UI via IP/domain + port 80/443 through Nginx, and access the default PostgreSQL service via port 5432.
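
For example, you can quickly verify both entry points from the admin node (a sketch, assuming the example IP 10.10.10.10 and the default admin credentials; substitute your own IP and generated password):

curl -sI http://10.10.10.10                                             # Nginx should answer on port 80
psql postgres://dbuser_dba:[email protected]:5432/meta -c 'SELECT 1'  # PostgreSQL on port 5432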

The complete process takes 3–10 minutes depending on server specs/network. Offline installation speeds this up significantly; for monitoring-free setups, use Slim Install for even faster deployment.

Video Example: Online Single-Node Installation (Debian 13, x86_64)


Prepare

Installing Pigsty involves some preparation work. Here’s a checklist.

For single-node installations, many constraints can be relaxed—typically you only need to know your IP address. If you don’t have a static IP, use 127.0.0.1.

| Item | Requirement | Item | Requirement |
|------|-------------|------|-------------|
| Node | 1-node, at least 1C2G, no upper limit | Disk | /data mount point, xfs recommended |
| OS | Linux x86_64 / aarch64, EL/Debian/Ubuntu | Network | Static IPv4; single-node without fixed IP can use 127.0.0.1 |
| SSH | nopass SSH login via public key | SUDO | sudo privilege, preferably with nopass option |

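Before installing, you can verify the ssh and sudo prerequisites on the node itself (a sketch for the single-node case, run as the admin user):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair if you don't have one
ssh-copy-id 127.0.0.1                       # grant yourself nopass ssh login
ssh 127.0.0.1 'sudo -n true' && echo ok     # verify nopass ssh and nopass sudo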


Install

Use the following commands to auto-install Pigsty source to ~/pigsty (recommended). Deployment dependencies (Ansible) are installed automatically.

curl -fsSL https://repo.pigsty.io/get | bash            # Install latest stable version
curl -fsSL https://repo.pigsty.io/get | bash -s v4.0.0  # Install specific version
curl -fsSL https://repo.pigsty.cc/get | bash            # Latest stable, China CDN mirror
curl -fsSL https://repo.pigsty.cc/get | bash -s v4.0.0  # Specific version, China CDN mirror

If you prefer not to run a remote script, you can manually download or clone the source. When using git, always checkout a specific version before use.

git clone https://github.com/pgsty/pigsty; cd pigsty;
git checkout v4.0.0-b4;  # Always checkout a specific version when using git

For manual download/clone installations, run the bootstrap script to install Ansible and other dependencies. You can also install them yourself.

./bootstrap           # Install ansible for subsequent deployment

Configure

In Pigsty, deployment blueprints are defined by the inventory, the pigsty.yml configuration file. You can customize through declarative configuration.

Pigsty provides the configure script as an optional configuration wizard, which generates an inventory with good defaults based on your environment and input:

./configure -g                # Use config wizard to generate config with random passwords

The generated config file is at ~/pigsty/pigsty.yml by default. Review and customize as needed before installation.

Many configuration templates are available for reference. You can skip the wizard and directly edit pigsty.yml:

./configure                  # Default template, install PG 18 with essential extensions
./configure -v 17            # Use PG 17 instead of default PG 18
./configure -c rich          # Create local repo, download all extensions, install major ones
./configure -c slim          # Minimal install template, use with ./slim.yml playbook
./configure -c app/supa      # Use app/supa self-hosted Supabase template
./configure -c ivory         # Use IvorySQL kernel instead of native PG
./configure -i 10.11.12.13   # Explicitly specify primary IP address
./configure -r china         # Use China mirrors instead of default repos
./configure -c ha/full -s    # Use 4-node sandbox template, skip IP replacement/detection
Example configure output
$ ./configure

configure pigsty v4.0.0 begin
[ OK ] region  = default
[ OK ] kernel  = Linux
[ OK ] machine = x86_64
[ OK ] package = rpm,dnf
[ OK ] vendor  = rocky (Rocky Linux)
[ OK ] version = 9 (9.6)
[ OK ] sudo = vagrant ok
[ OK ] ssh = [email protected] ok
[WARN] Multiple IP address candidates found:
    (1) 192.168.121.24	inet 192.168.121.24/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
    (2) 10.10.10.12	    inet 10.10.10.12/24 brd 10.10.10.255 scope global noprefixroute eth1
[ IN ] INPUT primary_ip address (of current meta node, e.g 10.10.10.10):
=> 10.10.10.12    # <------- INPUT YOUR PRIMARY IPV4 ADDRESS HERE!
[ OK ] primary_ip = 10.10.10.12 (from input)
[ OK ] admin = [email protected] ok
[ OK ] mode = meta (el9)
[ OK ] locale  = C.UTF-8
[ OK ] configure pigsty done
proceed with ./deploy.yml

Common configure arguments:

| Argument | Description |
|----------|-------------|
| -i\|--ip | Primary internal IP of current host, replaces placeholder 10.10.10.10 |
| -c\|--conf | Config template name relative to conf/, without .yml suffix |
| -v\|--version | PostgreSQL major version: 13, 14, 15, 16, 17, 18 |
| -r\|--region | Upstream repo region for faster downloads: (default\|china\|europe) |
| -n\|--non-interactive | Use command-line args for primary IP, skip interactive wizard |
| -x\|--proxy | Use current env vars to configure proxy_env |

If your machine has multiple IPs bound, use -i|--ip <ipaddr> to explicitly specify the primary IP, or provide it in the interactive prompt. The script replaces the placeholder 10.10.10.10 with your node’s primary IPv4 address. Choose a static IP; do not use public IPs.


Deploy

Pigsty’s deploy.yml playbook applies the blueprint from Configure to target nodes.

./deploy.yml     # Deploy all defined modules on current node at once
Example deployment output
......

TASK [pgsql : pgsql init done] *************************************************
ok: [10.10.10.11] => {
    "msg": "postgres://10.10.10.11/postgres | meta  | dbuser_meta dbuser_view "
}
......

TASK [pg_monitor : load grafana datasource meta] *******************************
changed: [10.10.10.11]

PLAY RECAP *********************************************************************
10.10.10.11                : ok=302  changed=232  unreachable=0    failed=0    skipped=65   rescued=0    ignored=1
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

When you see pgsql init done, PLAY RECAP and similar output at the end, installation is complete!



Interface

After single-node installation, you typically have four modules installed on the current node: PGSQL, INFRA, NODE, and ETCD.

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | infra-1 | etcd-1 |

The INFRA module provides a graphical management interface, accessible via Nginx on ports 80/443.

The PGSQL module provides a PostgreSQL database server, listening on 5432, also accessible via Pgbouncer/HAProxy proxies.


More

Use the current node as a base to deploy and monitor more clusters: add cluster definitions to the inventory and run:

bin/node-add   pg-test      # Add the 3 nodes of cluster pg-test to Pigsty management
bin/pgsql-add  pg-test      # Initialize a 3-node pg-test HA PG cluster
bin/redis-add  redis-ms     # Initialize Redis cluster: redis-ms

Most modules require the NODE module installed first. See available modules for details:

PGSQL, INFRA, NODE, ETCD, MINIO, REDIS, FERRET, DOCKER, …

2 - Web Interface

Explore Pigsty’s Web graphical management interface, Grafana dashboards, and how to access them via domain names and HTTPS.

After single-node installation, you’ll have the INFRA module installed on the current node, which includes an out-of-the-box Nginx web server.

The default server configuration provides a WebUI graphical interface for displaying monitoring dashboards and unified proxy access to other component web interfaces.


Access

You can access this graphical interface by entering the deployment node’s IP address in your browser. By default, Nginx serves on standard ports 80/443.

| Direct IP Access | Domain (HTTP) | Domain (HTTPS) | Demo |
|------------------|---------------|----------------|------|
| http://10.10.10.10 | http://i.pigsty | https://i.pigsty | https://demo.pigsty.io |


Monitoring

To access Pigsty’s monitoring system dashboards (Grafana), visit the /ui endpoint on the server.

| Direct IP Access | Domain (HTTP) | Domain (HTTPS) | Demo |
|------------------|---------------|----------------|------|
| http://10.10.10.10/ui | http://i.pigsty/ui | https://i.pigsty/ui | https://demo.pigsty.io/ui |

If your service is exposed to the Internet or an office network, we recommend accessing via domain names and enabling HTTPS encryption—only minimal configuration is needed.


Endpoints

By default, Nginx exposes the following endpoints via different paths on the default server at ports 80/443:

| Endpoint | Component | Native Port | Description | Public Demo |
|----------|-----------|-------------|-------------|-------------|
| / | Nginx | 80/443 | Homepage, local repo, file service | demo.pigsty.io |
| /ui/ | Grafana | 3000 | Grafana dashboard portal | demo.pigsty.io/ui/ |
| /vmetrics/ | VictoriaMetrics | 8428 | Time series database Web UI | demo.pigsty.io/vmetrics/ |
| /vlogs/ | VictoriaLogs | 9428 | Log database Web UI | demo.pigsty.io/vlogs/ |
| /vtraces/ | VictoriaTraces | 10428 | Distributed tracing Web UI | demo.pigsty.io/vtraces/ |
| /vmalert/ | VMAlert | 8880 | Alert rule management | demo.pigsty.io/vmalert/ |
| /alertmgr/ | AlertManager | 9059 | Alert management Web UI | demo.pigsty.io/alertmgr/ |
| /blackbox/ | Blackbox | 9115 | Blackbox exporter | |
| /haproxy/* | HAProxy | 9101 | Load balancer admin Web UI | |
| /pev | PEV2 | 80 | PostgreSQL execution plan visualizer | demo.pigsty.io/pev |
| /nginx | Nginx | 80 | Nginx status page (for metrics) | |

Domain Access

If you have your own domain name, you can point it to Pigsty server’s IP address to access various services via domain.
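
If you don't own a domain yet, you can still test domain access by adding a static resolution entry on the machine running your browser (a sketch, assuming the default i.pigsty domain and the example IP):

echo '10.10.10.10 i.pigsty' | sudo tee -a /etc/hosts   # local static DNS entry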

If you want to enable HTTPS, you should modify the home server configuration in the infra_portal parameter:

all:    # Option 1: domain only, HTTPS via self-signed local CA
  vars:
    infra_portal:
      home : { domain: i.pigsty } # Replace i.pigsty with your domain
all:    # Option 2: domain plus certbot, for a real certificate
  vars:
    infra_portal:  # domain specifies the domain name; certbot specifies the certificate name
      home : { domain: demo.pigsty.io ,certbot: mycert }

You can run make cert command after deployment to apply for a free Let’s Encrypt certificate for the domain. If you don’t define the certbot field, Pigsty will use the local CA to issue a self-signed HTTPS certificate by default. In this case, you must first trust Pigsty’s self-signed CA to access normally in your browser.
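
To trust the self-signed CA, import its certificate into your client's trust store; a sketch, assuming the CA certificate sits at files/pki/ca/ca.crt under the Pigsty home directory:

sudo cp files/pki/ca/ca.crt /etc/pki/ca-trust/source/anchors/pigsty-ca.crt    # EL
sudo update-ca-trust                                                          # EL
sudo cp files/pki/ca/ca.crt /usr/local/share/ca-certificates/pigsty-ca.crt    # Debian/Ubuntu
sudo update-ca-certificates                                                   # Debian/Ubuntu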

You can also mount local directories and other upstream services to Nginx. For more management details, refer to INFRA Management - Nginx.

3 - Getting Started with PostgreSQL

Get started with PostgreSQL—connect using CLI and graphical clients

PostgreSQL (abbreviated as PG) is the world’s most advanced and popular open-source relational database. Use it to store and retrieve multi-modal data.

This guide is for developers with basic Linux CLI experience but not very familiar with PostgreSQL, helping you quickly get started with PG in Pigsty.

We assume you’re a personal user deploying in the default single-node mode. For prod multi-node HA cluster access, refer to Prod Service Access.


Basics

In the default single-node installation template, you’ll create a PostgreSQL database cluster named pg-meta on the current node, with only one primary instance.

PostgreSQL listens on port 5432, and the cluster has a preset database meta available for use.

After installation, exit the current admin user ssh session and re-login to refresh environment variables. Then simply type p and press Enter to access the database cluster via the psql CLI tool:

vagrant@pg-meta-1:~$ p
psql (18.1 (Ubuntu 18.1-1.pgdg24.04+2))
Type "help" for help.

postgres=#

You can also switch to the postgres OS user and execute psql directly to connect to the default postgres admin database.


Connecting to Database

To access a PostgreSQL database, use a CLI tool or graphical client and fill in the PostgreSQL connection string:

postgres://username:password@host:port/dbname

Some drivers and tools may require you to fill in these parameters separately. The following five are typically required:

| Parameter | Description | Example Value | Notes |
|-----------|-------------|---------------|-------|
| host | Database server address | 10.10.10.10 | Replace with your node IP or domain; can omit for localhost |
| port | Port number | 5432 | PG default port, can be omitted |
| username | Username | dbuser_dba | Pigsty default database admin |
| password | Password | DBUser.DBA | Pigsty default admin password (change this!) |
| dbname | Database name | meta | Default template database name |

For personal use, you can directly use the Pigsty default database superuser dbuser_dba for connection and management; it has full database privileges. If you specified the -g flag when running configure, the password was randomly generated and saved in ~/pigsty/pigsty.yml:

cat ~/pigsty/pigsty.yml | grep pg_admin_password

Default Accounts

Pigsty’s default single-node template presets the following database users, ready to use out of the box:

| Username | Password | Role | Purpose |
|----------|----------|------|---------|
| dbuser_dba | DBUser.DBA | Superuser | Database admin (change this!) |
| dbuser_meta | DBUser.Meta | Business admin | App R/W (change this!) |
| dbuser_view | DBUser.Viewer | Read-only user | Data viewing (change this!) |

For example, you can connect to the meta database in the pg-meta cluster using three different connection strings with three different users:

postgres://dbuser_dba:[email protected]:5432/meta
postgres://dbuser_meta:[email protected]:5432/meta
postgres://dbuser_view:[email protected]:5432/meta

Note: These default passwords are automatically replaced with random strong passwords when using configure -g. Remember to replace the IP address and password with actual values.


Using CLI Tools

psql is the official PostgreSQL CLI client tool, powerful and the first choice for DBAs and developers.

On a server with Pigsty deployed, you can directly use psql to connect to the local database:

# Simplest way: use postgres system user for local connection (no password needed)
sudo -u postgres psql

# Use connection string (recommended, most universal)
psql 'postgres://dbuser_dba:[email protected]:5432/meta'

# Use parameter form
psql -h 10.10.10.10 -p 5432 -U dbuser_dba -d meta

# Use env vars to avoid password appearing in command line
export PGPASSWORD='DBUser.DBA'
psql -h 10.10.10.10 -p 5432 -U dbuser_dba -d meta

After successful connection, you’ll see a prompt like this:

psql (18.1)
Type "help" for help.

meta=#

Common psql Commands

After entering psql, you can execute SQL statements or use meta-commands starting with \:

| Command | Description | Command | Description |
|---------|-------------|---------|-------------|
| Ctrl+C | Interrupt query | Ctrl+D | Exit psql |
| \? | Show all meta commands | \h | Show SQL command help |
| \l | List all databases | \c dbname | Switch to database |
| \d table | View table structure | \d+ table | View table details |
| \du | List all users/roles | \dx | List installed extensions |
| \dn | List all schemas | \dt | List all tables |

Executing SQL

In psql, directly enter SQL statements ending with semicolon ;:

-- Check PostgreSQL version
SELECT version();

-- Check current time
SELECT now();

-- Create a test table
CREATE TABLE test (id SERIAL PRIMARY KEY, name TEXT, created_at TIMESTAMPTZ DEFAULT now());

-- Insert data
INSERT INTO test (name) VALUES ('hello'), ('world');

-- Query data
SELECT * FROM test;

-- Drop test table
DROP TABLE test;

Using Graphical Clients

If you prefer graphical interfaces, here are some popular PostgreSQL clients:

Grafana

Pigsty’s INFRA module includes Grafana with a pre-configured PostgreSQL data source (Meta). You can directly query the database using SQL from the Grafana Explore panel through the browser graphical interface, no additional client tools needed.

Grafana’s default username is admin, and the password can be found in the grafana_admin_password field in the inventory (default pigsty).
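
You can retrieve it the same way as the database passwords, assuming the default inventory location:

grep grafana_admin_password ~/pigsty/pigsty.yml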

DataGrip

DataGrip is a professional database IDE from JetBrains, with powerful features. IntelliJ IDEA’s built-in Database Console can also connect to PostgreSQL in a similar way.

DBeaver

DBeaver is a free open-source universal database tool supporting almost all major databases. It’s a cross-platform desktop client.

pgAdmin

pgAdmin is the official PostgreSQL-specific GUI tool from PGDG, available through browser or as a desktop client.

Pigsty provides a configuration template for one-click pgAdmin service deployment using Docker in Software Template: pgAdmin.


Viewing Monitoring Dashboards

Pigsty provides many PostgreSQL monitoring dashboards, covering everything from cluster overview to single-table analysis.

We recommend starting with PGSQL Overview. Many elements in the dashboards are clickable, allowing you to drill down layer by layer to view details of each cluster, instance, database, and even internal database objects like tables, indexes, and functions.


Trying Extensions

One of PostgreSQL’s most powerful features is its extension ecosystem. Extensions can add new data types, functions, index methods, and more to the database.

Pigsty provides an unparalleled 440+ extensions in the PG ecosystem, covering 16 major categories including time-series, geographic, vector, and full-text search—install with one click. Start with three powerful and commonly used extensions that are automatically installed in Pigsty’s default template. You can also install more extensions as needed.

  • postgis: Geographic information system for processing maps and location data
  • pgvector: Vector database supporting AI embedding vector similarity search
  • timescaledb: Time-series database for efficient storage and querying of time-series data
\dx                            -- psql meta command, list installed extensions
TABLE pg_available_extensions; -- List available extensions
CREATE EXTENSION postgis;      -- Enable postgis extension
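
As a quick taste, here is a minimal pgvector sketch: create a table with a vector column and run a nearest-neighbor query (the table and data are hypothetical; it assumes the vector extension is available in your database):

CREATE EXTENSION IF NOT EXISTS vector;                  -- enable pgvector
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
SELECT id, embedding <-> '[3,1,2]' AS distance          -- <-> is the L2 distance operator
FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 1;    -- nearest neighbor
DROP TABLE items;                                       -- clean up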

Next Steps

Congratulations on completing the PostgreSQL basics! Next, you can start configuring and customizing your database.

4 - Customize Pigsty with Configuration

Express your infra and clusters with declarative config files

Besides using the configuration wizard to auto-generate configs, you can write Pigsty config files from scratch. This tutorial guides you through building a complex inventory step by step.

If you define everything in the inventory upfront, a single deploy.yml playbook run completes all deployment—but it hides the details.

This doc breaks down all modules and playbooks, showing how to incrementally build from a simple config to a complete deployment.


Minimal Configuration

The simplest valid config only defines the admin_ip variable—the IP address of the node where Pigsty is installed (admin node):

all: { vars: { admin_ip: 10.10.10.10 } }
# Set region: china to use mirrors
all: { vars: { admin_ip: 10.10.10.10, region: china } }

This config deploys nothing, but running ./deploy.yml generates a self-signed CA in files/pki/ca for issuing certificates.
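
For example, a quick sanity check of the minimal config:

./deploy.yml          # apply the minimal blueprint
ls files/pki/ca/      # the self-signed CA is generated here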

For convenience, you can also set region to specify which region’s software mirrors to use (default, china, europe).


Add Nodes

Pigsty’s NODE module manages cluster nodes. Any IP address in the inventory will be managed by Pigsty with the NODE module installed.

all:  # Remember to replace 10.10.10.10 with your actual IP
  children: { nodes: { hosts: { 10.10.10.10: {} } } }
  vars:
    admin_ip: 10.10.10.10                   # Current node IP
    region: default                         # Default repos
    node_repo_modules: node,pgsql,infra     # Add node, pgsql, infra repos
all:  # Remember to replace 10.10.10.10 with your actual IP
  children: { nodes: { hosts: { 10.10.10.10: {} } } }
  vars:
    admin_ip: 10.10.10.10                 # Current node IP
    region: china                         # Use mirrors
    node_repo_modules: node,pgsql,infra   # Add node, pgsql, infra repos

We added two global parameters: node_repo_modules specifies repos to add; region specifies which region’s mirrors to use.

These parameters enable the node to use correct repositories and install required packages. The NODE module offers many customization options: node names, DNS, repos, packages, NTP, kernel params, tuning templates, monitoring, log collection, etc. Even without changes, the defaults are sufficient.

Run deploy.yml or more precisely node.yml to bring the defined node under Pigsty management.

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|----|------|-------|------|-------|-------------|
| 1 | 10.10.10.10 | - | - | - | Add node |

Add Infrastructure

A full-featured RDS cloud database service needs infrastructure support: monitoring (metrics/log collection, alerting, visualization), NTP, DNS, and other foundational services.

Define a special group infra to deploy the INFRA module:

all:  # Simply changed group name from nodes -> infra and added infra_seq
  children: { infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } } }
  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra
all:  # Simply changed group name from nodes -> infra and added infra_seq
  children: { infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } } }
  vars:
    admin_ip: 10.10.10.10
    region: china
    node_repo_modules: node,pgsql,infra

We also assigned an identity parameter: infra_seq to distinguish nodes in multi-node HA INFRA deployments.

Run infra.yml to install the INFRA and NODE modules on 10.10.10.10:

./infra.yml   # Install INFRA module on infra group (includes NODE module)

The NODE module is implicitly defined for any IP in the inventory, and it is idempotent: re-running has no side effects.

After completion, you’ll have complete observability infrastructure and node monitoring, but PostgreSQL database service is not yet deployed.

If your goal is just to set up this monitoring system (Grafana + Victoria), you’re done! The infra template is designed for this. Everything in Pigsty is modular: you can deploy only monitoring infra without databases; or vice versa—run HA PostgreSQL clusters without infra—Slim Install.

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|----|------|-------|------|-------|-------------|
| 1 | 10.10.10.10 | infra-1 | - | - | Add infrastructure |

Deploy Database Cluster

To provide PostgreSQL service, install the PGSQL module and its dependency ETCD—just two lines of config:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } } # Add pg cluster
  vars: { admin_ip: 10.10.10.10, region: default, node_repo_modules: node,pgsql,infra }
all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } } # Add pg cluster
  vars: { admin_ip: 10.10.10.10, region: china, node_repo_modules: node,pgsql,infra }

We added two new groups: etcd and pg-meta, defining a single-node etcd cluster and a single-node PostgreSQL cluster.

Use ./deploy.yml to redeploy everything, or incrementally deploy:

./etcd.yml  -l etcd      # Install ETCD module on etcd group
./pgsql.yml -l pg-meta   # Install PGSQL module on pg-meta group

PGSQL depends on ETCD for HA consensus, so install ETCD first. After completion, you have a working PostgreSQL service!

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|----|------|-------|------|-------|-------------|
| 1 | 10.10.10.10 | infra-1 | etcd-1 | pg-meta-1 | Add etcd and PostgreSQL cluster |

We used node.yml, infra.yml, etcd.yml, and pgsql.yml to deploy all four core modules on a single machine.


Define Databases and Users

In Pigsty, you can customize PostgreSQL cluster internals like databases and users through the inventory:

all:
  children:
    # Other groups and variables hidden for brevity
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:       # Define database users
          - { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user  }
        pg_databases:   # Define business databases
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [vector] }
  • pg_users: Defines a new user dbuser_meta with password DBUser.Meta
  • pg_databases: Defines a new database meta with Pigsty CMDB schema (optional) and vector extension

Pigsty offers rich customization parameters covering all aspects of databases and users. If you define these parameters upfront, they’re automatically created during ./pgsql.yml execution. For existing clusters, you can incrementally create or modify users and databases:

bin/pgsql-user pg-meta dbuser_meta      # Ensure user dbuser_meta exists in pg-meta
bin/pgsql-db   pg-meta meta             # Ensure database meta exists in pg-meta

Configure PG Version and Extensions

You can install different major versions of PostgreSQL, and up to 440 extensions. Let’s remove the current default PG 18 and install PG 17:

./pgsql-rm.yml -l pg-meta   # Remove old pg-meta cluster (it's PG 18)

We can customize parameters to install and enable common extensions by default: timescaledb, postgis, and pgvector:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_version: 17   # Specify PG version as 17
        pg_extensions: [ timescaledb, postgis, pgvector ]      # Install these extensions
        pg_libs: 'timescaledb,pg_stat_statements,auto_explain'  # Preload these extension libraries
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [vector, postgis, timescaledb ] } ]
        pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user } ]

  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra
./pgsql.yml -l pg-meta   # Install PG17 and extensions, recreate pg-meta cluster

Add More Nodes

Add more nodes to the deployment, bring them under Pigsty management, deploy monitoring, configure repos, install software…

# Add entire cluster at once, or add nodes individually
bin/node-add pg-test

bin/node-add 10.10.10.11
bin/node-add 10.10.10.12
bin/node-add 10.10.10.13

Deploy HA PostgreSQL Cluster

Now deploy a new database cluster pg-test on the three newly added nodes, using a three-node HA architecture:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica  }
        10.10.10.13: { pg_seq: 3, pg_role: replica  }
      vars: { pg_cluster: pg-test }
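
Then bring the new cluster online with the convenience scripts shown earlier:

bin/node-add  pg-test    # bring the 3 nodes under Pigsty management
bin/pgsql-add pg-test    # initialize the 3-node HA PostgreSQL cluster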

Deploy Redis Cluster

Pigsty provides optional Redis support as a caching service in front of PostgreSQL:

bin/redis-add redis-ms
bin/redis-add redis-meta
bin/redis-add redis-test

Redis HA requires cluster mode or sentinel mode. See Redis Configuration.


Deploy MinIO Cluster

Pigsty provides optional support for MinIO, an open-source, S3-compatible object storage service that can serve as a backup repository for PostgreSQL.

./minio.yml -l minio

Serious prod MinIO deployments typically require at least 4 nodes with 4 disks each (4N/16D).
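
For a single-node trial, the inventory just needs a minio group first; a minimal sketch, assuming the identity parameters minio_seq and minio_cluster:

minio:
  hosts: { 10.10.10.10: { minio_seq: 1 } }
  vars:  { minio_cluster: minio }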


Deploy Docker Module

If you want to use containers to run tools for managing PG or software using PostgreSQL, install the DOCKER module:

./docker.yml -l infra

Use pre-made application templates to launch common software tools with one click, such as pgAdmin, the GUI tool for PG management:

./app.yml    -l infra -e app=pgadmin

You can even self-host enterprise-grade Supabase with Pigsty, using external HA PostgreSQL clusters as the foundation and running stateless components in containers.

5 - Run Playbooks with Ansible

Use Ansible playbooks to deploy and manage Pigsty clusters

Pigsty uses Ansible to manage clusters. Ansible is a popular automation tool in the SRE community for large-scale batch operations.

Ansible takes a declarative approach to server configuration management. All module deployments are implemented as a series of idempotent Ansible playbooks.

For example, in single-node deployment you'll use the deploy.yml playbook. Pigsty ships many more built-in playbooks, which you can use as needed.

Understanding Ansible basics helps with better use of Pigsty, but this is not required, especially for single-node deployment.


Deploy Playbook

Pigsty provides a “one-stop” deploy playbook deploy.yml, installing all modules on the current env in one go (if defined in config):

| Playbook | Command | Group |
|----------|---------|-------|
| infra.yml | ./infra.yml | -l infra |
| node.yml | ./node.yml | (all hosts) |
| etcd.yml | ./etcd.yml | -l etcd |
| minio.yml | ./minio.yml | -l minio |
| pgsql.yml | ./pgsql.yml | (all hosts) |

This is the simplest deployment method. You can also follow instructions in Customization Guide to incrementally complete deployment of all modules and nodes step by step.


Install Ansible

When using the Pigsty installation script, or the bootstrap phase of offline installation, Pigsty will automatically install ansible and its dependencies for you.

If you want to manually install Ansible, refer to the following instructions. The minimum supported Ansible version is 2.9.

sudo apt install -y ansible python3-jmespath        # Debian / Ubuntu
sudo dnf install -y ansible python-jmespath         # EL 10
sudo dnf install -y ansible python3.12-jmespath     # EL 9/8
brew install ansible                                # macOS
pip3 install jmespath                               # macOS

Ansible is also available on macOS. You can use Homebrew to install Ansible on Mac, and use it as an admin node to manage remote cloud servers. This is convenient for single-node Pigsty deployment on cloud VPS, but not recommended in prod envs.


Execute Playbook

Ansible playbooks are executable YAML files containing a series of task definitions. Running playbooks requires the ansible-playbook executable on your PATH. Running the ./node.yml playbook is essentially executing the ansible-playbook node.yml command.

You can use some parameters to fine-tune playbook execution. The following 4 parameters are essential for effective Ansible use:

| Purpose | Parameter | Description |
|---------|-----------|-------------|
| Target | -l\|--limit <pattern> | Limit execution to specific groups/hosts/patterns |
| Tasks | -t\|--tags <tags> | Only run tasks with specific tags |
| Params | -e\|--extra-vars <vars> | Extra command-line parameters |
| Config | -i\|--inventory <path> | Use a specific inventory file |

./node.yml                         # Run node playbook on all hosts
./pgsql.yml -l pg-test             # Run pgsql playbook on pg-test cluster
./infra.yml -t repo_build          # Run infra.yml subtask repo_build
./pgsql-rm.yml -e pg_rm_pkg=false  # Remove pgsql, but keep packages (don't uninstall software)
./infra.yml -i conf/mynginx.yml    # Use another location's config file

Limit Hosts

Playbook execution targets can be limited with -l|--limit <selector>. This is convenient when running playbooks on specific hosts/nodes or groups/clusters. Here are some host limit examples:

./pgsql.yml                              # Run on all hosts (dangerous!)
./pgsql.yml -l pg-test                   # Run on pg-test cluster
./pgsql.yml -l 10.10.10.10               # Run on single host 10.10.10.10
./pgsql.yml -l pg-*                      # Run on hosts/groups matching glob `pg-*`
./pgsql.yml -l '10.10.10.11,&pg-test'    # Run on 10.10.10.11 in pg-test group
./pgsql-rm.yml -l 'pg-test,!10.10.10.11' # Run on pg-test, except 10.10.10.11

See all details in Ansible documentation: Patterns: targeting hosts and groups


Limit Tasks

Execution tasks can be controlled with -t|--tags <tags>. If specified, only tasks with the given tags will execute instead of the entire playbook.

./infra.yml -t repo          # Create repo
./node.yml  -t node_pkg      # Install node packages
./pgsql.yml -t pg_install    # Install PG packages and extensions
./etcd.yml  -t etcd_purge    # Destroy ETCD cluster
./minio.yml -t minio_alias   # Write MinIO CLI config

To run multiple tasks, specify multiple tags separated by commas -t tag1,tag2:

./node.yml  -t node_repo,node_pkg   # Add repos, then install packages
./pgsql.yml -t pg_hba,pg_reload     # Configure, then reload pg hba rules

Extra Vars

You can override config parameters at runtime using CLI arguments, which have the highest priority.

Extra command-line parameters are passed via -e|--extra-vars KEY=VALUE, usable multiple times:

# Create admin using another admin user
./node.yml -e ansible_user=admin -k -K -t node_admin

# Initialize a specific Redis instance: 10.10.10.10:6379
./redis.yml -l 10.10.10.10 -e redis_port=6379 -t redis

# Remove PostgreSQL but keep packages and data
./pgsql-rm.yml -e pg_rm_pkg=false -e pg_rm_data=false

For complex parameters, use JSON strings to pass multiple complex parameters at once:

# Add repo and install packages
./node.yml -t node_install -e '{"node_repo_modules":"infra","node_packages":["duckdb"]}'

Specify Inventory

The default config file is pigsty.yml in the Pigsty home directory.

You can use -i <path> to specify a different inventory file path.

./pgsql.yml -i conf/rich.yml            # Initialize single node with all extensions per rich config
./pgsql.yml -i conf/ha/full.yml         # Initialize 4-node cluster per full config
./pgsql.yml -i conf/app/supa.yml        # Initialize 1-node Supabase deployment per supa.yml

Convenience Scripts

Pigsty provides a series of convenience scripts to simplify common operations. These scripts are in the bin/ directory:

bin/node-add   <cls>            # Add nodes to Pigsty management: ./node.yml -l <cls>
bin/node-rm    <cls>            # Remove nodes from Pigsty: ./node-rm.yml -l <cls>
bin/pgsql-add  <cls>            # Initialize PG cluster: ./pgsql.yml -l <cls>
bin/pgsql-rm   <cls>            # Remove PG cluster: ./pgsql-rm.yml -l <cls>
bin/pgsql-user <cls> <username> # Add business user: ./pgsql-user.yml -l <cls> -e username=<user>
bin/pgsql-db   <cls> <dbname>   # Add business database: ./pgsql-db.yml -l <cls> -e dbname=<db>
bin/redis-add  <cls>            # Initialize Redis cluster: ./redis.yml -l <cls>
bin/redis-rm   <cls>            # Remove Redis cluster: ./redis-rm.yml -l <cls>

These scripts are simple wrappers around Ansible playbooks, making common operations more convenient.


Playbook List

Below are the built-in playbooks in Pigsty. You can also easily add your own playbooks, or customize and modify playbook implementation logic as needed.

| Module | Playbook | Function |
|--------|----------|----------|
| INFRA | deploy.yml | One-click deploy Pigsty on current node |
| INFRA | infra.yml | Initialize Pigsty infrastructure on infra nodes |
| INFRA | infra-rm.yml | Remove infrastructure components from infra nodes |
| INFRA | cache.yml | Create offline packages from target node |
| INFRA | cert.yml | Issue certificates using Pigsty self-signed CA |
| NODE | node.yml | Initialize node, adjust to desired state |
| NODE | node-rm.yml | Remove node from Pigsty |
| PGSQL | pgsql.yml | Initialize HA PostgreSQL cluster or add replica |
| PGSQL | pgsql-rm.yml | Remove PostgreSQL cluster or replica |
| PGSQL | pgsql-db.yml | Add new business database to existing cluster |
| PGSQL | pgsql-user.yml | Add new business user to existing cluster |
| PGSQL | pgsql-pitr.yml | Perform point-in-time recovery on cluster |
| PGSQL | pgsql-monitor.yml | Monitor remote PostgreSQL with local exporter |
| PGSQL | pgsql-migration.yml | Generate migration manual and scripts |
| PGSQL | slim.yml | Install Pigsty with minimal components |
| REDIS | redis.yml | Initialize Redis cluster/node/instance |
| REDIS | redis-rm.yml | Remove Redis cluster/node/instance |
| ETCD | etcd.yml | Initialize ETCD cluster or add new member |
| ETCD | etcd-rm.yml | Remove ETCD cluster/data or shrink member |
| MINIO | minio.yml | Initialize MinIO cluster (optional pgBackRest repo) |
| MINIO | minio-rm.yml | Remove MinIO cluster and data |
| DOCKER | docker.yml | Install Docker on nodes |
| DOCKER | app.yml | Install applications using Docker Compose |
| FERRET | mongo.yml | Install Mongo/FerretDB on nodes |

6 - Offline Installation

Install Pigsty in air-gapped env using offline packages

Pigsty installs from Internet upstream by default, but some envs are isolated from the Internet. To address this, Pigsty supports offline installation using offline packages. Think of them as Linux-native Docker images.


Overview

Offline packages bundle all required RPM/DEB packages and dependencies; they are snapshots of the local APT/YUM repo after a normal installation.

In serious prod deployments, we strongly recommend using offline packages. They ensure all future nodes have consistent software versions with the existing env, and avoid online installation failures caused by upstream changes (quite common!), guaranteeing you can run it independently forever.


Offline Packages

We typically release offline packages for the following Linux distros, using the latest OS minor version.

| Linux Distribution | System Code | Minor Version | Package |
|--------------------|-------------|---------------|---------|
| RockyLinux 8 x86_64 | el8.x86_64 | 8.10 | pigsty-pkg-v4.0.0.el8.x86_64.tgz |
| RockyLinux 8 aarch64 | el8.aarch64 | 8.10 | pigsty-pkg-v4.0.0.el8.aarch64.tgz |
| RockyLinux 9 x86_64 | el9.x86_64 | 9.6 | pigsty-pkg-v4.0.0.el9.x86_64.tgz |
| RockyLinux 9 aarch64 | el9.aarch64 | 9.6 | pigsty-pkg-v4.0.0.el9.aarch64.tgz |
| RockyLinux 10 x86_64 | el10.x86_64 | 10.0 | pigsty-pkg-v4.0.0.el10.x86_64.tgz |
| RockyLinux 10 aarch64 | el10.aarch64 | 10.0 | pigsty-pkg-v4.0.0.el10.aarch64.tgz |
| Debian 12 x86_64 | d12.x86_64 | 12.11 | pigsty-pkg-v4.0.0.d12.x86_64.tgz |
| Debian 12 aarch64 | d12.aarch64 | 12.11 | pigsty-pkg-v4.0.0.d12.aarch64.tgz |
| Debian 13 x86_64 | d13.x86_64 | 13.2 | pigsty-pkg-v4.0.0.d13.x86_64.tgz |
| Debian 13 aarch64 | d13.aarch64 | 13.2 | pigsty-pkg-v4.0.0.d13.aarch64.tgz |
| Ubuntu 24.04 x86_64 | u24.x86_64 | 24.04.2 | pigsty-pkg-v4.0.0.u24.x86_64.tgz |
| Ubuntu 24.04 aarch64 | u24.aarch64 | 24.04.2 | pigsty-pkg-v4.0.0.u24.aarch64.tgz |
| Ubuntu 22.04 x86_64 | u22.x86_64 | 22.04.5 | pigsty-pkg-v4.0.0.u22.x86_64.tgz |
| Ubuntu 22.04 aarch64 | u22.aarch64 | 22.04.5 | pigsty-pkg-v4.0.0.u22.aarch64.tgz |

If you use an OS from the list above (exact minor version match), we recommend using offline packages. Pigsty provides ready-to-use pre-made offline packages for these systems, freely downloadable from GitHub.

You can find these packages on the GitHub release page:

6a26fa44f90a16c7571d2aaf0e997d07  pigsty-v4.0.0.tgz
537839201c536a1211f0b794482d733b  pigsty-pkg-v4.0.0.el9.x86_64.tgz
85687cb56517acc2dce14245452fdc05  pigsty-pkg-v4.0.0.el9.aarch64.tgz
a333e8eb34bf93f475c85a9652605139  pigsty-pkg-v4.0.0.el10.x86_64.tgz
4b98b463e2ebc104c35ddc94097e5265  pigsty-pkg-v4.0.0.el10.aarch64.tgz
4f62851c9d79a490d403f59deb4823f4  pigsty-pkg-v4.0.0.el8.x86_64.tgz
66e283c9f6bfa80654f7ed3ffb9b53e5  pigsty-pkg-v4.0.0.el8.aarch64.tgz
f7971d9d6aab1f8f307556c2f64b701c  pigsty-pkg-v4.0.0.d12.x86_64.tgz
c4d870e5ef61ed05724c15fbccd1220b  pigsty-pkg-v4.0.0.d12.aarch64.tgz
408991c5ff028b5c0a86fac804d64b93  pigsty-pkg-v4.0.0.d13.x86_64.tgz
8d7c9404b97a11066c00eb7fc1330181  pigsty-pkg-v4.0.0.d13.aarch64.tgz
2a25eff283332d9006854f36af6602b2  pigsty-pkg-v4.0.0.u24.x86_64.tgz
a4fb30148a2d363bbfd3bec0daa14ab6  pigsty-pkg-v4.0.0.u24.aarch64.tgz
87bb91ef703293b6ec5b77ae3bb33d54  pigsty-pkg-v4.0.0.u22.x86_64.tgz
5c81bdaa560dad4751840dec736fe404  pigsty-pkg-v4.0.0.u22.aarch64.tgz

Using Offline Packages

Offline installation steps:

  1. Download the Pigsty offline package and place it at /tmp/pkg.tgz
  2. Download the Pigsty source package, extract it, and enter the directory (assuming extraction to home: cd ~/pigsty)
  3. Run ./bootstrap: it extracts the offline package, sets up the local repo, and installs ansible from it offline
  4. Run ./configure -g -c rich: the rich template is preconfigured for offline installation, or write your own config
  5. Run ./deploy.yml as usual—it will install everything from the local repo
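
Condensed into commands (a sketch, assuming the v4.0.0 source tarball and the el9.x86_64 offline package are already downloaded to the target machine):

tar -xf pigsty-v4.0.0.tgz -C ~ ; cd ~/pigsty          # extract source into home directory
mv pigsty-pkg-v4.0.0.el9.x86_64.tgz /tmp/pkg.tgz      # offline package at the expected path
./bootstrap               # extract pkg.tgz, set up local repo, install ansible offline
./configure -g -c rich    # rich template with random passwords
./deploy.yml              # install everything from the local repo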

If you want to use an already extracted and configured offline package with your own config, ensure these settings:

  • repo_enabled: Set to true, will build local software repo (explicitly disabled in most templates)
  • node_repo_modules: Set to local, then all nodes in the env will install from the local software repo
    • In most templates, this is explicitly set to: node,infra,pgsql, i.e., install directly from these upstream repos.
    • Setting it to local will use the local software repo to install all packages, fastest, no interference from other repos.
    • If you want to use both local and upstream repos, you can add other repo module names too, e.g., local,node,infra,pgsql

In short: the first parameter builds the local repo, and the second decides whether nodes install from it exclusively (local) or alongside upstream repos (e.g., local,node,infra,pgsql).
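
In inventory form, based on the two parameters above:

all:
  vars:
    repo_enabled: true            # build a local software repo on the infra node
    node_repo_modules: local      # all nodes install from the local repo only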

Hybrid Installation Mode

If your env has Internet access, there’s a hybrid approach combining advantages of offline and online installation. You can use the offline package as a base, and supplement missing packages online.

For example, if you're running RockyLinux 9.5 but the official offline package targets RockyLinux 9.6, you can still use the el9 offline package, then execute make repo-build before the formal installation to re-download the packages missing for 9.5. Pigsty will fetch the required increments from upstream repos.


Making Offline Packages

If your OS isn’t in the default list, you can make your own offline package with the built-in cache.yml playbook:

  1. Find a node running the exact same OS version with Internet access
  2. Use rich config template to perform online installation (configure -c rich)
  3. cd ~/pigsty; ./cache.yml: make and fetch the offline package to ~/pigsty/dist/${version}/
  4. Copy the offline package to the env without Internet access (ftp, scp, usb, etc.), extract and use via bootstrap
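
The core of steps 2 and 3, as commands (run on the Internet-connected build node):

./configure -c rich       # online install with the rich template
./deploy.yml              # perform the normal online installation
./cache.yml               # produce the offline package under ~/pigsty/dist/${version}/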

We offer paid services providing tested, pre-made offline packages for specific Linux major.minor versions (¥200).


Bootstrap

Pigsty relies on ansible to execute playbooks; this script is responsible for ensuring ansible is correctly installed in various ways.

./bootstrap       # Ensure ansible is correctly installed (if offline package exists, use offline installation and extract first)

Usually, you need to run this script in two cases:

  • You didn't install Pigsty via the installation script, but by downloading or git-cloning the source package, so ansible isn't installed yet.
  • You’re preparing to install Pigsty via offline packages and need to use this script to install ansible from the offline package.

The bootstrap script will automatically detect if the offline package exists (-p to specify, default is /tmp/pkg.tgz). If it exists, it will extract and use it, then install ansible from it. If the offline package doesn’t exist, it will try to install ansible from the Internet. If that still fails, you’re on your own!

7 - Slim Installation

Install only HA PostgreSQL clusters with minimal dependencies

If you only want HA PostgreSQL database cluster itself without monitoring, infra, etc., consider Slim Installation.

Slim installation has no INFRA module, no monitoring, and no local repo—just ETCD, PGSQL, and partial NODE functionality.


Overview

To use slim installation, you need to:

  1. Use the slim.yml slim install config template (configure -c slim)
  2. Run the slim.yml playbook instead of the default deploy.yml
curl https://repo.pigsty.io/get | bash
./configure -g -c slim
./slim.yml

Description

Slim installation only installs/configures these components:

| Component | Required | Description |
|-----------|----------|-------------|
| patroni | ⚠️ Required | Bootstrap HA PostgreSQL cluster |
| etcd | ⚠️ Required | Meta database dependency (DCS) for Patroni |
| pgbouncer | ✔️ Optional | PostgreSQL connection pooler |
| vip-manager | ✔️ Optional | L2 VIP binding to PostgreSQL cluster primary |
| haproxy | ✔️ Optional | Auto-routing services via Patroni health checks |
| chronyd | ✔️ Optional | Time synchronization with NTP server |
| tuned | ✔️ Optional | Node tuning template and kernel parameter management |

You can disable all optional components via configuration, keeping only the required patroni and etcd.
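
A sketch of what that could look like; these parameter names are assumptions drawn from Pigsty's defaults, so check the parameter reference before use:

all:
  vars:
    pgbouncer_enabled: false      # skip the connection pooler (assumed parameter)
    pg_vip_enabled: false         # skip L2 VIP binding (assumed parameter)
    haproxy_enabled: false        # skip haproxy service routing (assumed parameter)
    node_ntp_enabled: false       # keep existing time sync setup (assumed parameter)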

Because there’s no INFRA module’s Nginx providing local repo service, offline installation only works in single-node mode.


Configuration

Slim installation config file example: conf/slim.yml:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | No INFRA module | etcd-1 |

---
#==============================================================#
# File      :   slim.yml
# Desc      :   Pigsty slim installation config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for slim / minimal installation
# No monitoring & infra will be installed, just raw postgresql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c slim
#   ./slim.yml

all:
  children:

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        #10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        #10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        #10.10.10.11: { pg_seq: 2, pg_role: replica } # you can add more!
        #10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    nodename_overwrite: false           # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Deployment

Slim installation uses the slim.yml playbook instead of deploy.yml:

./slim.yml

HA Cluster

Slim installation can also deploy HA clusters—just add more nodes to the etcd and pg-meta groups. A three-node deployment example:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | No INFRA module | etcd-1 |
| 2 | 10.10.10.11 | pg-meta-2 | No INFRA module | etcd-2 |
| 3 | 10.10.10.12 | pg-meta-3 | No INFRA module | etcd-3 |

all:
  children:
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
        10.10.10.11: { etcd_seq: 2 }  # <-- New
        10.10.10.12: { etcd_seq: 3 }  # <-- New

    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica } # <-- New
        10.10.10.12: { pg_seq: 3, pg_role: replica } # <-- New
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am
  vars:
    # omitted ……

8 - Security Tips

Three security hardening tips for single-node quick-start deployment

For Demo/Dev single-node deployments, Pigsty’s default config is secure enough as long as you change default passwords.

If your deployment is exposed to the Internet or an office network, consider adding firewall rules to restrict port access and source IPs for enhanced security.

Additionally, we recommend protecting Pigsty’s critical files (config files and CA private key) from unauthorized access and backing them up regularly.

For enterprise prod envs with strict security requirements, refer to the Deployment - Security Hardening documentation for advanced configuration.


Passwords

Pigsty is an open-source project with well-known default passwords. If your deployment is exposed to the Internet or an office network, you must change all default passwords!

| Module | Parameter | Default Value |
|--------|-----------|---------------|
| INFRA | grafana_admin_password | pigsty |
| INFRA | grafana_view_password | DBUser.Viewer |
| PGSQL | pg_admin_password | DBUser.DBA |
| PGSQL | pg_monitor_password | DBUser.Monitor |
| PGSQL | pg_replication_password | DBUser.Replicator |
| PGSQL | patroni_password | Patroni.API |
| NODE | haproxy_admin_password | pigsty |
| MINIO | minio_secret_key | S3User.MinIO |
| ETCD | etcd_root_password | Etcd.Root |

To avoid manually modifying passwords, Pigsty’s configuration wizard provides automatic random strong password generation using the -g argument with configure.

$ ./configure -g
configure pigsty v4.0.0 begin
[ OK ] region = china
[WARN] kernel  = Darwin, can be used as admin node only
[ OK ] machine = arm64
[ OK ] package = brew (macOS)
[WARN] primary_ip = default placeholder 10.10.10.10 (macOS)
[ OK ] mode = meta (unknown distro)
[ OK ] locale  = C.UTF-8
[ OK ] generating random passwords...
    grafana_admin_password   : CdG0bDcfm3HFT9H2cvFuv9w7
    pg_admin_password        : 86WqSGdokjol7WAU9fUxY8IG
    pg_monitor_password      : 0X7PtgMmLxuCd2FveaaqBuX9
    pg_replication_password  : 4iAjjXgEY32hbRGVUMeFH460
    patroni_password         : DsD38QLTSq36xejzEbKwEqBK
    haproxy_admin_password   : uhdWhepXrQBrFeAhK9sCSUDo
    minio_secret_key         : z6zrYUN1SbdApQTmfRZlyWMT
    etcd_root_password       : Bmny8op1li1wKlzcaAmvPiWc
    DBUser.Meta              : U5v3CmeXICcMdhMNzP9JN3KY
    DBUser.Viewer            : 9cGQF1QMNCtV3KlDn44AEzpw
    S3User.Backup            : 2gjgSCFYNmDs5tOAiviCqM2X
    S3User.Meta              : XfqkAKY6lBtuDMJ2GZezA15T
    S3User.Data              : OygorcpCbV7DpDmqKe3G6UOj
[ OK ] random passwords generated, check and save them
[ OK ] ansible = ready
[ OK ] pigsty configured
[WARN] don't forget to check it and change passwords!
proceed with ./deploy.yml

Firewall

For deployments exposed to the Internet or office networks, we strongly recommend configuring firewall rules to limit the allowed ports and source IP ranges.

You can use your cloud provider’s security group features, or Linux distribution firewall services (like firewalld, ufw, iptables, etc.) to implement this.

| Direction | Protocol | Port | Service | Description |
|-----------|----------|------|---------|-------------|
| Inbound | TCP | 22 | SSH | Allow SSH login access |
| Inbound | TCP | 80 | Nginx | Allow Nginx HTTP access |
| Inbound | TCP | 443 | Nginx | Allow Nginx HTTPS access |
| Inbound | TCP | 5432 | PostgreSQL | Remote database access, enable as needed |

Pigsty supports configuring firewall rules to allow 22/80/443/5432 from external networks, but this is not enabled by default.
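
For example, with ufw on Debian/Ubuntu (a sketch; adapt ports and source ranges to your environment):

sudo ufw allow 22/tcp       # ssh
sudo ufw allow 80/tcp       # nginx http
sudo ufw allow 443/tcp      # nginx https
sudo ufw allow from 10.0.0.0/8 to any port 5432 proto tcp   # pg from intranet only
sudo ufw enable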


Files

In Pigsty, you need to protect the following files:

  • pigsty.yml: Pigsty main config file, contains access information and passwords for all nodes
  • files/pki/ca/ca.key: Pigsty self-signed CA private key, used to issue all SSL certificates in the deployment (auto-generated during deployment)

We recommend strictly controlling access permissions for these two files, regularly backing them up, and storing them in a secure location.
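
For example, a sketch that tightens permissions and keeps an off-node copy:

chmod 600 ~/pigsty/pigsty.yml ~/pigsty/files/pki/ca/ca.key     # owner-only access
tar -czf pigsty-secrets.tgz -C ~/pigsty pigsty.yml files/pki/ca/ca.key
# copy pigsty-secrets.tgz to secure storage, e.g. scp it to a backup host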