Concepts

Understand Pigsty’s core concepts, architecture design, and principles. Master high availability, backup recovery, security compliance, and other key capabilities.

Pigsty is a portable, extensible open-source PostgreSQL distribution for building production-grade database services in local environments with declarative configuration and automation. It has a vast ecosystem providing a complete set of tools, scripts, and best practices to bring PostgreSQL to enterprise-grade RDS service levels.

Pigsty’s name comes from PostgreSQL In Great STYle, also understood as Postgres, Infras, Graphics, Service, Toolbox, it’s all Yours—a self-hosted PostgreSQL solution with graphical monitoring that’s all yours. You can find the source code on GitHub, visit the official documentation for more information, or experience the Web UI in the online demo.

pigsty-banner


Why Pigsty? What Can It Do?

PostgreSQL is a nearly perfect database kernel, but it takes many more tools and systems to turn it into a truly excellent database service. In production environments, you need to manage every aspect of your database: high availability, backup and recovery, monitoring and alerting, access control, parameter tuning, extension installation, connection pooling, load balancing…

Wouldn’t it be easier if all this complex operational work could be automated? This is precisely why Pigsty was created.

Pigsty provides:

  • Out-of-the-Box PostgreSQL Distribution

    Pigsty deeply integrates 440+ extensions from the PostgreSQL ecosystem, providing out-of-the-box distributed, time-series, geospatial, graph, vector, search, and other multi-modal database capabilities. From kernel to RDS distribution, it delivers production-grade database services for PostgreSQL 13-18 on EL/Debian/Ubuntu.

  • Self-Healing High Availability Architecture

    A high-availability architecture built on Patroni, Etcd, and HAProxy enables automatic failover on hardware failure with seamless traffic handoff: RTO < 30s for primary failure, RPO ≈ 0 for data loss. You can perform rolling maintenance and upgrades on the entire cluster without application coordination.

  • Complete Point-in-Time Recovery Capability

    Based on pgBackRest and an optional MinIO cluster, Pigsty provides out-of-the-box point-in-time recovery (PITR), giving you the ability to quickly roll back to any point in time and protecting against software defects and accidental data deletion.

  • Flexible Service Access and Traffic Management

    Through HAProxy, Pgbouncer, and VIP, Pigsty provides flexible service access patterns with read-write separation, connection pooling, and automatic routing, delivering stable, reliable, transaction-pooled, high-performance database services.

  • Stunning Observability

    A modern observability stack based on Prometheus and Grafana provides unparalleled monitoring best practices. Over three thousand monitoring metrics describe every aspect of the system, from global overview dashboards down to individual database objects.

  • Declarative Configuration Management

    Following the Infrastructure as Code philosophy, using declarative configuration to describe the entire environment. You just tell Pigsty “what kind of database cluster you want” without worrying about how to implement it—the system automatically adjusts to the desired state.

  • Modular Architecture Design

    A modular architecture design that can be freely combined to suit different scenarios. Beyond the core PostgreSQL module, it also provides optional modules for Redis, MinIO, Etcd, FerretDB, and support for various PG-compatible kernels.

  • Solid Security Best Practices

    Industry-leading security best practices: self-signed CA certificate encryption, AES encrypted backups, scram-sha-256 encrypted passwords, out-of-the-box ACL model, HBA rule sets following the principle of least privilege, ensuring data security.

  • Simple and Easy Deployment

    All dependencies are pre-packaged for one-click installation in environments without internet access. Local sandbox environments can run on micro VMs with 1 core and 2GB RAM, providing functionality identical to production environments. Provides Vagrant-based local sandboxes and Terraform-based cloud deployments.


What Pigsty Is Not

Pigsty is not a traditional, all-encompassing PaaS (Platform as a Service) system.

  • Pigsty doesn’t provide basic hardware resources. It runs on nodes you provide, whether bare metal, VMs, or cloud instances, but it doesn’t create or manage these resources itself (though it provides Terraform templates to simplify cloud resource preparation).

  • Pigsty is not a container orchestration system. It runs directly on the operating system, not requiring Kubernetes or Docker as infrastructure. Of course, it can coexist with these systems and provides a Docker module for running stateless applications.

  • Pigsty is not a general database management tool. It focuses on PostgreSQL and its ecosystem. While it also supports peripheral components like Redis, Etcd, and MinIO, the core is always built around PostgreSQL.

  • Pigsty won’t lock you in. It’s built on open-source components, doesn’t modify the PostgreSQL kernel, and introduces no proprietary protocols. You can continue using your well-managed PostgreSQL clusters anytime without Pigsty.

Pigsty doesn’t restrict how you should or shouldn’t build your database services. For example:

  • Pigsty provides good parameter defaults and configuration templates, but you can override any parameter.
  • Pigsty provides a declarative API, but you can still use underlying tools (Ansible, Patroni, pgBackRest, etc.) for manual management.
  • Pigsty can manage the complete lifecycle, or you can use only its monitoring system to observe existing database instances or RDS.
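
For example, any default can be overridden at the cluster level in the inventory. The sketch below uses two parameters described later in this document (pgbackrest_method defaults to local, pgbouncer_poolmode defaults to transaction):

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-test
    pgbackrest_method: minio       # override the default local backup repo with MinIO
    pgbouncer_poolmode: session    # override the default transaction-level pooling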

Pigsty provides a different level of abstraction than the hardware layer—it works at the database service layer, focusing on how to deliver PostgreSQL at its best, rather than reinventing the wheel.


Evolution of PostgreSQL Deployment

To understand Pigsty’s value, let’s review the evolution of PostgreSQL deployment approaches.

Manual Deployment Era

In traditional deployment, DBAs needed to manually install and configure PostgreSQL, manually set up replication, manually configure monitoring, and manually handle failures. The problems with this approach are obvious:

  • Low efficiency: Each instance requires repeating many manual operations, prone to errors.
  • Lack of standardization: Databases configured by different DBAs can vary greatly, making maintenance difficult.
  • Poor reliability: Failure handling depends on manual intervention, with long recovery times and susceptibility to human error.
  • Weak observability: Lack of unified monitoring, making problem discovery and diagnosis difficult.

Managed Database Era

To solve these problems, cloud providers offer managed database services (RDS). Cloud RDS does solve some operational issues, but also brings new challenges:

  • High cost: Managed services typically charge several to dozens of times the underlying hardware cost as “service fees.”
  • Vendor lock-in: Migration is difficult, tied to specific cloud platforms.
  • Limited functionality: Cannot use certain advanced features, extensions are restricted, parameter tuning is limited.
  • Data sovereignty: Data stored in the cloud, reducing autonomy and control.

Local RDS Era

Pigsty represents a third approach: building database services in local environments that match or exceed cloud RDS.

Pigsty combines the advantages of both approaches:

  • High automation: One-click deployment, automatic configuration, self-healing failures—as convenient as cloud RDS.
  • Complete autonomy: Runs on your own infrastructure, data completely in your own hands.
  • Extremely low cost: Run enterprise-grade database services at near-pure-hardware costs.
  • Complete functionality: Unlimited use of PostgreSQL’s full capabilities and ecosystem extensions.
  • Open architecture: Based on open-source components, no vendor lock-in, free to migrate anytime.

This approach is particularly suitable for:

  • Private and hybrid clouds: Enterprises needing to run databases in local environments.
  • Cost-sensitive users: Organizations looking to reduce database TCO.
  • High-security scenarios: Critical data requiring complete autonomy and control.
  • PostgreSQL power users: Scenarios requiring advanced features and rich extensions.
  • Development and testing: Quickly setting up databases locally that match production environments.

What’s Next

Now that you understand Pigsty’s basic concepts, you can:

1 - Architecture

Pigsty’s modular architecture—declarative composition, on-demand customization, flexible deployment.

Pigsty uses a modular architecture with a declarative interface. You can freely combine modules like building blocks as needed.


Modules

Pigsty uses a modular design with six main default modules: PGSQL, INFRA, NODE, ETCD, REDIS, and MINIO.

  • PGSQL: Self-healing HA Postgres clusters powered by Patroni, Pgbouncer, HAProxy, pgBackRest, and more.
  • INFRA: The complete observability stack: local software repo, Nginx, Grafana, the Victoria stack (VictoriaMetrics/Logs/Traces), AlertManager, and Blackbox Exporter.
  • NODE: Tune nodes to desired state—hostname, timezone, NTP, ssh, sudo, haproxy, docker, vector, keepalived.
  • ETCD: Distributed key-value store as DCS for HA Postgres clusters: consensus leader election/config management/service discovery.
  • REDIS: Redis servers supporting standalone primary-replica, sentinel, and cluster modes with full monitoring.
  • MINIO: S3-compatible simple object storage that can serve as an optional backup destination for PG databases.

You can declaratively compose them freely. If you only want host monitoring, installing the INFRA module on infrastructure nodes and the NODE module on managed nodes is sufficient. The ETCD and PGSQL modules are used to build HA PG clusters—installing these modules on multiple nodes automatically forms a high-availability database cluster. You can reuse Pigsty infrastructure and develop your own modules; REDIS and MINIO can serve as examples. More modules will be added—preliminary support for Mongo and MySQL is already on the roadmap.
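
For example, a minimal host-monitoring-only deployment just declares an infra group and the hosts to manage. This is a sketch; the managed-node group name below is arbitrary:

all:
  children:
    infra:                                        # INFRA module: observability stack on the admin node
      hosts: { 10.10.10.10: { infra_seq: 1 } }
    nodes:                                        # hypothetical group of plain hosts (NODE module only)
      hosts:
        10.10.10.11: {}
        10.10.10.12: {}
  vars:
    admin_ip: 10.10.10.10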

Note that all modules depend strongly on the NODE module: in Pigsty, a node must first have the NODE module installed and be brought under management before other modules can be deployed. When nodes use the local software repo for installation (the default), the NODE module in turn has a weak dependency on the INFRA module. The deploy.yml playbook therefore bootstraps the admin/infrastructure nodes carrying the INFRA module first, which resolves this circular dependency.

pigsty-sandbox


Standalone Installation

By default, Pigsty installs on a single node (physical or virtual machine). The deploy.yml playbook installs the INFRA, ETCD, PGSQL, and optionally MINIO modules on the current node, giving you a fully featured observability stack (Grafana, VictoriaMetrics, VictoriaLogs, VMAlert, AlertManager, Blackbox Exporter, etc.), plus a built-in standalone PostgreSQL instance serving as the CMDB, ready to use out of the box (cluster name pg-meta, database name meta).

This node now has a complete self-monitoring system, visualization tools, and a Postgres database with PITR auto-configured (HA unavailable since you only have one node). You can use this node as a devbox, for testing, running demos, and data visualization/analysis. Or, use this node as an admin node to deploy and manage more nodes!

pigsty-arch


Monitoring

The installed standalone meta node can serve as an admin node and monitoring center to bring more nodes and database servers under its supervision and control.

Pigsty’s monitoring system can be used independently. If you want to deploy the Prometheus/Grafana observability stack, Pigsty provides best practices, offering rich dashboards for host nodes and PostgreSQL databases. Whether or not these nodes or PostgreSQL servers are managed by Pigsty, a little configuration immediately gives you a production-grade monitoring and alerting system and brings existing hosts and PostgreSQL instances under observation.
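
For example, existing instances that Pigsty does not manage can be scraped by declaring pg_exporters entries for them (the exporters typically run on INFRA nodes). The sketch below is illustrative only; the exact field names should be verified against the pg_exporters reference:

pg-foo:                        # hypothetical group for an existing external PostgreSQL / RDS
  hosts: { }                   # no managed hosts: monitor only
  vars:
    pg_cluster: pg-foo
    pg_exporters:              # map a local exporter port to the remote target instance
      20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }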

pigsty-dashboard.jpg


HA PostgreSQL Clusters

Pigsty helps you own your own production-grade HA PostgreSQL RDS service anywhere.

To create such an HA PostgreSQL cluster/RDS service, you simply describe it with a short config and run the playbook to create it:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars: { pg_cluster: pg-test }
$ bin/pgsql-add pg-test  # Initialize cluster 'pg-test'

In less than 10 minutes, you’ll have a PostgreSQL database cluster with service access, monitoring, backup PITR, and HA fully configured.

pigsty-ha.png

Hardware failures are covered by the self-healing HA architecture provided by Patroni, etcd, and HAProxy: in case of primary failure, automatic failover executes within 30 seconds by default. Clients don’t need to modify config or restart applications: HAProxy uses Patroni health checks to distribute traffic, and read-write requests are automatically routed to the new cluster primary, avoiding split-brain issues. The process is nearly seamless; for replica failures or planned switchovers, clients experience at most a brief interruption of in-flight queries.

Software failures, human errors, and datacenter-level disasters are covered by pgbackrest and the optional MinIO cluster. This provides local/cloud PITR capabilities and, in case of datacenter failure, offers cross-region replication and disaster recovery.

1.1 - Nodes

A node is an abstraction of hardware/OS resources—physical machines, bare metal, VMs, or containers/pods.

A node is an abstraction of hardware resources and operating systems. It can be a physical machine, bare metal, virtual machine, or container/pod.

Any machine running a Linux OS (with systemd daemon) and standard CPU/memory/disk/network resources can be treated as a node.

Nodes can have modules installed. Pigsty has several node types, distinguished by which modules are deployed:

| Type | Description |
|------|-------------|
| Regular Node | A node managed by Pigsty |
| ADMIN Node | The node that runs Ansible to issue management commands |
| INFRA Node | Nodes with the INFRA module installed |
| ETCD Node | Nodes with the ETCD module for DCS |
| MINIO Node | Nodes with the MINIO module for object storage |
| PGSQL Node | Nodes with the PGSQL module installed |
| … | Nodes with other modules… |

In a singleton Pigsty deployment, multiple roles converge on one node: it serves as the regular node, admin node, infra node, ETCD node, and database node simultaneously.


Regular Node

Nodes managed by Pigsty can have modules installed. The node.yml playbook configures nodes to the desired state. A regular node may run the following services:

| Component | Port | Description | Status |
|-----------|------|-------------|--------|
| node_exporter | 9100 | Host metrics exporter | Enabled |
| haproxy | 9101 | HAProxy load balancer (admin port) | Enabled |
| vector | 9598 | Log collection agent | Enabled |
| docker | 9323 | Container runtime support | Optional |
| keepalived | n/a | L2 VIP for node cluster | Optional |
| keepalived_exporter | 9650 | Keepalived status monitor | Optional |
Here, node_exporter exposes host metrics, vector sends logs to the collection system, and haproxy provides load balancing. These three are enabled by default. Docker, keepalived, and keepalived_exporter are optional and can be enabled as needed.


ADMIN Node

A Pigsty deployment has exactly one admin node—the node that runs Ansible playbooks and issues control/deployment commands.

This node has ssh/sudo access to all other nodes. Admin node security is critical; ensure access is strictly controlled.

During single-node installation and configuration, the current node becomes the admin node. However, alternatives exist. For example, if your laptop can SSH to all managed nodes and has Ansible installed, it can serve as the admin node—though this isn’t recommended for production.

For instance, you might use your laptop to manage a Pigsty VM in the cloud. In this case, your laptop is the admin node.

In serious production environments, the admin node is typically 1-2 dedicated DBA machines. In resource-constrained setups, INFRA nodes often double as admin nodes since all INFRA nodes have Ansible installed by default.


INFRA Node

A Pigsty deployment may have 1 or more INFRA nodes; large production environments typically have 2-3.

The infra group in the inventory defines which nodes are INFRA nodes. These nodes run the INFRA module with these components:

| Component | Port | Description |
|-----------|------|-------------|
| nginx | 80/443 | Web UI, local software repository |
| grafana | 3000 | Visualization platform |
| victoriaMetrics | 8428 | Time-series database (metrics) |
| victoriaLogs | 9428 | Log collection server |
| victoriaTraces | 10428 | Trace collection server |
| vmalert | 8880 | Alerting and derived metrics |
| alertmanager | 9093 | Alert aggregation and routing |
| blackbox_exporter | 9115 | Blackbox probing (ping nodes/VIPs) |
| dnsmasq | 53 | Internal DNS resolution |
| chronyd | 123 | NTP time server |
| ansible | - | Playbook execution |

Nginx serves as the module’s entry point, providing the web UI and local software repository. With multiple INFRA nodes, services on each are independent, but you can access all monitoring data sources from any INFRA node’s Grafana.

Note: The INFRA module is licensed under AGPLv3 due to Grafana. As an exception, if you only use Nginx/Victoria components without Grafana, you’re effectively under Apache-2.0.


ETCD Node

The ETCD module provides Distributed Consensus Service (DCS) for PostgreSQL high availability.

The etcd group in the inventory defines ETCD nodes. These nodes run etcd servers on two ports:

| Component | Port | Description |
|-----------|------|-------------|
| etcd | 2379 | ETCD key-value store (client port) |
| etcd | 2380 | ETCD cluster peer communication |

MINIO Node

The MINIO module provides optional backup storage for PostgreSQL.

The minio group in the inventory defines MinIO nodes. These nodes run MinIO servers on:

| Component | Port | Description |
|-----------|------|-------------|
| minio | 9000 | MinIO S3 API endpoint |
| minio | 9001 | MinIO admin console |

PGSQL Node

Nodes with the PGSQL module are called PGSQL nodes. Node and PostgreSQL instance have a 1:1 deployment—one PG instance per node.

PGSQL nodes can borrow identity from their PostgreSQL instance—controlled by node_id_from_pg, defaulting to true, meaning the node name is set to the PG instance name.
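
For instance, identity borrowing can be turned off so the node keeps its own name. A sketch, where nodename is an explicit, user-chosen value:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary, nodename: node-1 }   # explicit node name
  vars:
    pg_cluster: pg-test
    node_id_from_pg: false     # do not borrow the PG instance name as the node name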

PGSQL nodes run these additional components beyond regular node services:

| Component | Port | Description | Status |
|-----------|------|-------------|--------|
| postgres | 5432 | PostgreSQL database server | Enabled |
| pgbouncer | 6432 | PgBouncer connection pool | Enabled |
| patroni | 8008 | Patroni HA management | Enabled |
| pg_exporter | 9630 | PostgreSQL metrics exporter | Enabled |
| pgbouncer_exporter | 9631 | PgBouncer metrics exporter | Enabled |
| pgbackrest_exporter | 9854 | pgBackRest metrics exporter | Enabled |
| vip-manager | n/a | Binds L2 VIP to cluster primary | Optional |
| {{ pg_cluster }}-primary | 5433 | HAProxy service: pooled read/write | Enabled |
| {{ pg_cluster }}-replica | 5434 | HAProxy service: pooled read-only | Enabled |
| {{ pg_cluster }}-default | 5436 | HAProxy service: primary direct connection | Enabled |
| {{ pg_cluster }}-offline | 5438 | HAProxy service: offline read | Enabled |
| {{ pg_cluster }}-<service> | 543x | HAProxy service: custom PostgreSQL services | Custom |

The vip-manager is only enabled when users configure a PG VIP. Additional custom services can be defined in pg_services, exposed via haproxy using additional service ports.
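
As a sketch, an additional custom service might be declared roughly like this; the field names follow the pg_default_services convention and should be verified against the PG_ACCESS reference before use:

pg_services:
  - name: standby        # service name suffix, exposed as {{ pg_cluster }}-standby
    port: 5435           # additional host port published by haproxy
    dest: default        # route to postgres directly instead of pgbouncer
    check: /sync         # patroni health-check endpoint used to select members
    selector: "[]"       # select all cluster members as backends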


Node Relationships

Regular nodes typically reference an INFRA node via the admin_ip parameter as their infrastructure provider. For example, with global admin_ip = 10.10.10.10, all nodes use infrastructure services at this IP.

Parameters that reference ${admin_ip}:

| Parameter | Module | Default Value | Description |
|-----------|--------|---------------|-------------|
| repo_endpoint | INFRA | http://${admin_ip}:80 | Software repo URL |
| repo_upstream.baseurl | INFRA | http://${admin_ip}/pigsty | Local repo baseurl |
| infra_portal.endpoint | INFRA | ${admin_ip}:<port> | Nginx proxy backend |
| dns_records | INFRA | ["${admin_ip} i.pigsty", ...] | DNS records |
| node_default_etc_hosts | NODE | ["${admin_ip} i.pigsty"] | Default static DNS |
| node_etc_hosts | NODE | - | Custom static DNS |
| node_dns_servers | NODE | ["${admin_ip}"] | Dynamic DNS servers |
| node_ntp_servers | NODE | - | NTP servers (optional) |

Typically the admin node and INFRA node coincide. With multiple INFRA nodes, the admin node is usually the first one; others serve as backups.

In large-scale production deployments, you might separate the Ansible admin node from INFRA module nodes. For example, use 1-2 small dedicated hosts under the DBA team as the control hub (ADMIN nodes), and 2-3 high-spec physical machines as monitoring infrastructure (INFRA nodes).

Typical node counts by deployment scale:

| Scale | ADMIN | INFRA | ETCD | MINIO | PGSQL |
|-------|-------|-------|------|-------|-------|
| Single-node | 1 | 1 | 1 | 0 | 1 |
| 3-node | 1 | 3 | 3 | 0 | 3 |
| Small prod | 1 | 2 | 3 | 0 | N |
| Large prod | 2 | 3 | 5 | 4+ | N |

1.2 - Infrastructure

Infrastructure module architecture, components, and functionality in Pigsty.

Running production-grade, highly available PostgreSQL clusters typically requires a comprehensive set of infrastructure services (foundation) for support, such as monitoring and alerting, log collection, time synchronization, DNS resolution, and local software repositories. Pigsty provides the INFRA module to address this—it’s an optional module, but we strongly recommend enabling it.


Overview

The diagram below shows the architecture of a single-node deployment. The right half represents the components included in the INFRA module:

| Component | Type | Description |
|-----------|------|-------------|
| Nginx | Web Server | Unified entry for WebUI, local repo, reverse proxy for internal services |
| Repo | Software Repo | APT/DNF repository with all RPM/DEB packages needed for deployment |
| Grafana | Visualization | Displays metrics, logs, and traces; hosts dashboards, reports, and custom data apps |
| VictoriaMetrics | Time Series DB | Scrapes all metrics, Prometheus API compatible, provides VMUI query interface |
| VictoriaLogs | Log Platform | Centralized log storage; all nodes run Vector by default, pushing logs here |
| VictoriaTraces | Tracing | Collects slow SQL, service traces, and other tracing data |
| VMAlert | Alert Engine | Evaluates alerting rules, pushes events to Alertmanager |
| AlertManager | Alert Manager | Aggregates alerts, dispatches notifications via email, Webhook, etc. |
| BlackboxExporter | Blackbox Probe | Probes reachability of IPs/VIPs/URLs |
| DNSMASQ | DNS Service | Provides DNS resolution for domains used within Pigsty [Optional] |
| Chronyd | Time Sync | Provides NTP time synchronization to ensure consistent time across nodes [Optional] |
| CA | Certificate | Issues encryption certificates within the environment |
| Ansible | Orchestration | Batch, declarative, agentless tool for managing large numbers of servers |

pigsty-arch


Nginx

Nginx is the access entry point for all WebUI services in Pigsty, using ports 80 / 443 for HTTP/HTTPS by default. Live Demo

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10 | http://i.pigsty | https://i.pigsty | https://demo.pigsty.io |

Infrastructure components with WebUIs can be exposed uniformly through Nginx, such as Grafana, VictoriaMetrics (VMUI), AlertManager, and HAProxy console. Additionally, the local software repository and other static resources are served via Nginx.

Nginx configures local web servers or reverse proxy servers based on definitions in infra_portal.

infra_portal:
  home : { domain: i.pigsty }

By default, it exposes Pigsty’s admin homepage: i.pigsty. Different endpoints on this page proxy different components:

| Endpoint | Component | Native Port | Notes | Public Demo |
|----------|-----------|-------------|-------|-------------|
| / | Nginx | 80/443 | Homepage, local repo, file server | demo.pigsty.io |
| /ui/ | Grafana | 3000 | Grafana dashboard entry | demo.pigsty.io/ui/ |
| /vmetrics/ | VictoriaMetrics | 8428 | Time series DB Web UI | demo.pigsty.io/vmetrics/ |
| /vlogs/ | VictoriaLogs | 9428 | Log DB Web UI | demo.pigsty.io/vlogs/ |
| /vtraces/ | VictoriaTraces | 10428 | Tracing Web UI | demo.pigsty.io/vtraces/ |
| /vmalert/ | VMAlert | 8880 | Alert rule management | demo.pigsty.io/vmalert/ |
| /alertmgr/ | AlertManager | 9059 | Alert management Web UI | demo.pigsty.io/alertmgr/ |
| /blackbox/ | Blackbox | 9115 | Blackbox probe | |

Pigsty allows rich customization of Nginx as a local file server or reverse proxy, with self-signed or real HTTPS certificates.
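
For example, an extra entry in infra_portal could expose Grafana on its own domain. A sketch; the grafana key and g.pigsty domain below are illustrative choices, while the endpoint format follows the infra_portal.endpoint convention shown earlier:

infra_portal:
  home    : { domain: i.pigsty }                                  # admin homepage (default)
  grafana : { domain: g.pigsty, endpoint: "${admin_ip}:3000" }    # hypothetical extra proxy entry for Grafana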

For more information, see: Tutorial: Nginx—Expose Web Services via Proxy and Tutorial: Certbot—Request and Renew HTTPS Certificates


Repo

Pigsty creates a local software repository on the Infra node during installation to accelerate subsequent software installations. Live Demo

This repository defaults to the /www/pigsty directory, served by Nginx and mounted at the /pigsty path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/pigsty | http://i.pigsty/pigsty | https://i.pigsty/pigsty | https://demo.pigsty.io/pigsty |

Pigsty supports offline installation, which essentially pre-copies a prepared local software repository to the target environment. When Pigsty performs production deployment and needs to create a local software repository, if it finds the /www/pigsty/repo_complete marker file already exists locally, it skips downloading packages from upstream and uses existing packages directly, avoiding internet downloads.

repo

For more information, see: Config: INFRA - REPO


Grafana

Grafana is the core component of Pigsty’s monitoring system, used for visualizing metrics, logs, and various information. Live Demo

Grafana listens on port 3000 by default and is proxied via Nginx at the /ui path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/ui | http://i.pigsty/ui | https://i.pigsty/ui | https://demo.pigsty.io/ui |

Pigsty provides pre-built dashboards based on VictoriaMetrics / Logs / Traces, with one-click drill-down and roll-up via URL jumps for rapid troubleshooting.

Grafana can also serve as a low-code visualization platform, so the ECharts, victoriametrics-datasource, and victorialogs-datasource plugins are installed by default, and the Victoria datasources are registered uniformly as vmetrics-*, vlogs-*, and vtraces-* for easy custom dashboard extension.

dashboard

For more information, see: Config: INFRA - GRAFANA.


VictoriaMetrics

VictoriaMetrics is Pigsty’s time series database, responsible for scraping and storing all monitoring metrics. Live Demo

It listens on port 8428 by default, mounted at Nginx /vmetrics path, and also accessible via the p.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vmetrics | http://p.pigsty | https://i.pigsty/vmetrics | https://demo.pigsty.io/vmetrics |

VictoriaMetrics is fully compatible with the Prometheus API, supporting PromQL queries, remote read/write protocols, and the Alertmanager API. The built-in VMUI provides an ad-hoc query interface for exploring metrics data directly, and also serves as a Grafana datasource.

vmetrics

For more information, see: Config: INFRA - VMETRICS


VictoriaLogs

VictoriaLogs is Pigsty’s log platform, centrally storing structured logs from all nodes. Live Demo

It listens on port 9428 by default, mounted at Nginx /vlogs path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vlogs | http://i.pigsty/vlogs | https://i.pigsty/vlogs | https://demo.pigsty.io/vlogs |

All managed nodes run Vector Agent by default, collecting system logs, PostgreSQL logs, Patroni logs, Pgbouncer logs, etc., processing them into structured format and pushing to VictoriaLogs. The built-in Web UI supports log search and filtering, and can be integrated with Grafana’s victorialogs-datasource plugin for visual analysis.

vlogs

For more information, see: Config: INFRA - VLOGS


VictoriaTraces

VictoriaTraces is used for collecting trace data and slow SQL records. Live Demo

It listens on port 10428 by default, mounted at Nginx /vtraces path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vtraces | http://i.pigsty/vtraces | https://i.pigsty/vtraces | https://demo.pigsty.io/vtraces |

VictoriaTraces provides a Jaeger-compatible interface for analyzing service call chains and database slow queries. Combined with Grafana dashboards, it enables rapid identification of performance bottlenecks and root cause tracing.

For more information, see: Config: INFRA - VTRACES


VMAlert

VMAlert is the alerting rule computation engine, responsible for evaluating alert rules and pushing triggered events to Alertmanager. Live Demo

It listens on port 8880 by default, mounted at Nginx /vmalert path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vmalert | http://i.pigsty/vmalert | https://i.pigsty/vmalert | https://demo.pigsty.io/vmalert |

VMAlert reads metrics data from VictoriaMetrics and periodically evaluates alerting rules. Pigsty provides pre-built alerting rules for PGSQL, NODE, REDIS, and other modules, covering common failure scenarios out of the box.

vmalert

For more information, see: Config: INFRA - VMALERT


AlertManager

AlertManager handles alert event aggregation, deduplication, grouping, and dispatch. Live Demo

It listens on port 9059 by default, mounted at Nginx /alertmgr path, and also accessible via the a.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/alertmgr | http://a.pigsty | https://i.pigsty/alertmgr | https://demo.pigsty.io/alertmgr |

AlertManager supports multiple notification channels: email, Webhook, Slack, PagerDuty, WeChat Work, etc. Through alert routing rules, differentiated dispatch based on severity level and module type is possible, with support for silencing, inhibition, and other advanced features.

alertmanager

For more information, see: Config: INFRA - AlertManager


BlackboxExporter

Blackbox Exporter is used for active probing of target reachability, enabling blackbox monitoring.

It listens on port 9115 by default, mounted at Nginx /blackbox path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/blackbox | http://i.pigsty/blackbox | https://i.pigsty/blackbox | https://demo.pigsty.io/blackbox |

It supports multiple probe methods including ICMP Ping, TCP ports, and HTTP/HTTPS endpoints. Useful for monitoring VIP reachability, service port availability, external dependency health, etc.—an important tool for assessing failure impact scope.

blackbox

For more information, see: Config: INFRA - BLACKBOX


Ansible

Ansible is Pigsty’s core orchestration tool; all deployment, configuration, and management operations are performed through Ansible Playbooks.

Pigsty automatically installs Ansible on the admin node (Infra node) during installation. It adopts a declarative configuration style and idempotent playbook design: the same playbook can be run repeatedly, and the system automatically converges to the desired state without side effects.

Ansible’s core advantages:

  • Agentless: Executes remotely via SSH, no additional software needed on target nodes.
  • Declarative: Describes the desired state rather than execution steps; configuration is documentation.
  • Idempotent: Multiple executions produce consistent results; supports retry after partial failures.

For more information, see: Playbooks: Pigsty Playbook


DNSMASQ

DNSMASQ provides DNS resolution on INFRA nodes, resolving domain names to their corresponding IP addresses.

DNSMASQ listens on port 53 (UDP/TCP) by default, providing DNS resolution for all nodes. Records are stored in the /infra/hosts directory.

Other modules automatically register their domain names with DNSMASQ during deployment, which you can use as needed. DNS is completely optional—Pigsty works normally without it. Client nodes can configure INFRA nodes as their DNS servers, allowing access to services via domain names without remembering IP addresses.

For more information, see: Config: INFRA - DNS and Tutorial: DNS—Configure Domain Resolution


Chronyd

Chronyd provides NTP time synchronization, ensuring consistent clocks across all nodes. It listens on port 123 (UDP) by default as the time source.

Time synchronization is critical for distributed systems: log analysis requires aligned timestamps, certificate validation depends on accurate clocks, and PostgreSQL streaming replication is sensitive to clock drift. In isolated network environments, the INFRA node can serve as an internal NTP server with other nodes synchronizing to it.

In Pigsty, all nodes run chronyd by default for time sync. The default upstream is pool.ntp.org public NTP servers. Chronyd is essentially managed by the Node module, but in isolated networks, you can use admin_ip to point to the INFRA node’s Chronyd service as the internal time source. In this case, the Chronyd service on the INFRA node serves as the internal time synchronization infrastructure.

For more information, see: Config: NODE - TIME


INFRA Node vs Regular Node

In Pigsty, the relationship between nodes and infrastructure is a weak circular dependency: node_monitor → infra → node

The NODE module itself doesn’t depend on the INFRA module, but the monitoring functionality (node_monitor) requires the monitoring platform and services provided by the infrastructure module.

Therefore, in the infra.yml and deploy playbooks, an “interleaved deployment” technique is used:

  • First, initialize the NODE module on all regular nodes, but skip monitoring config since infrastructure isn’t deployed yet.
  • Then, initialize the INFRA module on the INFRA node—monitoring is now available.
  • Finally, reconfigure monitoring on all regular nodes, connecting to the now-deployed monitoring platform.

If you don’t need “one-shot” deployment of all nodes, you can use phased deployment: initialize INFRA nodes first, then regular nodes.

How Are Nodes Coupled to Infrastructure?

Regular nodes reference an INFRA node via the admin_ip parameter as their infrastructure provider.

For example, when you configure global admin_ip = 10.10.10.10, all nodes will typically use infrastructure services at this IP.

This design allows quick, batch switching of infrastructure providers. Parameters that may reference ${admin_ip}:

| Parameter | Module | Default Value | Description |
|-----------|--------|---------------|-------------|
| repo_endpoint | INFRA | http://${admin_ip}:80 | Software repo URL |
| repo_upstream.baseurl | INFRA | http://${admin_ip}/pigsty | Local repo baseurl |
| infra_portal.endpoint | INFRA | ${admin_ip}:<port> | Nginx proxy backend |
| dns_records | INFRA | ["${admin_ip} i.pigsty", ...] | DNS records |
| node_default_etc_hosts | NODE | ["${admin_ip} i.pigsty"] | Default static DNS |
| node_etc_hosts | NODE | [] | Custom static DNS |
| node_dns_servers | NODE | ["${admin_ip}"] | Dynamic DNS servers |
| node_ntp_servers | NODE | ["pool pool.ntp.org iburst"] | NTP servers (optional) |

For example, when a node installs software, the local repo points to the Nginx local software repository at admin_ip:80/pigsty. The DNS server also points to DNSMASQ at admin_ip:53. However, this isn’t mandatory—nodes can ignore the local repo and install directly from upstream internet sources (most single-node config templates); DNS servers can also remain unconfigured, as Pigsty has no DNS dependency.
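
A minimal sketch of this coupling: point the managed nodes at the INFRA node via admin_ip, with values taken from the defaults in the table above:

all:
  vars:
    admin_ip: 10.10.10.10                       # infrastructure provider address
    repo_endpoint: http://${admin_ip}:80        # software repo served by Nginx on the INFRA node
    node_dns_servers: [ "${admin_ip}" ]         # optional: use DNSMASQ on the INFRA node for dynamic DNS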


INFRA Node vs ADMIN Node

The management-initiating ADMIN node typically coincides with the INFRA node. In single-node deployment, this is exactly the case. In multi-node deployment with multiple INFRA nodes, the admin node is usually the first in the infra group; others serve as backups. However, exceptions exist. You might separate them for various reasons:

For example, in large-scale production deployments, a classic pattern uses 1-2 dedicated management hosts (tiny VMs suffice) belonging to the DBA team as the control hub, with 2-3 high-spec physical machines (or more!) as monitoring infrastructure. Here, admin nodes are separate from infrastructure nodes. In this case, the admin_ip in your config should point to an INFRA node’s IP, not the current ADMIN node’s IP. This is for historical reasons: initially ADMIN and INFRA nodes were tightly coupled concepts, with separation capabilities evolving later, so the parameter name wasn’t changed.

Another common scenario is managing cloud nodes locally. For example, you can install Ansible on your laptop and specify cloud nodes as “managed targets.” In this case, your laptop acts as the ADMIN node, while cloud servers act as INFRA nodes.

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 , ansible_host: your_ssh_alias } } }  # <--- Use ansible_host to point to cloud node (fill in ssh alias)
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }    # SSH connection will use: ssh your_ssh_alias
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.10
    region: default

Multiple INFRA Nodes

By default, Pigsty only needs one INFRA node for most requirements. Even if the INFRA module goes down, it won’t affect database services on other nodes.

However, in production environments with high monitoring and alerting requirements, you may want multiple INFRA nodes to improve infrastructure availability. A common deployment uses two Infra nodes for redundancy, monitoring each other… or more nodes to deploy a distributed Victoria cluster for unlimited horizontal scaling.

Each Infra node is independent—Nginx points to services on the local machine. VictoriaMetrics independently scrapes metrics from all services in the environment, and logs are pushed to all VictoriaLogs collection endpoints by default. The only exception is Grafana: every Grafana instance registers all VictoriaMetrics / Logs / Traces / PostgreSQL instances as datasources. Therefore, each Grafana instance can see complete monitoring data.

If you modify Grafana—such as adding new dashboards or changing datasource configs—these changes only affect the Grafana instance on that node. To keep Grafana consistent across all nodes, use a PostgreSQL database as shared storage. See Tutorial: Configure Grafana High Availability for details.

1.3 - PGSQL Arch

PostgreSQL module component interactions and data flow.

The PGSQL module organizes PostgreSQL in production as clusters: logical entities composed of a group of database instances associated by primary-replica relationships.


Overview

The PGSQL module includes the following components, working together to provide production-grade PostgreSQL HA cluster services:

| Component | Type | Description |
|-----------|------|-------------|
| postgres | Database | The world’s most advanced open-source relational database, PGSQL core |
| patroni | HA | Manages PostgreSQL, coordinates failover, leader election, config changes |
| pgbouncer | Pool | Lightweight connection pooling middleware, reduces overhead, adds flexibility |
| pgbackrest | Backup | Full/incremental backup and WAL archiving, supports local and object storage |
| pg_exporter | Metrics | Exports PostgreSQL monitoring metrics for Prometheus scraping |
| pgbouncer_exporter | Metrics | Exports Pgbouncer connection pool metrics |
| pgbackrest_exporter | Metrics | Exports backup status metrics |
| vip-manager | VIP | Binds L2 VIP to current primary node for transparent failover [Optional] |

The vip-manager is an on-demand component. Additionally, PGSQL uses components from other modules:

| Component | Module | Type | Description |
|-----------|--------|------|-------------|
| haproxy | NODE | LB | Exposes service ports, routes traffic to primary or replicas |
| vector | NODE | Logging | Collects PostgreSQL, Patroni, Pgbouncer logs and ships to center |
| etcd | ETCD | DCS | Distributed consistent store for cluster metadata and leader info |

By analogy, the PostgreSQL database kernel is the CPU, while the PGSQL module packages it as a complete computer. Patroni and Etcd form the HA subsystem, pgBackRest and MinIO form the backup subsystem. HAProxy, Pgbouncer, and vip-manager form the access subsystem. Various Exporters and Vector build the observability subsystem; finally, you can swap different kernel CPUs and extension cards.

| Subsystem | Components | Function |
|-----------|------------|----------|
| HA Subsystem | Patroni + etcd | Failure detection, auto-failover, config management |
| Access Subsystem | HAProxy + Pgbouncer + vip-manager | Service exposure, load balancing, pooling, VIP |
| Backup Subsystem | pgBackRest (+ MinIO) | Full/incremental backup, WAL archiving, PITR |
| Observability Subsystem | pg_exporter / pgbouncer_exporter / pgbackrest_exporter + Vector | Metrics collection, log aggregation |

Component Interaction

pigsty-arch

  • Cluster DNS is resolved by DNSMASQ on infra nodes
  • Cluster VIP is managed by vip-manager, which binds pg_vip_address to the cluster primary node.
  • Cluster services are exposed by HAProxy on nodes, different services distinguished by node ports (543x).
  • Pgbouncer is connection pooling middleware, listening on port 6432 by default, buffering connections, exposing additional metrics, and providing extra flexibility.
  • PostgreSQL listens on port 5432, providing relational database services
    • Installing PGSQL module on multiple nodes with the same cluster name automatically forms an HA cluster via streaming replication
    • PostgreSQL process is managed by patroni by default.
  • Patroni listens on port 8008 by default, supervising PostgreSQL server processes
    • Patroni starts Postgres server as child process
    • Patroni uses etcd as DCS: stores config, failure detection, and leader election.
    • Patroni provides Postgres info (e.g., primary/replica) via health checks, HAProxy uses this to distribute traffic
  • pg_exporter exposes postgres monitoring metrics on port 9630
  • pgbouncer_exporter exposes pgbouncer metrics on port 9631
  • pgBackRest uses local backup repository by default (pgbackrest_method = local)
    • If using local (default), pgBackRest creates local repository under pg_fs_bkup on primary node
    • If using minio, pgBackRest creates backup repository on dedicated MinIO cluster
  • Vector collects Postgres-related logs (postgres, pgbouncer, patroni, pgbackrest)
    • vector listens on port 9598, also exposes its own metrics to VictoriaMetrics on infra nodes
    • vector sends logs to VictoriaLogs on infra nodes

HA Subsystem

The HA subsystem consists of Patroni and etcd, responsible for PostgreSQL cluster failure detection, automatic failover, and configuration management.

How it works: Patroni runs on each node, managing the local PostgreSQL process and writing cluster state (leader, members, config) to etcd. When the primary fails, Patroni coordinates election via etcd, promoting the healthiest replica to new primary. The entire process is automatic, with RTO typically under 30 seconds.

Key Interactions:

  • PostgreSQL: Starts, stops, reloads PG as parent process, controls its lifecycle
  • etcd: External dependency, writes/watches leader key for distributed consensus and failure detection
  • HAProxy: Provides health checks via REST API (:8008), reporting instance role
  • vip-manager: Watches leader key in etcd, auto-migrates VIP

For more information, see: High Availability and Config: PGSQL - PG_BOOTSTRAP


Access Subsystem

The access subsystem consists of HAProxy, Pgbouncer, and vip-manager, responsible for service exposure, traffic routing, and connection pooling.

There are multiple access methods. A typical traffic path is: Client → DNS/VIP → HAProxy (543x) → Pgbouncer (6432) → PostgreSQL (5432)

| Layer | Component | Port | Role |
|-------|-----------|------|------|
| L2 VIP | vip-manager | - | Binds L2 VIP to primary (optional) |
| L4 Load Bal | HAProxy | 543x | Service exposure, load balancing, health checks |
| L7 Pool | Pgbouncer | 6432 | Connection reuse, session management, transaction pooling |

Service Ports:

  • 5433 primary: Read-write service, routes to primary Pgbouncer
  • 5434 replica: Read-only service, routes to replica Pgbouncer
  • 5436 default: Default service, direct to primary (bypasses pool)
  • 5438 offline: Offline service, direct to offline replica (ETL/analytics)

Key Features:

  • HAProxy uses Patroni REST API to determine instance role, auto-routes traffic
  • Pgbouncer uses transaction-level pooling, absorbs connection spikes, reduces PG connection overhead
  • vip-manager watches etcd leader key, auto-migrates VIP during failover

For more information, see: Service Access and Config: PGSQL - PG_ACCESS


Backup Subsystem

The backup subsystem consists of pgBackRest (optionally with MinIO as remote repository), responsible for data backup and point-in-time recovery (PITR).

Backup Types:

  • Full backup: Complete database copy
  • Incremental/differential backup: Only backs up changed data blocks
  • WAL archiving: Continuous transaction log archiving, enables any point-in-time recovery

Storage Backends:

  • local (default): Local disk, backups stored at pg_fs_bkup mount point
  • minio: S3-compatible object storage, supports centralized backup management and off-site DR

Key Interactions:

  • PostgreSQL: executes full/incremental backups and continuous WAL archiving against the primary
  • MinIO (optional): acts as a centralized remote backup repository for off-site disaster recovery
  • Patroni: recovery integrates with Patroni, supporting bootstrapping replicas or restoring clusters

For more information, see: PITR, Backup & Recovery, and Config: PGSQL - PG_BACKUP


Observability Subsystem

The observability subsystem consists of three Exporters and Vector, responsible for metrics collection and log aggregation.

| Component | Port | Target | Key Metrics |
|-----------|------|--------|-------------|
| pg_exporter | 9630 | PostgreSQL | Sessions, transactions, replication lag, buffer hits |
| pgbouncer_exporter | 9631 | Pgbouncer | Pool utilization, wait queue, hit rate |
| pgbackrest_exporter | 9854 | pgBackRest | Latest backup time, size, type |
| vector | 9598 | postgres/patroni/pgbouncer logs | Structured log stream |

Data Flow:

  • Metrics: Exporter → VictoriaMetrics (INFRA) → Grafana dashboards
  • Logs: Vector → VictoriaLogs (INFRA) → Grafana log queries

pg_exporter / pgbouncer_exporter connect to target services via local Unix socket, decoupled from HA topology. In slim install mode, these components can be disabled.

For more information, see: Config: PGSQL - PG_MONITOR


PostgreSQL

PostgreSQL is the PGSQL module core, listening on port 5432 by default for relational database services, deployed 1:1 with nodes.

Pigsty currently supports PostgreSQL 14-18 (lifecycle major versions), installed via binary packages from the PGDG official repo. Pigsty also allows you to use other PG kernel forks to replace the default PostgreSQL kernel, and install up to 440 extension plugins on top of the PG kernel.

PostgreSQL processes are managed by default by the HA agent—Patroni. When a cluster has only one node, that instance is the primary; when the cluster has multiple nodes, other instances automatically join as replicas: through physical replication, syncing data changes from the primary in real-time. Replicas can handle read-only requests and automatically take over when the primary fails.

pigsty-ha.png

You can access PostgreSQL directly, or through HAProxy and Pgbouncer connection pool.

For more information, see: Config: PGSQL - PG_BOOTSTRAP


Patroni

Patroni is the PostgreSQL HA control component, listening on port 8008 by default.

Patroni takes over PostgreSQL startup, shutdown, configuration, and health status, writing leader and member information to etcd. It handles automatic failover, maintains the replication topology, coordinates parameter changes, and provides a REST API for HAProxy, monitoring, and administrators.

HAProxy uses Patroni health check endpoints to determine instance roles and route traffic to the correct primary or replica. vip-manager monitors the leader key in etcd and automatically migrates the VIP when the primary changes.

patroni

For more information, see: Config: PGSQL - PG_BOOTSTRAP


Pgbouncer

Pgbouncer is a lightweight connection pooling middleware, listening on port 6432 by default, deployed 1:1 with PostgreSQL database and node.

Pgbouncer runs statelessly on each instance, connecting to PostgreSQL via local Unix socket, using Transaction Pooling by default for pool management, absorbing burst client connections, stabilizing database sessions, reducing lock contention, and significantly improving performance under high concurrency.

Pigsty routes production traffic (read-write service 5433 / read-only service 5434) through Pgbouncer by default, while only the default service (5436) and offline service (5438) bypass the pool for direct PostgreSQL connections.

Pool mode is controlled by pgbouncer_poolmode, defaulting to transaction (transaction-level pooling). Connection pooling can be disabled via pgbouncer_enabled.

pgbouncer.png

For more information, see: Config: PGSQL - PG_ACCESS


pgBackRest

pgBackRest is a professional PostgreSQL backup/recovery tool, one of the strongest in the PG ecosystem, supporting full/incremental/differential backup and WAL archiving.

Pigsty uses pgBackRest for PostgreSQL PITR capability, allowing you to roll back clusters to any point within the backup retention window.

pgBackRest works with PostgreSQL to create backup repositories on the primary, executing backup and archive tasks. By default, it uses local backup repository (pgbackrest_method = local), but can be configured for MinIO or other object storage for centralized backup management.

After initialization, pgbackrest_init_backup can automatically trigger the first full backup. Recovery integrates with Patroni, supporting bootstrapping replicas as new primaries or standbys.

pgbackrest

For more information, see: Backup & Recovery and Config: PGSQL - PG_BACKUP


HAProxy

HAProxy is the service entry point and load balancer, exposing multiple database service ports.

| Port | Service | Target | Description |
|------|---------|--------|-------------|
| 9101 | Admin | - | HAProxy statistics and admin page |
| 5433 | primary | Primary Pgbouncer | Read-write service, routes to primary pool |
| 5434 | replica | Replica Pgbouncer | Read-only service, routes to replica pool |
| 5436 | default | Primary Postgres | Default service, direct to primary (bypasses pool) |
| 5438 | offline | Offline Postgres | Offline service, direct to offline replica (ETL/analytics) |

HAProxy uses Patroni REST API health checks to determine instance roles and route traffic to the appropriate primary or replica. Service definitions are composed from pg_default_services and pg_services.

A dedicated HAProxy node group can be specified via pg_service_provider to handle higher traffic; by default, HAProxy on local nodes publishes services.

haproxy

For more information, see: Service Access and Config: PGSQL - PG_ACCESS


vip-manager

vip-manager binds L2 VIP to the current primary node. This is an optional component; enable it if your network supports L2 VIP.

vip-manager runs on each PG node, monitoring the leader key written by Patroni in etcd, and binds pg_vip_address to the current primary node’s network interface. When cluster failover occurs, vip-manager immediately releases the VIP from the old primary and rebinds it on the new primary, switching traffic to the new primary.

This component is optional, enabled via pg_vip_enabled. When enabled, ensure all nodes are in the same VLAN; otherwise, VIP migration will fail. Public cloud networks typically don’t support L2 VIP; it’s recommended only for on-premises and private cloud environments.
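
A hedged sketch of enabling the VIP at the cluster level; pg_vip_interface is an assumption not covered in this document, so verify the exact parameter set against the PG_ACCESS reference:

pg-test:
  hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-test
    pg_vip_enabled: true            # enable vip-manager for this cluster
    pg_vip_address: 10.10.10.3/24   # L2 VIP with netmask, must sit in the cluster's VLAN
    pg_vip_interface: eth1          # assumed NIC name; adjust for your environment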

node-vip

For more information, see: Tutorial: VIP Configuration and Config: PGSQL - PG_ACCESS


pg_exporter

pg_exporter exports PostgreSQL monitoring metrics, listening on port 9630 by default.

pg_exporter runs on each PG node, connecting to PostgreSQL via local Unix socket, exporting rich metrics covering sessions, buffer hits, replication lag, transaction rates, etc., scraped by VictoriaMetrics on INFRA nodes.

Collection configuration is specified by pg_exporter_config, with support for automatic database discovery (pg_exporter_auto_discovery), and tiered cache strategies via pg_exporter_cache_ttls.
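
A sketch of tuning these collection parameters at the cluster level; the values are illustrative, and the defaults live in the PG_MONITOR reference:

pg-test:
  vars:
    pg_exporter_auto_discovery: true         # automatically discover and scrape all databases
    pg_exporter_cache_ttls: '1,10,60,300'    # illustrative tiered cache TTLs, in seconds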

You can disable this component via parameters; in slim install, this component is not enabled.

pg-exporter

For more information, see: Config: PGSQL - PG_MONITOR


pgbouncer_exporter

pgbouncer_exporter exports Pgbouncer connection pool metrics, listening on port 9631 by default.

pgbouncer_exporter uses the same pg_exporter binary but with a dedicated metrics config file, supporting pgbouncer 1.8-1.25+. pgbouncer_exporter reads Pgbouncer statistics views, providing pool utilization, wait queue, and hit rate metrics.

If Pgbouncer is disabled, this component is also disabled. In slim install, this component is not enabled.

For more information, see: Config: PGSQL - PG_MONITOR


pgbackrest_exporter

pgbackrest_exporter exports backup status metrics, listening on port 9854 by default.

pgbackrest_exporter parses pgBackRest status, generating metrics for most recent backup time, size, type, etc. Combined with alerting policies, it quickly detects expired or failed backups, ensuring data safety. Note that when there are many backups or using large network repositories, collection overhead can be significant, so pgbackrest_exporter has a default 2-minute collection interval. In the worst case, you may see the latest backup status in the monitoring system 2 minutes after a backup completes.

For more information, see: Config: PGSQL - PG_MONITOR


etcd

etcd is a distributed consistent store (DCS), providing cluster metadata storage and leader election capability for Patroni.

etcd is deployed and managed by the independent ETCD module, not part of the PGSQL module itself, but critical for PostgreSQL HA. Patroni writes cluster state, leader info, and config parameters to etcd; all nodes reach consensus through etcd. vip-manager also reads the leader key from etcd to enable automatic VIP migration.

For more information, see: ETCD Module


vector

Vector is a high-performance log collection component, deployed by the NODE module, responsible for collecting PostgreSQL-related logs.

Vector runs on nodes, tracking PostgreSQL, Pgbouncer, Patroni, and pgBackRest log directories, sending structured logs to VictoriaLogs on INFRA nodes for centralized storage and querying.

For more information, see: NODE Module

2 - ER Model

How Pigsty abstracts different functionality into modules, and the E-R diagrams for these modules.

The largest entity concept in Pigsty is a Deployment. The main entities and relationships (E-R diagram) in a deployment are shown below:

A deployment can also be understood as an Environment. For example, Production (Prod), User Acceptance Testing (UAT), Staging, Testing, Development (Devbox), etc. Each environment corresponds to a Pigsty inventory that describes all entities and attributes in that environment.

Typically, an environment includes shared infrastructure (INFRA), which broadly includes ETCD (HA DCS) and MINIO (centralized backup repository), serving multiple PostgreSQL database clusters (and other database module components). (Exception: there are also deployments without infrastructure)

In Pigsty, almost all database modules are organized as “Clusters”. Each cluster is an Ansible group containing several node resources. For example, PostgreSQL HA database clusters, Redis, Etcd/MinIO all exist as clusters. An environment can contain multiple clusters.

2.1 - PGSQL Cluster Model

Entity-Relationship model for PostgreSQL clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The PGSQL module organizes PostgreSQL in production as clusters: logical entities composed of a group of database instances associated by primary-replica relationships.

Each cluster is an autonomous business unit consisting of at least one primary instance, exposing capabilities through services.

There are four core entities in Pigsty’s PGSQL module:

  • Cluster: An autonomous PostgreSQL business unit serving as the top-level namespace for other entities.
  • Service: A named abstraction that exposes capabilities, routes traffic, and exposes services using node ports.
  • Instance: A single PostgreSQL server consisting of running processes and database files on a single node.
  • Node: A hardware resource abstraction running Linux + Systemd environment—can be bare metal, VM, container, or Pod.

Along with two business entities—“Database” and “Role”—these form the complete logical view as shown below:

er-pgsql


Examples

Let’s look at two concrete examples. Using the four-node Pigsty sandbox, there’s a three-node pg-test cluster:

    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-test }

The above config fragment defines a high-availability PostgreSQL cluster with these related entities:

| Cluster | Description |
|---------|-------------|
| pg-test | PostgreSQL 3-node HA cluster |

| Instance | Description |
|----------|-------------|
| pg-test-1 | PostgreSQL instance #1, default primary |
| pg-test-2 | PostgreSQL instance #2, initial replica |
| pg-test-3 | PostgreSQL instance #3, initial replica |

| Service | Description |
|---------|-------------|
| pg-test-primary | Read-write service (routes to primary pgbouncer) |
| pg-test-replica | Read-only service (routes to replica pgbouncer) |
| pg-test-default | Direct read-write service (routes to primary postgres) |
| pg-test-offline | Offline read service (routes to dedicated postgres) |

| Node | Description |
|------|-------------|
| node-1 | 10.10.10.11, Node #1, hosts pg-test-1 PG instance |
| node-2 | 10.10.10.12, Node #2, hosts pg-test-2 PG instance |
| node-3 | 10.10.10.13, Node #3, hosts pg-test-3 PG instance |

ha


Identity Parameters

Pigsty uses the PG_ID parameter group to assign deterministic identities to each PGSQL module entity. Three parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| pg_cluster | string | Cluster | PG cluster name, required | Valid DNS name, regex [a-zA-Z0-9-]+ |
| pg_seq | int | Instance | PG instance number, required | Natural number, starting from 0 or 1, unique within cluster |
| pg_role | enum | Instance | PG instance role, required | Enum: primary, replica, offline |

With cluster name defined at cluster level and instance number/role assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ pg_cluster }}-{{ pg_seq }} | pg-test-1, pg-test-2, pg-test-3 |
| Service | {{ pg_cluster }}-{{ pg_role }} | pg-test-primary, pg-test-replica, pg-test-offline |
| Node | Explicitly specified or borrowed from PG | pg-test-1, pg-test-2, pg-test-3 |

Because Pigsty adopts a 1:1 exclusive deployment model for nodes and PG instances, the host node identifier borrows the PG instance identifier by default (node_id_from_pg). You can explicitly specify nodename to override it, or disable nodename_overwrite to keep the node's existing hostname.
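
As an illustrative sketch (the nodename value here is made up), you could assign an explicit node identity instead of borrowing it from the PG instance:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary, nodename: node-test-1 }  # explicit node identity instead of pg-test-1
  vars: { pg_cluster: pg-test }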


Sharding Identity Parameters

When using multiple PostgreSQL clusters (sharding) to serve the same business, two additional identity parameters are used: pg_shard and pg_group.

These clusters share the same pg_shard name, each with its own pg_group number. The pg_cluster name is typically composed as {{ pg_shard }}{{ pg_group }}, e.g., pg-citus0, pg-citus1, and so on, as in this Citus cluster:

all:
  children:
    pg-citus0: # citus shard 0
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0 , pg_group: 0 }
    pg-citus1: # citus shard 1
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1 , pg_group: 1 }
    pg-citus2: # citus shard 2
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus2 , pg_group: 2 }
    pg-citus3: # citus shard 3
      hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus3 , pg_group: 3 }

Pigsty provides dedicated monitoring dashboards for horizontal sharding clusters, making it easy to compare performance and load across shards, but this requires using the above entity naming convention.

There are also other identity parameters for special scenarios, such as pg_upstream for specifying the replication upstream of standby clusters or cascading replicas, gp_role for Greenplum cluster identity, pg_exporters for external monitoring instances, and pg_offline_query for offline query instances. See the PG_ID parameter docs.
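
For instance, here is a minimal sketch of a standby cluster following pg-test via pg_upstream (the cluster name pg-test2 and the IP address are illustrative):

pg-test2:   # standby cluster of pg-test
  hosts:
    10.10.10.14: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.11 }  # replicate from the pg-test primary
  vars: { pg_cluster: pg-test2 }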


Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various PostgreSQL entities.

pg_up{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", job="pgsql"}
pg_up{cls="pg-test", ins="pg-test-2", ip="10.10.10.12", job="pgsql"}
pg_up{cls="pg-test", ins="pg-test-3", ip="10.10.10.13", job="pgsql"}

For example, the cls, ins, and ip labels correspond to the cluster name, instance name, and node IP: the identifiers of these three core entities. They appear along with the job label in all native monitoring metrics collected by VictoriaMetrics and in the log streams collected by VictoriaLogs.

The job name for PostgreSQL metrics is fixed as pgsql, while remotely monitored PG instances use the job pgrds. PostgreSQL CSV logs are collected under the job postgres, pgbackrest logs under pgbackrest, and other PG components ship logs under job: syslog.

Additionally, some entity identity labels appear in specific entity-related monitoring metrics, such as:

  • datname: Database name, if a metric belongs to a specific database.
  • relname: Table name, if a metric belongs to a specific table.
  • idxname: Index name, if a metric belongs to a specific index.
  • funcname: Function name, if a metric belongs to a specific function.
  • seqname: Sequence name, if a metric belongs to a specific sequence.
  • query: Query fingerprint, if a metric belongs to a specific query.
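
For example, a database-level metric carries the datname label alongside the core identity labels (the metric name below is illustrative):

pg_db_numbackends{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", datname="meta", job="pgsql"}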

2.2 - ETCD Cluster Model

Entity-Relationship model for ETCD clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The ETCD module organizes ETCD in production as clusters: logical entities composed of a group of ETCD instances associated through the Raft consensus protocol.

Each cluster is an autonomous distributed key-value storage unit consisting of at least one ETCD instance, exposing service capabilities through client ports.

There are three core entities in Pigsty’s ETCD module:

  • Cluster: An autonomous ETCD service unit serving as the top-level namespace for other entities.
  • Instance: A single ETCD server process running on a node, participating in Raft consensus.
  • Node: A hardware resource abstraction running Linux + Systemd environment, implicitly declared.

Compared to PostgreSQL clusters, the ETCD cluster model is simpler, with no Services or complex role distinctions. All ETCD instances are functionally equivalent: one is elected Leader through the Raft protocol while the others become Followers. During scale-out, intermediate non-voting Learner members are also allowed.


Examples

Let’s look at a concrete example with a three-node ETCD cluster:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd

The above config fragment defines a three-node ETCD cluster with these related entities:

| Cluster | Description |
|---------|-------------|
| etcd | ETCD 3-node HA cluster |

| Instance | Description |
|----------|-------------|
| etcd-1 | ETCD instance #1 |
| etcd-2 | ETCD instance #2 |
| etcd-3 | ETCD instance #3 |

| Node | Description |
|------|-------------|
| 10.10.10.10 | Node #1, hosts etcd-1 instance |
| 10.10.10.11 | Node #2, hosts etcd-2 instance |
| 10.10.10.12 | Node #3, hosts etcd-3 instance |

Identity Parameters

Pigsty uses the ETCD parameter group to assign deterministic identities to each ETCD module entity. Two parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| etcd_cluster | string | Cluster | ETCD cluster name, required | Valid DNS name, defaults to fixed etcd |
| etcd_seq | int | Instance | ETCD instance number, required | Natural number, starting from 1, unique within cluster |

With cluster name defined at cluster level and instance number assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ etcd_cluster }}-{{ etcd_seq }} | etcd-1, etcd-2, etcd-3 |

The ETCD module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address.


Ports & Protocols

Each ETCD instance listens on the following two ports:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 2379 | etcd_port | Client port, accessed by Patroni, vip-manager, etc. |
| 2380 | etcd_peer_port | Peer communication port, used for Raft consensus |

ETCD clusters enable TLS-encrypted communication by default and use the RBAC authentication mechanism. Clients need the correct certificates and passwords to access ETCD services.


Cluster Size

As a distributed coordination service, ETCD cluster size directly affects availability, requiring more than half (quorum) of nodes to be alive to maintain service.

| Cluster Size | Quorum | Fault Tolerance | Use Case |
|--------------|--------|-----------------|----------|
| 1 node | 1 | 0 | Dev, test, demo |
| 3 nodes | 2 | 1 | Small-medium production |
| 5 nodes | 3 | 2 | Large-scale production |

Therefore, even-sized ETCD clusters add no extra fault tolerance, and clusters larger than five nodes are uncommon. Typical sizes are one, three, or five nodes.
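
For reference, a five-node cluster is the same declaration with two more members (the extra IP addresses are illustrative):

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
    10.10.10.13: { etcd_seq: 4 }   # additional member
    10.10.10.14: { etcd_seq: 5 }   # additional member
  vars: { etcd_cluster: etcd }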


Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various ETCD entities.

etcd_up{cls="etcd", ins="etcd-1", ip="10.10.10.10", job="etcd"}
etcd_up{cls="etcd", ins="etcd-2", ip="10.10.10.11", job="etcd"}
etcd_up{cls="etcd", ins="etcd-3", ip="10.10.10.12", job="etcd"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all ETCD monitoring metrics collected by VictoriaMetrics. The job name for collecting ETCD metrics is fixed as etcd.

2.3 - MinIO Cluster Model

Entity-Relationship model for MinIO clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The MinIO module organizes MinIO in production as clusters: logical entities composed of a group of distributed MinIO instances, collectively providing highly available object storage services.

Each cluster is an autonomous S3-compatible object storage unit consisting of at least one MinIO instance, exposing service capabilities through the S3 API port.

There are three core entities in Pigsty’s MinIO module:

  • Cluster: An autonomous MinIO service unit serving as the top-level namespace for other entities.
  • Instance: A single MinIO server process running on a node, managing local disk storage.
  • Node: A hardware resource abstraction running Linux + Systemd environment, implicitly declared.

Additionally, MinIO has the concept of Storage Pool, used for smooth cluster scaling. A cluster can contain multiple storage pools, each composed of a group of nodes and disks.


Deployment Modes

MinIO supports three main deployment modes for different scenarios:

| Mode | Code | Description | Use Case |
|------|------|-------------|----------|
| Single-Node Single-Drive | SNSD | Single node, single data directory or disk | Dev, test, demo |
| Single-Node Multi-Drive | SNMD | Single node, multiple disks, typically 4+ | Resource-constrained small deployments |
| Multi-Node Multi-Drive | MNMD | Multiple nodes, multiple disks per node | Production recommended |

SNSD mode can use any directory as storage for quick experimentation; SNMD and MNMD modes require real disk mount points, otherwise MinIO will refuse to start.
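
As a sketch, an SNSD deployment for quick experimentation only needs one node and a plain directory (the path /data/minio is illustrative):

minio:
  hosts: { 10.10.10.10: { minio_seq: 1 } }
  vars:
    minio_cluster: minio
    minio_data: /data/minio   # a plain directory is acceptable in SNSD mode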


Examples

Let’s look at a concrete multi-node multi-drive example with a four-node MinIO cluster:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
    10.10.10.13: { minio_seq: 4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'
    minio_node: '${minio_cluster}-${minio_seq}.pigsty'

The above config fragment defines a four-node MinIO cluster with four disks per node:

| Cluster | Description |
|---------|-------------|
| minio | MinIO 4-node HA cluster |

| Instance | Description |
|----------|-------------|
| minio-1 | MinIO instance #1, managing 4 disks |
| minio-2 | MinIO instance #2, managing 4 disks |
| minio-3 | MinIO instance #3, managing 4 disks |
| minio-4 | MinIO instance #4, managing 4 disks |

| Node | Description |
|------|-------------|
| 10.10.10.10 | Node #1, hosts minio-1 instance |
| 10.10.10.11 | Node #2, hosts minio-2 instance |
| 10.10.10.12 | Node #3, hosts minio-3 instance |
| 10.10.10.13 | Node #4, hosts minio-4 instance |

Identity Parameters

Pigsty uses the MINIO parameter group to assign deterministic identities to each MinIO module entity. Two parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| minio_cluster | string | Cluster | MinIO cluster name, required | Valid DNS name, defaults to minio |
| minio_seq | int | Instance | MinIO instance number, required | Natural number, starting from 1, unique within cluster |

With cluster name defined at cluster level and instance number assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ minio_cluster }}-{{ minio_seq }} | minio-1, minio-2, minio-3, minio-4 |

The MinIO module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address. The minio_node parameter generates node names for MinIO cluster internal use (written to /etc/hosts for cluster discovery), not host node identity.


Core Configuration Parameters

Beyond identity parameters, the following parameters are critical for MinIO cluster configuration:

| Parameter | Type | Description |
|-----------|------|-------------|
| minio_data | path | Data directory, use {x...y} for multi-drive |
| minio_node | string | Node name pattern for multi-node deployment |
| minio_domain | string | Service domain, defaults to sss.pigsty |

These parameters together determine MinIO’s core config MINIO_VOLUMES:

  • SNSD: Direct minio_data value, e.g., /data/minio
  • SNMD: Expanded minio_data directories, e.g., /data{1...4}
  • MNMD: Combined minio_node and minio_data, e.g., https://minio-{1...4}.pigsty:9000/data{1...4}

Ports & Services

Each MinIO instance listens on the following ports:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 9000 | minio_port | S3 API service port |
| 9001 | minio_admin_port | Web admin console port |

MinIO enables HTTPS encrypted communication by default (controlled by minio_https). This is required for backup tools like pgBackREST to access MinIO.

Multi-node MinIO clusters can be accessed through any node. Best practice is to use a load balancer (e.g., HAProxy + VIP) as a unified access point.


Resource Provisioning

After MinIO cluster deployment, Pigsty automatically creates the following resources (controlled by minio_provision):

Default Buckets (defined by minio_buckets):

| Bucket | Purpose |
|--------|---------|
| pgsql | PostgreSQL pgBackREST backup storage |
| meta | Metadata storage, versioning enabled |
| data | General data storage |

Default Users (defined by minio_users):

| User | Default Password | Policy | Purpose |
|------|------------------|--------|---------|
| pgbackrest | S3User.Backup | pgsql | PostgreSQL backup dedicated user |
| s3user_meta | S3User.Meta | meta | Access meta bucket |
| s3user_data | S3User.Data | data | Access data bucket |

pgbackrest is used for PostgreSQL cluster backups; s3user_meta and s3user_data are reserved users not actively used.


Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various MinIO entities.

minio_up{cls="minio", ins="minio-1", ip="10.10.10.10", job="minio"}
minio_up{cls="minio", ins="minio-2", ip="10.10.10.11", job="minio"}
minio_up{cls="minio", ins="minio-3", ip="10.10.10.12", job="minio"}
minio_up{cls="minio", ins="minio-4", ip="10.10.10.13", job="minio"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all MinIO monitoring metrics collected by VictoriaMetrics. The job name for collecting MinIO metrics is fixed as minio.

2.4 - Redis Cluster Model

Entity-Relationship model for Redis clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The Redis module organizes Redis in production as clusters: logical entities composed of a group of Redis instances deployed on one or more nodes.

Each cluster is an autonomous high-performance cache/storage unit consisting of at least one Redis instance, exposing service capabilities through ports.

There are three core entities in Pigsty’s Redis module:

  • Cluster: An autonomous Redis service unit serving as the top-level namespace for other entities.
  • Instance: A single Redis server process running on a specific port on a node.
  • Node: A hardware resource abstraction running Linux + Systemd environment, can host multiple Redis instances, implicitly declared.

Unlike PostgreSQL, Redis uses a single-node multi-instance deployment model: one physical/virtual machine node typically hosts multiple Redis instances to fully utilize multi-core CPUs, so nodes and instances have a 1:N relationship. Additionally, keeping a single Redis instance under roughly 12GB of memory is generally advised in production.


Operating Modes

Redis has three different operating modes, specified by the redis_mode parameter:

| Mode | Code | Description | HA Mechanism |
|------|------|-------------|--------------|
| Standalone | standalone | Classic master-replica, default mode | Requires Sentinel |
| Sentinel | sentinel | HA monitoring and auto-failover for standalone | Multi-node quorum |
| Native Cluster | cluster | Redis native distributed cluster, no sentinel needed | Built-in auto-failover |

  • Standalone: Default mode, replication via replica_of parameter. Requires additional Sentinel cluster for HA.
  • Sentinel: Stores no business data, dedicated to monitoring standalone Redis clusters for auto-failover; multi-node itself provides HA.
  • Native Cluster: Data auto-sharded across multiple primaries, each can have multiple replicas, built-in HA, no sentinel needed.

Examples

Let’s look at concrete examples for each mode:

Standalone Cluster

Classic master-replica on a single node:

redis-ms:
  hosts:
    10.10.10.10:
      redis_node: 1
      redis_instances:
        6379: { }
        6380: { replica_of: '10.10.10.10 6379' }
  vars:
    redis_cluster: redis-ms
    redis_password: 'redis.ms'
    redis_max_memory: 64MB

| Cluster | Description |
|---------|-------------|
| redis-ms | Redis standalone cluster |

| Node | Description |
|------|-------------|
| redis-ms-1 | 10.10.10.10, Node #1, hosts 2 instances |

| Instance | Description |
|----------|-------------|
| redis-ms-1-6379 | Primary instance, listening on port 6379 |
| redis-ms-1-6380 | Replica instance, port 6380, replicates from 6379 |

Sentinel Cluster

Three sentinel instances on a single node for monitoring standalone clusters. Sentinel clusters specify monitored standalone clusters via redis_sentinel_monitor:

redis-sentinel:
  hosts:
    10.10.10.11:
      redis_node: 1
      redis_instances: { 26379: {}, 26380: {}, 26381: {} }
  vars:
    redis_cluster: redis-sentinel
    redis_password: 'redis.sentinel'
    redis_mode: sentinel
    redis_max_memory: 16MB
    redis_sentinel_monitor:
      - { name: redis-ms, host: 10.10.10.10, port: 6379, password: redis.ms, quorum: 2 }

Native Cluster

A Redis native distributed cluster with two nodes and six instances (minimum spec: 3 primaries, 3 replicas):

redis-test:
  hosts:
    10.10.10.12: { redis_node: 1, redis_instances: { 6379: {}, 6380: {}, 6381: {} } }
    10.10.10.13: { redis_node: 2, redis_instances: { 6379: {}, 6380: {}, 6381: {} } }
  vars:
    redis_cluster: redis-test
    redis_password: 'redis.test'
    redis_mode: cluster
    redis_max_memory: 32MB

This creates a 3 primary 3 replica native Redis cluster.

| Cluster | Description |
|---------|-------------|
| redis-test | Redis native cluster (3P3R) |

| Instance | Description |
|----------|-------------|
| redis-test-1-6379 | Instance on node 1, port 6379 |
| redis-test-1-6380 | Instance on node 1, port 6380 |
| redis-test-1-6381 | Instance on node 1, port 6381 |
| redis-test-2-6379 | Instance on node 2, port 6379 |
| redis-test-2-6380 | Instance on node 2, port 6380 |
| redis-test-2-6381 | Instance on node 2, port 6381 |

| Node | Description |
|------|-------------|
| redis-test-1 | 10.10.10.12, Node #1, hosts 3 instances |
| redis-test-2 | 10.10.10.13, Node #2, hosts 3 instances |

Identity Parameters

Pigsty uses the REDIS parameter group to assign deterministic identities to each Redis module entity. Three parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| redis_cluster | string | Cluster | Redis cluster name, required | Valid DNS name, regex [a-z][a-z0-9-]* |
| redis_node | int | Node | Redis node number, required | Natural number, starting from 1, unique within cluster |
| redis_instances | dict | Node | Redis instance definition, required | JSON object, key is port, value is instance config |

With cluster name defined at cluster level and node number/instance definition assigned at node level, Pigsty automatically generates unique identifiers for each entity:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ redis_cluster }}-{{ redis_node }}-{{ port }} | redis-ms-1-6379, redis-ms-1-6380 |

The Redis module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address. redis_node is used for instance naming, not host node identity.


Instance Definition

redis_instances is a JSON object with port number as key and instance config as value:

redis_instances:
  6379: { }                                      # Primary instance, no extra config
  6380: { replica_of: '10.10.10.10 6379' }       # Replica, specify upstream primary
  6381: { replica_of: '10.10.10.10 6379' }       # Replica, specify upstream primary

Each Redis instance listens on a unique port within the node. You can choose any port number, but avoid system reserved ports (< 1024) and conflicts with ports already used by Pigsty. The replica_of parameter sets the replication relationship in standalone mode; its format is '<ip> <port>', the address and port of the upstream primary.

Additionally, each Redis node runs a Redis Exporter collecting metrics from all local instances:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 9121 | redis_exporter_port | Redis Exporter port |

Redis’s single-node multi-instance deployment model has some limitations:

  • Node Exclusive: A node can only belong to one Redis cluster, not assigned to different clusters simultaneously.
  • Port Unique: Redis instances on the same node must use different ports to avoid conflicts.
  • Password Shared: Multiple instances on the same node cannot have different passwords (redis_exporter limitation).
  • Manual HA: Standalone Redis clusters require additional Sentinel configuration for auto-failover.

Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various Redis entities.

redis_up{cls="redis-ms", ins="redis-ms-1-6379", ip="10.10.10.10", job="redis"}
redis_up{cls="redis-ms", ins="redis-ms-1-6380", ip="10.10.10.10", job="redis"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all Redis monitoring metrics collected by VictoriaMetrics. The job name for collecting Redis metrics is fixed as redis.

2.5 - INFRA Node Model

Entity-Relationship model for INFRA infrastructure nodes in Pigsty, component composition, and naming conventions.

The INFRA module plays a special role in Pigsty: it’s not a traditional “cluster” but rather a management hub composed of a group of infrastructure nodes, providing core services for the entire Pigsty deployment. Each INFRA node is an autonomous infrastructure service unit running core components like Nginx, Grafana, and VictoriaMetrics, collectively providing observability and management capabilities for managed database clusters.

There are two core entities in Pigsty’s INFRA module:

  • Node: A server running infrastructure components—can be bare metal, VM, container, or Pod.
  • Component: Various infrastructure services running on nodes, such as Nginx, Grafana, VictoriaMetrics, etc.

INFRA nodes typically serve as Admin Nodes, the control plane of Pigsty.


Component Composition

Each INFRA node runs the following core components:

| Component | Port | Description |
|-----------|------|-------------|
| Nginx | 80/443 | Web portal, local repo, unified reverse proxy |
| Grafana | 3000 | Visualization platform, dashboards, data apps |
| VictoriaMetrics | 8428 | Time-series database, Prometheus API compatible |
| VictoriaLogs | 9428 | Log database, receives structured logs from Vector |
| VictoriaTraces | 10428 | Trace storage for slow SQL / request tracing |
| VMAlert | 8880 | Alert rule evaluator based on VictoriaMetrics |
| Alertmanager | 9059 | Alert aggregation and dispatch |
| Blackbox Exporter | 9115 | ICMP/TCP/HTTP black-box probing |
| DNSMASQ | 53 | DNS server for internal domain resolution |
| Chronyd | 123 | NTP time server |

These components together form Pigsty’s observability infrastructure.


Examples

Let’s look at a concrete example with a two-node INFRA deployment:

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }

The above config fragment defines a two-node INFRA deployment:

| Group | Description |
|-------|-------------|
| infra | INFRA infrastructure node group |

| Node | Description |
|------|-------------|
| infra-1 | 10.10.10.10, INFRA node #1 |
| infra-2 | 10.10.10.11, INFRA node #2 |

For production environments, deploying at least two INFRA nodes is recommended for infrastructure component redundancy.


Identity Parameters

Pigsty uses the INFRA_ID parameter group to assign deterministic identities to each INFRA module entity. One parameter is required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| infra_seq | int | Node | INFRA node sequence, required | Natural number, starting from 1, unique within group |

With node sequence assigned at node level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Node | infra-{{ infra_seq }} | infra-1, infra-2 |

The INFRA module assigns infra-N format identifiers to nodes for distinguishing multiple infrastructure nodes in the monitoring system. However, this doesn’t change the node’s hostname or system identity; nodes still use their existing hostname or IP address for identification.


Service Portal

INFRA nodes provide unified web service entry through Nginx. The infra_portal parameter defines services exposed through Nginx:

infra_portal:
  home         : { domain: i.pigsty }
  grafana      : { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
  prometheus   : { domain: p.pigsty, endpoint: "${admin_ip}:8428" }   # VMUI
  alertmanager : { domain: a.pigsty, endpoint: "${admin_ip}:9059" }

Users access different domains, and Nginx routes requests to corresponding backend services:

| Domain | Service | Description |
|--------|---------|-------------|
| i.pigsty | Home | Pigsty homepage |
| g.pigsty | Grafana | Monitoring dashboard |
| p.pigsty | VictoriaMetrics | TSDB Web UI |
| a.pigsty | Alertmanager | Alert management UI |

Accessing Pigsty services via domain names is recommended over direct IP + port.
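
Since these are internal domains, a common approach is to resolve them through the Pigsty DNS server or a static hosts entry on your workstation; a minimal sketch, assuming the INFRA node is 10.10.10.10:

# append to /etc/hosts on the client machine
10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty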


Deployment Scale

The number of INFRA nodes depends on deployment scale and HA requirements:

| Scale | INFRA Nodes | Description |
|-------|-------------|-------------|
| Dev/Test | 1 | Single-node deployment, all on one node |
| Small Prod | 1-2 | Single or dual node, can share with other services |
| Medium Prod | 2-3 | Dedicated INFRA nodes, redundant components |
| Large Prod | 3+ | Multiple INFRA nodes, component separation |

In a single-node deployment, INFRA components share the same node with PGSQL, ETCD, and other modules. In small-scale deployments, INFRA nodes typically also serve as the admin node (or backup admin node) and host the local software repository (/www/pigsty). In larger deployments, these responsibilities can be separated onto dedicated nodes.


Monitoring Label System

Pigsty’s monitoring system collects metrics from INFRA components themselves. Unlike database modules, each component in the INFRA module is treated as an independent monitoring object, distinguished by the cls (class) label.

| Label | Description | Example |
|-------|-------------|---------|
| cls | Component type, each forming a "class" | nginx |
| ins | Instance name, format {component}-{infra_seq} | nginx-1 |
| ip | INFRA node IP running the component | 10.10.10.10 |
| job | VictoriaMetrics scrape job, fixed as infra | infra |

Using a two-node INFRA deployment (infra_seq: 1 and infra_seq: 2) as example, component monitoring labels are:

| Component | cls | ins Example | Port |
|-----------|-----|-------------|------|
| Nginx | nginx | nginx-1, nginx-2 | 9113 |
| Grafana | grafana | grafana-1, grafana-2 | 3000 |
| VictoriaMetrics | vmetrics | vmetrics-1, vmetrics-2 | 8428 |
| VictoriaLogs | vlogs | vlogs-1, vlogs-2 | 9428 |
| VictoriaTraces | vtraces | vtraces-1, vtraces-2 | 10428 |
| VMAlert | vmalert | vmalert-1, vmalert-2 | 8880 |
| Alertmanager | alertmanager | alertmanager-1, alertmanager-2 | 9059 |
| Blackbox | blackbox | blackbox-1, blackbox-2 | 9115 |

All INFRA component metrics use a unified job="infra" label, distinguished by the cls label:

nginx_up{cls="nginx", ins="nginx-1", ip="10.10.10.10", job="infra"}
grafana_info{cls="grafana", ins="grafana-1", ip="10.10.10.10", job="infra"}
vm_app_version{cls="vmetrics", ins="vmetrics-1", ip="10.10.10.10", job="infra"}
vlogs_rows_ingested_total{cls="vlogs", ins="vlogs-1", ip="10.10.10.10", job="infra"}
alertmanager_alerts{cls="alertmanager", ins="alertmanager-1", ip="10.10.10.10", job="infra"}

3 - Infra as Code

Pigsty uses Infrastructure as Code (IaC) philosophy to manage all components, providing declarative management for large-scale clusters.

Pigsty follows the IaC and GitOps philosophy: use a declarative config inventory to describe the entire environment, and materialize it through idempotent playbooks.

Users describe their desired state declaratively through parameters, and playbooks idempotently adjust target nodes to reach that state. This is similar to Kubernetes CRDs & Operators, but Pigsty implements this functionality on bare metal and virtual machines through Ansible.

Pigsty was born to solve the operational management problem of ultra-large-scale PostgreSQL clusters. The idea behind it is simple — we need the ability to replicate the entire infrastructure (100+ database clusters + PG/Redis + observability) on ready servers within ten minutes. No GUI + ClickOps can complete such a complex task in such a short time, making CLI + IaC the only choice — it provides precise, efficient control.

The config inventory pigsty.yml file describes the state of the entire deployment. Whether it’s production (prod), staging, test, or development (devbox) environments, the difference between infrastructures lies only in the config inventory, while the deployment delivery logic is exactly the same.

You can use git for version control and auditing of this deployment "seed/gene", and Pigsty even supports storing the config inventory as database tables in a PostgreSQL CMDB, further achieving an Infra-as-Data capability that integrates seamlessly with your existing workflows.
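
A minimal sketch of putting the inventory under git version control (assuming the default ~/pigsty layout):

cd ~/pigsty
git init                                   # start tracking the deployment "gene"
git add pigsty.yml
git commit -m 'initial inventory for this environment'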

IaC is designed for professional users and enterprise scenarios but is also deeply optimized for individual developers and SMBs. Even if you're not a professional DBA, you don't need to understand the hundreds of tuning knobs and switches: all parameters come with well-performing defaults. You can get an out-of-the-box single-node database with zero configuration; simply add two more IP addresses to get an enterprise-grade high-availability PostgreSQL cluster.


Declare Modules

Take the following default config snippet as an example. It describes a node 10.10.10.10 with the INFRA, NODE, ETCD, MINIO, and PGSQL modules installed.

# monitoring, alerting, DNS, NTP and other infrastructure cluster...
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

# minio cluster, s3 compatible object storage
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

# etcd cluster, used as DCS for PostgreSQL high availability
etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

# PGSQL example cluster: pg-meta
pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }

To actually install these modules, execute the following playbooks:

./infra.yml -l 10.10.10.10  # Initialize infra module on node 10.10.10.10
./etcd.yml  -l 10.10.10.10  # Initialize etcd module on node 10.10.10.10
./minio.yml -l 10.10.10.10  # Initialize minio module on node 10.10.10.10
./pgsql.yml -l 10.10.10.10  # Initialize pgsql module on node 10.10.10.10

Declare Clusters

You can declare PostgreSQL database clusters by installing the PGSQL module on multiple nodes, making them a service unit:

For example, to deploy a three-node high-availability PostgreSQL cluster using streaming replication on the following three Pigsty-managed nodes, you can add the following definition to the all.children section of the config file pigsty.yml:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline }
  vars:  { pg_cluster: pg-test }

After defining, you can use playbooks to create the cluster:

bin/pgsql-add pg-test   # Create the pg-test cluster
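
As a sketch, the same wrapper can also be used to grow an existing cluster, assuming it accepts instance IPs as extra arguments:

bin/pgsql-add pg-test 10.10.10.14   # add a new replica, after declaring 10.10.10.14 under pg-test in pigsty.yml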

(Figure: pigsty-iac)

You can use different instance roles such as primary, replica, offline, delayed, and sync standby, as well as different cluster types, such as standby clusters, Citus clusters, and even Redis / MinIO / Etcd clusters.


Customize Cluster Content

Not only can you define clusters declaratively, but you can also define databases, users, services, and HBA rules within the cluster. For example, the following config file deeply customizes the content of the default pg-meta single-node database cluster:

It declares six business databases and seven business users, adds an extra standby service (a synchronous standby providing read capability with zero replication lag), defines additional pg_hba rules, an L2 VIP address pointing to the cluster primary, and a customized backup strategy.

pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
  vars:
    pg_cluster: pg-meta
    pg_databases:                       # define business databases on this cluster, array of database definition
      - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g files/)
        pgbouncer: true                 # optional, add this database to pgbouncer database list? true by default
        schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
        extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
          - { name: postgis , schema: public }
          - { name: timescaledb }
        comment: pigsty meta database   # optional, comment string for this database
        owner: postgres                # optional, database owner, postgres by default
        template: template1            # optional, which template to use, template1 by default
        encoding: UTF8                 # optional, database encoding, UTF8 by default. (MUST same as template database)
        locale: C                      # optional, database locale, C by default.  (MUST same as template database)
        lc_collate: C                  # optional, database collate, C by default. (MUST same as template database)
        lc_ctype: C                    # optional, database ctype, C by default.   (MUST same as template database)
        tablespace: pg_default         # optional, default tablespace, 'pg_default' by default.
        allowconn: true                # optional, allow connection, true by default. false will disable connect at all
        revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
        register_datasource: true      # optional, register this database to grafana datasources? true by default
        connlimit: -1                  # optional, database connection limit, default -1 disable limit
        pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
        pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
        pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
        pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
        pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
        pool_max_db_conn: 100          # optional, max database connections at database level, default 100
      - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
      - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
      - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
      - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
      - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
    pg_users:                           # define business users/roles on this cluster, array of user definition
      - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
        password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
        login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
        superuser: false                # optional, is superuser? false by default
        createdb: false                 # optional, can create database? false by default
        createrole: false               # optional, can create role? false by default
        inherit: true                   # optional, can this role use inherited privileges? true by default
        replication: false              # optional, can this role do replication? false by default
        bypassrls: false                # optional, can this role bypass row level security? false by default
        pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
        connlimit: -1                   # optional, user connection limit, default -1 disable limit
        expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
        expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
        comment: pigsty admin user      # optional, comment string for this user/role
        roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
        parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
        pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
        pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
      - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
      - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
      - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
      - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway   }
      - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
      - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }
    pg_services:                        # extra services in addition to pg_default_services, array of service definition
      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
      - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
        port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
        ip: "*"                         # optional, service bind ip address, `*` for all ip by default
        selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
        dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
        check: /sync                    # optional, health check url path, / by default
        backup: "[? pg_role == `primary`]"  # backup server selector
        maxconn: 3000                   # optional, max allowed front-end connection
        balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    pg_hba_rules:
      - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1
    node_crontab:  # make a full backup 1 am everyday
      - '00 01 * * * postgres /pg/bin/pg-backup full'

Declare Access Control

You can also deeply customize Pigsty’s access control capabilities through declarative configuration. For example, the following config file provides deep security customization for the pg-meta cluster:

It uses the crit.yml template for the three-node core cluster, prioritizing data consistency with zero data loss during failover. It enables an L2 VIP and restricts the database and connection pool listening addresses to three specific addresses: the local loopback IP, the internal network IP, and the VIP. The template enforces SSL on Patroni's API and on Pgbouncer, and the HBA rules enforce SSL for access to the database cluster. It also enables the $libdir/passwordcheck extension in pg_libs to enforce a password strength policy.

Finally, a separate pg-meta-delay cluster is declared as pg-meta’s delayed replica from one hour ago, for emergency data deletion recovery.

pg-meta:      # 3 instance postgres cluster `pg-meta`
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
    pg_conf: crit.yml
    pg_users:
      - { name: dbuser_meta , password: DBUser.Meta   , pgbouncer: true , roles: [ dbrole_admin ] , comment: pigsty admin user }
      - { name: dbuser_view , password: DBUser.Viewer , pgbouncer: true , roles: [ dbrole_readonly ] , comment: read-only viewer for meta database }
    pg_databases:
      - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public}, {name: timescaledb}]}
    pg_default_service_dest: postgres
    pg_services:
      - { name: standby ,src_ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1
    pg_listen: '${ip},${vip},${lo}'
    patroni_ssl_enabled: true
    pgbouncer_sslmode: require
    pgbackrest_method: minio
    pg_libs: 'timescaledb, $libdir/passwordcheck, pg_stat_statements, auto_explain' # add passwordcheck extension to enforce strong password
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [pg_monitor, dbrole_readonly]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,expire_in: 7300 ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules:             # postgres host-based auth rules by default
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  }
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' }
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost'}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet' }
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet' }
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' }
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password'}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   }
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'   }
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket'}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'     }
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet'}
    pgb_default_hba_rules:            # pgbouncer host-based authentication rules
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident'}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' }
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd' }
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' }
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'   }
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   }
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd' }

# OPTIONAL delayed cluster for pg-meta
pg-meta-delay:                    # delayed instance for pg-meta (1 hour ago)
  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
  vars: { pg_cluster: pg-meta-delay }

Citus Distributed Cluster

Below is a declarative configuration for a four-node Citus distributed cluster:

all:
  children:
    pg-citus0: # citus coordinator, pg_group = 0
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0 , pg_group: 0 }
    pg-citus1: # citus data node 1
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1 , pg_group: 1 }
    pg-citus2: # citus data node 2
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus2 , pg_group: 2 }
    pg-citus3: # citus data node 3, with an extra replica
      hosts:
        10.10.10.13: { pg_seq: 1, pg_role: primary }
        10.10.10.14: { pg_seq: 2, pg_role: replica }
      vars: { pg_cluster: pg-citus3 , pg_group: 3 }
  vars:                               # global parameters for all citus clusters
    pg_mode: citus                    # pgsql cluster mode: citus
    pg_shard: pg-citus                # citus shard name: pg-citus
    patroni_citus_db: meta            # citus distributed database name
    pg_dbsu_password: DBUser.Postgres # all dbsu password access for citus cluster
    pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta ,extensions: [ { name: citus }, { name: postgis }, { name: timescaledb } ] } ]
    pg_hba_rules:
      - { user: 'all' ,db: all  ,addr: 127.0.0.1/32 ,auth: ssl ,title: 'all user ssl access from localhost' }
      - { user: 'all' ,db: all  ,addr: intra        ,auth: ssl ,title: 'all user ssl access from intranet'  }

Redis Clusters

Below are declarative configuration examples for Redis primary-replica cluster, sentinel cluster, and Redis Cluster:

redis-ms: # redis classic primary & replica
  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
  vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

redis-meta: # redis sentinel x 3
  hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
  vars:
    redis_cluster: redis-meta
    redis_password: 'redis.meta'
    redis_mode: sentinel
    redis_max_memory: 16MB
    redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
      - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

redis-test: # redis native cluster: 3m x 3s
  hosts:
    10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
    10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
  vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }

ETCD Cluster

Below is a declarative configuration example for a three-node Etcd cluster:

etcd: # dcs service for postgres/patroni ha consensus
  hosts:  # 1 node for testing, 3 or 5 for production
    10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
    10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }  # odd number please
  vars: # cluster level parameter override roles/etcd
    etcd_cluster: etcd  # mark etcd cluster name etcd
    etcd_safeguard: false # safeguard against purging
    etcd_clean: true # purge etcd during init process

MinIO Cluster

Below is a declarative configuration example for a three-node MinIO cluster:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...2}'          # use two disks per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # node name pattern
    haproxy_services:
      - name: minio                     # [required] service name, must be unique
        port: 9002                      # [required] service port, must be unique
        options:
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

3.1 - Inventory

Describe your infrastructure and clusters using declarative configuration files

Every Pigsty deployment corresponds to an Inventory that describes key properties of the infrastructure and database clusters.


Configuration File

Pigsty uses Ansible YAML configuration format by default, with a single YAML configuration file pigsty.yml as the inventory.

~/pigsty
  ^---- pigsty.yml   # <---- Default configuration file

You can directly edit this configuration file to customize your deployment, or use the configure wizard script provided by Pigsty to automatically generate an appropriate configuration file.


Configuration Structure

The inventory uses standard Ansible YAML configuration format, consisting of two parts: global parameters (all.vars) and multiple groups (all.children).

You can define new clusters in all.children and describe the infrastructure using global variables: all.vars, which looks like this:

all:                  # Top-level object: all
  vars: {...}         # Global parameters
  children:           # Group definitions
    infra:            # Group definition: 'infra'
      hosts: {...}        # Group members: 'infra'
      vars:  {...}        # Group parameters: 'infra'
    etcd:    {...}    # Group definition: 'etcd'
    pg-meta: {...}    # Group definition: 'pg-meta'
    pg-test: {...}    # Group definition: 'pg-test'
    redis-test: {...} # Group definition: 'redis-test'
    # ...

Cluster Definition

Each Ansible group may represent a cluster, which can be a node cluster, PostgreSQL cluster, Redis cluster, Etcd cluster, MinIO cluster, etc.

A cluster definition consists of two parts: cluster members (hosts) and cluster parameters (vars). You can define cluster members in <cls>.hosts and describe the cluster using configuration parameters in <cls>.vars. Here’s an example of a 3-node high-availability PostgreSQL cluster definition:

all:
  children:    # Ansible group list
    pg-test:   # Ansible group name
      hosts:   # Ansible group instances (cluster members)
        10.10.10.11: { pg_seq: 1, pg_role: primary } # Host 1
        10.10.10.12: { pg_seq: 2, pg_role: replica } # Host 2
        10.10.10.13: { pg_seq: 3, pg_role: offline } # Host 3
      vars:    # Ansible group variables (cluster parameters)
        pg_cluster: pg-test

Cluster-level vars (cluster parameters) override global parameters, and instance-level vars override both cluster parameters and global parameters.
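
A minimal sketch of this precedence, using pg_version as the overridden parameter (values are illustrative):

all:
  vars:
    pg_version: 17                  # global default for all clusters
  children:
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-test
        pg_version: 16              # cluster-level value overrides the global default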


Splitting Configuration

If your deployment is large or you want to better organize configuration files, you can split the inventory into multiple files for easier management and maintenance.

inventory/
├── hosts.yml              # Host and cluster definitions
├── group_vars/
│   ├── all.yml            # Global default variables (corresponds to all.vars)
│   ├── infra.yml          # infra group variables
│   ├── etcd.yml           # etcd group variables
│   └── pg-meta.yml        # pg-meta cluster variables
└── host_vars/
    ├── 10.10.10.10.yml    # Specific host variables
    └── 10.10.10.11.yml

You can place cluster member definitions in the hosts.yml file and put cluster-level configuration parameters in corresponding files under the group_vars directory.
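
For example, a sketch of how a pg-meta cluster could be split across these files (contents are illustrative):

# inventory/hosts.yml: cluster members only
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }

# inventory/group_vars/pg-meta.yml: cluster parameters
pg_cluster: pg-meta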


Switching Configuration

You can temporarily specify a different inventory file when running playbooks using the -i parameter.

./pgsql.yml -i another_config.yml
./infra.yml -i nginx_config.yml

Additionally, Ansible supports multiple configuration methods: you can use local YAML or INI configuration files, or use a CMDB or any dynamic inventory script as the configuration source.
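
For example, you can inspect whichever inventory source is in effect with Ansible's own tooling (standard ansible-inventory commands, not Pigsty-specific):

ansible-inventory -i pigsty.yml --list    # render the parsed inventory as JSON
ansible-inventory -i pigsty.yml --graph   # show the group / host hierarchy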

In Pigsty, ansible.cfg in the Pigsty home directory sets pigsty.yml in the same directory as the default inventory. You can modify it as needed.

[defaults]
inventory = pigsty.yml

Additionally, Pigsty supports using a CMDB metabase to store the inventory, facilitating integration with existing systems.

3.2 - Configure

Use the configure script to automatically generate recommended configuration files based on your environment.

Pigsty provides a configure script as a configuration wizard that automatically generates an appropriate pigsty.yml configuration file based on your current environment.

This is an optional script: if you already understand how to configure Pigsty, you can directly edit the pigsty.yml configuration file and skip the wizard.


Quick Start

Enter the pigsty source home directory and run ./configure to automatically start the configuration wizard. Without any arguments, it defaults to the meta single-node configuration template:

cd ~/pigsty
./configure          # Interactive configuration wizard, auto-detect environment and generate config

This command will use the selected template as a base, detect the current node’s IP address and region, and generate a pigsty.yml configuration file suitable for the current environment.

Features

The configure script performs the following adjustments based on environment and input, generating a pigsty.yml configuration file in the current directory.

  • Detects the current node IP address; if multiple IPs exist, prompts the user to input a primary IP address as the node’s identity
  • Uses the IP address to replace the placeholder 10.10.10.10 in the configuration template and sets it as the admin_ip parameter value
  • Detects the current region, setting region to default (global default repos) or china (using Chinese mirror repos)
  • For micro instances (vCPU < 4), uses the tiny parameter template for node_tune and pg_conf to optimize resource usage
  • If -v PG major version is specified, sets pg_version and all PG alias parameters to the corresponding major version
  • If -g is specified, replaces all default passwords with randomly generated strong passwords for enhanced security (strongly recommended)
  • When the PG major version is ≥ 17, prefers PostgreSQL's built-in C.UTF-8 locale, or the OS-provided C.UTF-8 otherwise
  • Checks if the core dependency ansible for deployment is available in the current environment
  • Also checks if the deployment target node is SSH-reachable and can execute commands with sudo (-s to skip)

Usage Examples

# Basic usage
./configure                       # Interactive configuration wizard
./configure -i 10.10.10.10        # Specify primary IP address

# Specify configuration template
./configure -c meta               # Use default single-node template (default)
./configure -c rich               # Use feature-rich single-node template
./configure -c slim               # Use minimal template (PGSQL + ETCD only)
./configure -c ha/full            # Use 4-node HA sandbox template
./configure -c ha/trio            # Use 3-node HA template
./configure -c app/supa           # Use Supabase self-hosted template

# Specify PostgreSQL version
./configure -v 17                 # Use PostgreSQL 17
./configure -v 16                 # Use PostgreSQL 16
./configure -c rich -v 16         # rich template + PG 16

# Region and proxy
./configure -r china              # Use Chinese mirrors
./configure -r europe             # Use European mirrors
./configure -x                    # Import current proxy environment variables

# Skip and automation
./configure -s                    # Skip IP detection, keep placeholder
./configure -n -i 10.10.10.10     # Non-interactive mode with specified IP
./configure -c ha/full -s         # 4-node template, skip IP replacement

# Security enhancement
./configure -g                    # Generate random passwords
./configure -c meta -g -i 10.10.10.10  # Complete production configuration

# Specify output and SSH port
./configure -o prod.yml           # Output to prod.yml
./configure -p 2222               # Use SSH port 2222

Command Arguments

./configure
    [-c|--conf <template>]      # Configuration template name (meta|rich|slim|ha/full|...)
    [-i|--ip <ipaddr>]          # Specify primary IP address
    [-v|--version <pgver>]      # PostgreSQL major version (13|14|15|16|17|18)
    [-r|--region <region>]      # Upstream software repo region (default|china|europe)
    [-o|--output <file>]        # Output configuration file path (default: pigsty.yml)
    [-s|--skip]                 # Skip IP address detection and replacement
    [-x|--proxy]                # Import proxy settings from environment variables
    [-n|--non-interactive]      # Non-interactive mode (don't ask any questions)
    [-p|--port <port>]          # Specify SSH port
    [-g|--generate]             # Generate random passwords
    [-h|--help]                 # Display help information

Argument Details

Argument                  Description
-c, --conf                Generate config from conf/<template>.yml, supports subdirectories like ha/full
-i, --ip                  Replace placeholder 10.10.10.10 in config template with specified IP
-v, --version             Specify PostgreSQL major version (13-18), keeps template default if not specified
-r, --region              Set software repo mirror region: default, china (Chinese mirrors), europe (European)
-o, --output              Specify output file path, defaults to pigsty.yml
-s, --skip                Skip IP address detection and replacement, keep 10.10.10.10 placeholder in template
-x, --proxy               Write current environment proxy variables (HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, NO_PROXY) to config
-n, --non-interactive     Non-interactive mode, don't ask any questions (requires -i to specify IP)
-p, --port                Specify SSH port (when using non-default port 22)
-g, --generate            Generate random values for passwords in config file, improving security (strongly recommended)

Execution Flow

The configure script executes detection and configuration in the following order:

┌─────────────────────────────────────────────────────────────┐
│                  configure Execution Flow                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. check_region          Detect network region (GFW check) │
│         ↓                                                   │
│  2. check_version         Validate PostgreSQL version       │
│         ↓                                                   │
│  3. check_kernel          Detect OS kernel (Linux/Darwin)   │
│         ↓                                                   │
│  4. check_machine         Detect CPU arch (x86_64/aarch64)  │
│         ↓                                                   │
│  5. check_package_manager Detect package manager (dnf/yum/apt) │
│         ↓                                                   │
│  6. check_vendor_version  Detect OS distro and version      │
│         ↓                                                   │
│  7. check_sudo            Detect passwordless sudo          │
│         ↓                                                   │
│  8. check_ssh             Detect passwordless SSH to self   │
│         ↓                                                   │
│  9. check_proxy           Handle proxy environment vars     │
│         ↓                                                   │
│ 10. check_ipaddr          Detect/input primary IP address   │
│         ↓                                                   │
│ 11. check_admin           Validate admin SSH + Sudo access  │
│         ↓                                                   │
│ 12. check_conf            Select configuration template     │
│         ↓                                                   │
│ 13. check_config          Generate configuration file       │
│         ↓                                                   │
│ 14. check_utils           Check if Ansible etc. installed   │
│         ↓                                                   │
│     ✓ Configuration complete, output pigsty.yml             │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Automatic Behaviors

Region Detection

The script automatically detects the network environment to determine if you’re in mainland China (behind GFW):

# Check network environment by accessing Google
curl -I -s --connect-timeout 1 www.google.com
  • If Google is inaccessible, automatically sets region: china to use domestic mirrors
  • If accessible, uses region: default default mirrors
  • Can manually specify region via -r argument

IP Address Handling

The script determines the primary IP address in the following priority:

  1. Command line argument: If IP is specified via -i, use it directly
  2. Single IP detection: If the current node has only one IP, use it automatically
  3. Demo IP detection: If 10.10.10.10 is detected, select it automatically (for sandbox environments)
  4. Interactive input: When multiple IPs exist, prompt user to choose or input
[WARN] Multiple IP address candidates found:
    (1) 192.168.1.100   inet 192.168.1.100/24 scope global eth0
    (2) 10.10.10.10     inet 10.10.10.10/24 scope global eth1
[ IN ] INPUT primary_ip address (of current meta node, e.g 10.10.10.10):
=> 10.10.10.10
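
For illustration, the detection logic described above can be approximated with standard shell tooling; this is only a sketch of the idea, not the actual implementation used by configure:

# List global-scope IPv4 addresses on this node (assumes the iproute2 `ip` tool)
ips=$(ip -4 -o addr show scope global | awk '{print $4}' | cut -d/ -f1)
if [ "$(echo "$ips" | wc -l)" -eq 1 ]; then
    primary_ip=$ips                          # single IP: use it automatically
elif echo "$ips" | grep -qx '10.10.10.10'; then
    primary_ip=10.10.10.10                   # demo placeholder IP present: prefer it
else
    read -r -p "INPUT primary_ip address: " primary_ip   # otherwise ask the user
fi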

Low-End Hardware Optimization

When CPU core count ≤ 4 is detected, the script automatically adjusts configuration:

[WARN] replace oltp template with tiny due to cpu < 4

This ensures smooth operation on low-spec virtual machines.

Locale Settings

The script automatically enables C.UTF-8 as the default locale when:

  • PostgreSQL version ≥ 17 (built-in Locale Provider support)
  • Or the current system supports C.UTF-8 / C.utf8 locale
pg_locale: C.UTF-8
pg_lc_collate: C.UTF-8
pg_lc_ctype: C.UTF-8

China Region Special Handling

When region is set to china, the script automatically:

  • Enables docker_registry_mirrors Docker mirror acceleration
  • Enables PIP_MIRROR_URL Python mirror acceleration

Password Generation

When using the -g argument, the script generates 24-character random strings for the following passwords:

Password Parameter          Description
grafana_admin_password      Grafana admin password
pg_admin_password           PostgreSQL admin password
pg_monitor_password         PostgreSQL monitor user password
pg_replication_password     PostgreSQL replication user password
patroni_password            Patroni API password
haproxy_admin_password      HAProxy admin password
minio_secret_key            MinIO Secret Key
etcd_root_password          ETCD Root password

It also replaces the following placeholder passwords:

  • DBUser.Meta → random password
  • DBUser.Viewer → random password
  • S3User.Backup → random password
  • S3User.Meta → random password
  • S3User.Data → random password
$ ./configure -g
[INFO] generating random passwords...
    grafana_admin_password   : xK9mL2nP4qR7sT1vW3yZ5bD8
    pg_admin_password        : aB3cD5eF7gH9iJ1kL2mN4oP6
    ...
[INFO] random passwords generated, check and save them
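
The effect of -g can be reproduced manually with common shell tools. A minimal sketch (assuming /dev/urandom, GNU coreutils, and OpenSSL are available; the actual script may generate passwords differently):

# Generate a 24-character alphanumeric random password
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo
# Or with OpenSSL, trimmed to 24 characters
openssl rand -base64 24 | tr -d '/+=' | cut -c1-24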

Configuration Templates

The script reads configuration templates from the conf/ directory, supporting the following templates:

Core Templates

Template    Description
meta        Default template: Single-node installation with INFRA + NODE + ETCD + PGSQL
rich        Feature-rich version: Includes almost all extensions, MinIO, local repo
slim        Minimal version: PostgreSQL + ETCD only, no monitoring infrastructure
fat         Complete version: rich base with more extensions installed
pgsql       Pure PostgreSQL template
infra       Pure infrastructure template

HA Templates (ha/)

Template    Description
ha/dual     2-node HA cluster
ha/trio     3-node HA cluster
ha/full     4-node complete sandbox environment
ha/safe     Security-hardened HA configuration
ha/simu     42-node large-scale simulation environment

Application Templates (app/)

Template        Description
supabase        Supabase self-hosted configuration
app/dify        Dify AI platform configuration
app/odoo        Odoo ERP configuration
app/teable      Teable table database configuration
app/registry    Docker Registry configuration

Special Kernel Templates

Template    Description
ivory       IvorySQL: Oracle-compatible PostgreSQL
mssql       Babelfish: SQL Server-compatible PostgreSQL
polar       PolarDB: Alibaba Cloud open-source distributed PostgreSQL
citus       Citus: Distributed PostgreSQL
oriole      OrioleDB: Next-generation storage engine

Demo Templates (demo/)

Template      Description
demo/demo     Demo environment configuration
demo/redis    Redis cluster demo
demo/minio    MinIO cluster demo

Output Example

$ ./configure
configure pigsty v4.0.0 begin
[ OK ] region = china
[ OK ] kernel  = Linux
[ OK ] machine = x86_64
[ OK ] package = rpm,dnf
[ OK ] vendor  = rocky (Rocky Linux)
[ OK ] version = 9 (9.5)
[ OK ] sudo = vagrant ok
[ OK ] ssh = [email protected] ok
[WARN] Multiple IP address candidates found:
    (1) 192.168.121.193	    inet 192.168.121.193/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
    (2) 10.10.10.10	    inet 10.10.10.10/24 brd 10.10.10.255 scope global noprefixroute eth1
[ OK ] primary_ip = 10.10.10.10 (from demo)
[ OK ] admin = [email protected] ok
[ OK ] mode = meta (el9)
[ OK ] locale  = C.UTF-8
[ OK ] ansible = ready
[ OK ] pigsty configured
[WARN] don't forget to check it and change passwords!
proceed with ./deploy.yml

Environment Variables

The script supports the following environment variables:

Environment Variable    Description                      Default
PIGSTY_HOME             Pigsty installation directory    ~/pigsty
METADB_URL              Metabase connection URL          service=meta
HTTP_PROXY              HTTP proxy                       -
HTTPS_PROXY             HTTPS proxy                      -
ALL_PROXY               Universal proxy                  -
NO_PROXY                Proxy whitelist                  Built-in default

Notes

  1. Passwordless access: Before running configure, ensure the current user has passwordless sudo privileges and passwordless SSH to localhost. This can be automatically configured via the bootstrap script.

  2. IP address selection: Choose an internal IP as the primary IP address, not a public IP or 127.0.0.1.

  3. Password security: In production environments, always modify default passwords in the configuration file, or use the -g argument to generate random passwords.

  4. Configuration review: After the script completes, it’s recommended to review the generated pigsty.yml file to confirm the configuration meets expectations.

  5. Multiple executions: You can run configure multiple times to regenerate configuration; each run will overwrite the existing pigsty.yml.

  6. macOS limitations: When running on macOS, the script skips some Linux-specific checks and uses placeholder IP 10.10.10.10. macOS can only serve as an admin node.


FAQ

How to use a custom configuration template?

Place your configuration file in the conf/ directory, then specify it with the -c argument:

cp my-config.yml ~/pigsty/conf/myconf.yml
./configure -c myconf

How to generate different configurations for multiple clusters?

Use the -o argument to specify different output files:

./configure -c ha/full -o cluster-a.yml
./configure -c ha/trio -o cluster-b.yml

Then specify the configuration file when running playbooks:

./deploy.yml -i cluster-a.yml

How to handle multiple IPs in non-interactive mode?

You must explicitly specify the IP address using the -i argument:

./configure -n -i 10.10.10.10

How to keep the placeholder IP in the template?

Use the -s argument to skip IP replacement:

./configure -c ha/full -s   # Keep 10.10.10.10 placeholder

  • Inventory: Understand the Ansible inventory structure
  • Parameters: Understand Pigsty parameter hierarchy and priority
  • Templates: View all available configuration templates
  • Installation: Understand the complete installation process
  • Metabase: Use PostgreSQL as a dynamic configuration source

3.3 - Parameters

Fine-tune Pigsty customization using configuration parameters

In the inventory, you can use various parameters to fine-tune Pigsty customization. These parameters cover everything from infrastructure settings to database configuration.


Parameter List

Pigsty provides approximately 380+ configuration parameters distributed across 8 default modules for fine-grained control of various system aspects. See Reference - Parameter List for the complete list.

Module    Groups    Params    Description
PGSQL     9         123       Core configuration for PostgreSQL database clusters
INFRA     10        82        Infrastructure: repos, Nginx, DNS, monitoring, Grafana, etc.
NODE      11        83        Host node tuning: identity, DNS, packages, tuning, security, admin, time, VIP, etc.
ETCD      2         13        Distributed configuration store and service discovery
REDIS     1         21        Redis cache and data structure server
MINIO     2         21        S3-compatible object storage service
FERRET    1         9         MongoDB-compatible database FerretDB
DOCKER    1         8         Docker container engine

Parameter Form

Parameters are key-value pairs that describe entities. The Key is a string, and the Value can be one of five types: boolean, string, number, array, or object.

all:                            # <------- Top-level object: all
  vars:
    admin_ip: 10.10.10.10       # <------- Global configuration parameter
  children:
    pg-meta:                    # <------- pg-meta group
      vars:
        pg_cluster: pg-meta     # <------- Cluster-level parameter
      hosts:
        10.10.10.10:            # <------- Host node IP
          pg_seq: 1
          pg_role: primary      # <------- Instance-level parameter

Parameter Priority

Parameters can be set at different levels with the following priority:

Level            Location                     Description                                    Priority
CLI              -e command line argument     Passed via command line                        Highest (5)
Host/Instance    <group>.hosts.<host>         Parameters specific to a single host           Higher (4)
Group/Cluster    <group>.vars                 Parameters shared by hosts in group/cluster    Medium (3)
Global           all.vars                     Parameters shared by all hosts                 Lower (2)
Default          <roles>/default/main.yml     Role implementation defaults                   Lowest (1)

Here are some examples of parameter priority:

  • Use command line parameter -e grafana_clean=true when running playbooks to wipe Grafana data
  • Use instance-level parameter pg_role on host variables to override pg instance role
  • Use cluster-level parameter pg_cluster on group variables to override pg cluster name
  • Use global parameter node_ntp_servers on global variables to specify global NTP servers
  • If pg_version is not set, Pigsty will use the default value from the pgsql role implementation (default is 18)

Except for identity parameters, every parameter has an appropriate default value, so explicit setting is not required.
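
For example, a command-line override sits at the highest priority level and wins over host, cluster, and global values. A sketch (pgsql.yml is Pigsty's standard PostgreSQL provisioning playbook; the -l limit and the chosen value here are purely illustrative):

./pgsql.yml -l pg-test -e pg_version=17    # -e overrides any pg_version set in the inventory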


Identity Parameters

Identity parameters are special parameters that serve as entity ID identifiers, therefore they have no default values and must be explicitly set.

Module    Identity Parameters
PGSQL     pg_cluster, pg_seq, pg_role, …
NODE      nodename, node_cluster
ETCD      etcd_cluster, etcd_seq
MINIO     minio_cluster, minio_seq
REDIS     redis_cluster, redis_node, redis_instances
INFRA     infra_seq

Exceptions are etcd_cluster and minio_cluster which have default values. This assumes each deployment has only one etcd cluster for DCS and one optional MinIO cluster for centralized backup storage, so they are assigned default cluster names etcd and minio. However, you can still deploy multiple etcd or MinIO clusters using different names.

3.4 - Conf Templates

Use pre-made configuration templates to quickly generate configuration files adapted to your environment

In Pigsty, deployment blueprint details are defined by the inventory, which is the pigsty.yml configuration file. You can customize it through declarative configuration.

However, writing configuration files directly can be daunting for new users. To address this, we provide some ready-to-use configuration templates covering common usage scenarios.

Each template is a predefined pigsty.yml configuration file containing reasonable defaults suitable for specific scenarios.

You can choose a template as your customization starting point, then modify it as needed to meet your specific requirements.


Using Templates

Pigsty provides the configure script as an optional configuration wizard that generates an inventory with good defaults based on your environment and input.

Use ./configure -c <conf> to specify a configuration template, where <conf> is the path relative to the conf directory (the .yml suffix can be omitted).

./configure                     # Default to meta.yml configuration template
./configure -c meta             # Explicitly specify meta.yml single-node template
./configure -c rich             # Use feature-rich template with all extensions and MinIO
./configure -c slim             # Use minimal single-node template

# Use different database kernels
./configure -c pgsql            # Native PostgreSQL kernel, basic features (13~18)
./configure -c citus            # Citus distributed HA PostgreSQL (14~17)
./configure -c mssql            # Babelfish kernel, SQL Server protocol compatible (15)
./configure -c polar            # PolarDB PG kernel, Aurora/RAC style (15)
./configure -c ivory            # IvorySQL kernel, Oracle syntax compatible (18)
./configure -c mysql            # OpenHalo kernel, MySQL compatible (14)
./configure -c pgtde            # Percona PostgreSQL Server transparent encryption (18)
./configure -c oriole           # OrioleDB kernel, OLTP enhanced (17)
./configure -c supabase         # Supabase self-hosted configuration (15~18)

# Use multi-node HA templates
./configure -c ha/dual          # Use 2-node HA template
./configure -c ha/trio          # Use 3-node HA template
./configure -c ha/full          # Use 4-node HA template

If no template is specified, Pigsty defaults to the meta.yml single-node configuration template.


Template List

Main Templates

The following are single-node configuration templates for installing Pigsty on a single server:

Template    Description
meta.yml    Default template, single-node PostgreSQL online installation
rich.yml    Feature-rich template with local repo, MinIO, and more examples
slim.yml    Minimal template, PostgreSQL only without monitoring and infrastructure

Database Kernel Templates

Templates for various database management systems and kernels:

Template        Description
pgsql.yml       Native PostgreSQL kernel, basic features (13~18)
citus.yml       Citus distributed HA PostgreSQL (14~17)
mssql.yml       Babelfish kernel, SQL Server protocol compatible (15)
polar.yml       PolarDB PG kernel, Aurora/RAC style (15)
ivory.yml       IvorySQL kernel, Oracle syntax compatible (17)
mysql.yml       OpenHalo kernel, MySQL compatible (14)
pgtde.yml       Percona PostgreSQL Server transparent encryption (17)
oriole.yml      OrioleDB kernel, OLTP enhanced (17, Debian pkg pending)
supabase.yml    Supabase self-hosted configuration (15~17)

You can add more nodes later or use HA templates to plan your cluster from the start.


HA Templates

You can configure Pigsty to run on multiple nodes, forming a high-availability (HA) cluster:

Template    Description
dual.yml    2-node semi-HA deployment
trio.yml    3-node standard HA deployment
full.yml    4-node standard deployment
safe.yml    4-node security-enhanced deployment with delayed replica
simu.yml    20-node production environment simulation

Application Templates

You can use the following templates to run Docker applications/software:

Template        Description
supa.yml        Start single-node Supabase
odoo.yml        Start Odoo ERP system
dify.yml        Start Dify AI workflow system
electric.yml    Start Electric sync engine

Demo Templates

Besides main templates, Pigsty provides a set of demo templates for different scenarios:

Template      Description
el.yml        Full-parameter config file for EL 8/9 systems
debian.yml    Full-parameter config file for Debian/Ubuntu systems
remote.yml    Example config for monitoring remote PostgreSQL clusters or RDS
redis.yml     Redis cluster example configuration
minio.yml     3-node MinIO cluster example configuration
demo.yml      Configuration file for Pigsty public demo site

Build Templates

The following configuration templates are for development and testing purposes:

Template     Description
build.yml    Open source build config for EL 9/10, Debian 12/13, Ubuntu 22.04/24.04

3.5 - Use CMDB as Config Inventory

Use PostgreSQL as a CMDB metabase to store Ansible inventory.

Pigsty allows you to use a PostgreSQL metabase as a dynamic configuration source, replacing static YAML configuration files for more powerful configuration management capabilities.


Overview

CMDB (Configuration Management Database) is a method of storing configuration information in a database for management.

In Pigsty, the default configuration source is a static YAML file pigsty.yml, which serves as Ansible’s inventory.

This approach is simple and direct, but when infrastructure scales and requires complex, fine-grained management and external integration, a single static file becomes insufficient.

Feature                 Static YAML File                                     CMDB Metabase
Querying                Manual search/grep                                   SQL queries with any conditions, aggregation analysis
Versioning              Depends on Git or manual backup                      Database transactions, audit logs, time-travel snapshots
Access Control          File system permissions, coarse-grained              PostgreSQL fine-grained access control
Concurrent Editing      Requires file locking or merge conflicts             Database transactions naturally support concurrency
External Integration    Requires YAML parsing                                Standard SQL interface, easy integration with any language
Scalability             Difficult to maintain when file becomes too large    Scales to physical limits
Dynamic Generation      Static file, changes require manual application      Immediate effect, real-time configuration changes

Pigsty ships the CMDB schema as the baseline definition of the meta database in the default pg-meta cluster.


How It Works

The core idea of CMDB is to replace the static configuration file with a dynamic script. Ansible supports using executable scripts as inventory, as long as the script outputs inventory data in JSON format. When you enable CMDB, Pigsty creates a dynamic inventory script named inventory.sh:

#!/bin/bash
psql ${METADB_URL} -AXtwc 'SELECT text FROM pigsty.inventory;'

This script’s function is simple: every time Ansible needs to read the inventory, it queries configuration data from the PostgreSQL database’s pigsty.inventory view and returns it in JSON format.
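
Once bin/inventory_cmdb has generated this script, you can run it by hand to inspect the JSON it emits (assuming METADB_URL resolves to the meta database and jq is installed):

cd ~/pigsty
METADB_URL='service=meta' ./inventory.sh | jq 'keys'    # typically the group names plus the special _meta key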

The overall architecture is as follows:

flowchart LR
    conf["bin/inventory_conf"]
    tocmdb["bin/inventory_cmdb"]
    load["bin/inventory_load"]
    ansible["🚀 Ansible"]

    subgraph static["📄 Static Config Mode"]
        yml[("pigsty.yml")]
    end

    subgraph dynamic["🗄️ CMDB Dynamic Mode"]
        sh["inventory.sh"]
        cmdb[("PostgreSQL CMDB")]
    end

    conf -->|"switch"| yml
    yml -->|"load config"| load
    load -->|"write"| cmdb
    tocmdb -->|"switch"| sh
    sh --> cmdb

    yml --> ansible
    cmdb --> ansible

Data Model

The CMDB database schema is defined in files/cmdb.sql, with all objects in the pigsty schema.

Core Tables

Table                 Description                                                  Primary Key
pigsty.group          Cluster/group definitions, corresponds to Ansible groups     cls
pigsty.host           Host definitions, belongs to a group                         (cls, ip)
pigsty.global_var     Global variables, corresponds to all.vars                    key
pigsty.group_var      Group variables, corresponds to all.children.<cls>.vars      (cls, key)
pigsty.host_var       Host variables, host-level variables                         (cls, ip, key)
pigsty.default_var    Default variable definitions, stores parameter metadata      key
pigsty.job            Job records table, records executed tasks                    id

Table Structure Details

Cluster Table pigsty.group

CREATE TABLE pigsty.group (
    cls     TEXT PRIMARY KEY,        -- Cluster name, primary key
    ctime   TIMESTAMPTZ DEFAULT now(), -- Creation time
    mtime   TIMESTAMPTZ DEFAULT now()  -- Modification time
);

Host Table pigsty.host

CREATE TABLE pigsty.host (
    cls    TEXT NOT NULL REFERENCES pigsty.group(cls),  -- Parent cluster
    ip     INET NOT NULL,                               -- Host IP address
    ctime  TIMESTAMPTZ DEFAULT now(),
    mtime  TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, ip)
);

Global Variables Table pigsty.global_var

CREATE TABLE pigsty.global_var (
    key   TEXT PRIMARY KEY,           -- Variable name
    value JSONB NULL,                 -- Variable value (JSON format)
    mtime TIMESTAMPTZ DEFAULT now()   -- Modification time
);

Group Variables Table pigsty.group_var

CREATE TABLE pigsty.group_var (
    cls   TEXT NOT NULL REFERENCES pigsty.group(cls),
    key   TEXT NOT NULL,
    value JSONB NULL,
    mtime TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, key)
);

Host Variables Table pigsty.host_var

CREATE TABLE pigsty.host_var (
    cls   TEXT NOT NULL,
    ip    INET NOT NULL,
    key   TEXT NOT NULL,
    value JSONB NULL,
    mtime TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, ip, key),
    FOREIGN KEY (cls, ip) REFERENCES pigsty.host(cls, ip)
);

Core Views

CMDB provides a series of views for querying and displaying configuration data:

View                    Description
pigsty.inventory        Core view: Generates Ansible dynamic inventory JSON
pigsty.raw_config       Raw configuration in JSON format
pigsty.global_config    Global config view, merges defaults and global vars
pigsty.group_config     Group config view, includes host list and group vars
pigsty.host_config      Host config view, merges group and host-level vars
pigsty.pg_cluster       PostgreSQL cluster view
pigsty.pg_instance      PostgreSQL instance view
pigsty.pg_database      PostgreSQL database definition view
pigsty.pg_users         PostgreSQL user definition view
pigsty.pg_service       PostgreSQL service definition view
pigsty.pg_hba           PostgreSQL HBA rules view
pigsty.pg_remote        Remote PostgreSQL instance view

pigsty.inventory is the core view that converts database configuration data to the JSON format required by Ansible:

SELECT text FROM pigsty.inventory;

Utility Scripts

Pigsty provides three convenience scripts for managing CMDB:

Script                Function
bin/inventory_load    Load YAML configuration file into PostgreSQL database
bin/inventory_cmdb    Switch configuration source to CMDB (dynamic inventory script)
bin/inventory_conf    Switch configuration source to static config file pigsty.yml

inventory_load

Parse and import YAML configuration file into CMDB:

bin/inventory_load                     # Load default pigsty.yml to default CMDB
bin/inventory_load -p /path/to/conf.yml  # Specify configuration file path
bin/inventory_load -d "postgres://..."   # Specify database connection URL
bin/inventory_load -n myconfig           # Specify configuration name

The script performs the following operations:

  1. Clears existing data in the pigsty schema
  2. Parses the YAML configuration file
  3. Writes global variables to the global_var table
  4. Writes cluster definitions to the group table
  5. Writes cluster variables to the group_var table
  6. Writes host definitions to the host table
  7. Writes host variables to the host_var table

Environment Variables

  • PIGSTY_HOME: Pigsty installation directory, defaults to ~/pigsty
  • METADB_URL: Database connection URL, defaults to service=meta

inventory_cmdb

Switch Ansible to use CMDB as the configuration source:

bin/inventory_cmdb

The script performs the following operations:

  1. Creates dynamic inventory script ${PIGSTY_HOME}/inventory.sh
  2. Modifies ansible.cfg to set inventory to inventory.sh

The generated inventory.sh contents:

#!/bin/bash
psql ${METADB_URL} -AXtwc 'SELECT text FROM pigsty.inventory;'

inventory_conf

Switch back to using static YAML configuration file:

bin/inventory_conf

The script modifies ansible.cfg to set inventory back to pigsty.yml.


Usage Workflow

First-time CMDB Setup

  1. Initialize CMDB schema (usually done automatically during Pigsty installation):

     psql -f ~/pigsty/files/cmdb.sql

  2. Load configuration to database:

     bin/inventory_load

  3. Switch to CMDB mode:

     bin/inventory_cmdb

  4. Verify configuration:

     ansible all --list-hosts          # List all hosts
     ansible-inventory --list          # View complete inventory

Query Configuration

After enabling CMDB, you can flexibly query configuration using SQL:

-- View all clusters
SELECT cls FROM pigsty.group;

-- View all hosts in a cluster
SELECT ip FROM pigsty.host WHERE cls = 'pg-meta';

-- View global variables
SELECT key, value FROM pigsty.global_var;

-- View cluster variables
SELECT key, value FROM pigsty.group_var WHERE cls = 'pg-meta';

-- View all PostgreSQL clusters
SELECT cls, name, pg_databases, pg_users FROM pigsty.pg_cluster;

-- View all PostgreSQL instances
SELECT cls, ins, ip, seq, role FROM pigsty.pg_instance;

-- View all database definitions
SELECT cls, datname, owner, encoding FROM pigsty.pg_database;

-- View all user definitions
SELECT cls, name, login, superuser FROM pigsty.pg_users;

Modify Configuration

You can modify configuration directly via SQL:

-- Add new cluster
INSERT INTO pigsty.group (cls) VALUES ('pg-new');

-- Add cluster variable
INSERT INTO pigsty.group_var (cls, key, value)
VALUES ('pg-new', 'pg_cluster', '"pg-new"');

-- Add host
INSERT INTO pigsty.host (cls, ip) VALUES ('pg-new', '10.10.10.20');

-- Add host variables
INSERT INTO pigsty.host_var (cls, ip, key, value)
VALUES ('pg-new', '10.10.10.20', 'pg_seq', '1'),
       ('pg-new', '10.10.10.20', 'pg_role', '"primary"');

-- Modify global variable
UPDATE pigsty.global_var SET value = '"new-value"' WHERE key = 'some_param';

-- Delete cluster (cascades to hosts and variables)
DELETE FROM pigsty.group WHERE cls = 'pg-old';

Changes take effect immediately without reloading or restarting any service.

Switch Back to Static Configuration

To switch back to static configuration file mode:

bin/inventory_conf

Advanced Usage

Export Configuration

Export CMDB configuration to YAML format:

psql service=meta -AXtwc "SELECT jsonb_pretty(jsonb_build_object('all', jsonb_build_object('children', children, 'vars', vars))) FROM pigsty.raw_config;"

Or use the ansible-inventory command:

ansible-inventory --list --yaml > exported_config.yml

Configuration Auditing

Track configuration changes using the mtime field:

-- View recently modified global variables
SELECT key, value, mtime FROM pigsty.global_var
ORDER BY mtime DESC LIMIT 10;

-- View changes after a specific time
SELECT * FROM pigsty.group_var
WHERE mtime > '2024-01-01'::timestamptz;

Integration with External Systems

CMDB uses standard PostgreSQL, making it easy to integrate with other systems:

  • Web Management Interface: Expose configuration data through REST API (e.g., PostgREST)
  • CI/CD Pipelines: Read/write database directly in deployment scripts (see the sketch below)
  • Monitoring & Alerting: Generate monitoring rules based on configuration data
  • ITSM Systems: Sync with enterprise CMDB systems
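
For example, a deployment script could pull a cluster's host list straight from the CMDB instead of parsing YAML. A sketch using the tables described above (assumes the service=meta connection is available to the caller and the cluster name is illustrative):

# Fetch all host IPs of cluster pg-test from the CMDB
psql service=meta -AXtc "SELECT ip FROM pigsty.host WHERE cls = 'pg-test';"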

Considerations

  1. Data Consistency: After modifying configuration, you need to re-run the corresponding Ansible playbooks to apply changes to the actual environment

  2. Backup: Configuration data in CMDB is critical, ensure regular backups

  3. Permissions: Configure appropriate database access permissions for CMDB to avoid accidental modifications

  4. Transactions: When making batch configuration changes, perform them within a transaction for rollback on errors (see the sketch below)

  5. Connection Pooling: The inventory.sh script creates a new connection on each execution; if Ansible runs frequently, consider using connection pooling
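
For instance, a batch change can be wrapped in a single transaction so that a mistake can be rolled back before anything is committed. A sketch reusing the tables described earlier (cluster name and values are illustrative):

psql service=meta <<'SQL'
BEGIN;
INSERT INTO pigsty.group (cls) VALUES ('pg-batch');
INSERT INTO pigsty.group_var (cls, key, value) VALUES ('pg-batch', 'pg_cluster', '"pg-batch"');
-- inspect the result here; issue ROLLBACK instead of COMMIT to undo everything
COMMIT;
SQL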


Summary

CMDB is Pigsty’s advanced configuration management solution, suitable for scenarios requiring large-scale cluster management, complex queries, external integration, or fine-grained access control. By storing configuration data in PostgreSQL, you can fully leverage the database’s powerful capabilities to manage infrastructure configuration.

Feature              Description
Storage              PostgreSQL pigsty schema
Dynamic Inventory    inventory.sh script
Config Load          bin/inventory_load
Switch to CMDB       bin/inventory_cmdb
Switch to YAML       bin/inventory_conf
Core View            pigsty.inventory

4 - High Availability

Pigsty uses Patroni to implement PostgreSQL high availability, ensuring automatic failover when the primary becomes unavailable.

Overview

Pigsty’s PostgreSQL clusters come with out-of-the-box high availability, powered by Patroni, Etcd, and HAProxy.

When your PostgreSQL cluster has two or more instances, you automatically have self-healing database high availability without any additional configuration — as long as any instance in the cluster survives, the cluster can provide complete service. Clients only need to connect to any node in the cluster to get full service without worrying about primary-replica topology changes.

With default configuration, the primary failure Recovery Time Objective (RTO) ≈ 30s, and Recovery Point Objective (RPO) < 1MB; for replica failures, RPO = 0 and RTO ≈ 0 (brief interruption). In consistency-first mode, failover can guarantee zero data loss: RPO = 0. All these metrics can be configured as needed based on your actual hardware conditions and reliability requirements.

Pigsty includes built-in HAProxy load balancers for automatic traffic switching, providing DNS/VIP/LVS and other access methods for clients. Failover and switchover are almost transparent to the business side apart from a brief interruption: applications don't need to modify connection strings or restart. The minimal maintenance window requirements bring great flexibility and convenience: you can perform rolling maintenance and upgrades on the entire cluster without application coordination. And because hardware failures can wait until the next business day to be handled, developers, operations engineers, and DBAs can sleep soundly through incidents.

pigsty-ha

Many large organizations and core institutions have been using Pigsty in production for extended periods. The largest deployment has 25K CPU cores and 220+ PostgreSQL ultra-large instances (64c / 512g / 3TB NVMe SSD). In this deployment case, dozens of hardware failures and various incidents occurred over five years, yet overall availability of over 99.999% was maintained.


What problems does High Availability solve?

  • Raises the availability aspect of data security (the "A" in the C/I/A triad) to a new level: RPO ≈ 0, RTO < 30s.
  • Gains seamless rolling maintenance capability, minimizing maintenance window requirements and bringing great convenience.
  • Hardware failures can self-heal immediately without human intervention, allowing operations and DBAs to sleep well.
  • Replicas can handle read-only requests, offloading primary load and fully utilizing resources.

What are the costs of High Availability?

  • Infrastructure dependency: HA requires DCS (etcd/zk/consul) for consensus.
  • Higher starting threshold: A meaningful HA deployment requires at least three nodes.
  • Extra resource consumption: Each new replica consumes additional resources, though this is usually not a major concern.
  • Significantly increased complexity: backup and operational overhead grows considerably, requiring tooling to manage the added complexity.

Limitations of High Availability

Since replication happens in real time, all changes are immediately applied to replicas. Therefore, streaming replication-based HA solutions cannot protect against data deletion or modification caused by human error or software defects (e.g., DROP TABLE or an accidental DELETE). Such failures require delayed clusters or point-in-time recovery from previous base backups and WAL archives.

Configuration Strategy                           RTO                                             RPO
Standalone + Nothing                             Data permanently lost, unrecoverable            All data lost
Standalone + Base Backup                         Depends on backup size and bandwidth (hours)    Lose data since last backup (hours to days)
Standalone + Base Backup + WAL Archive           Depends on backup size and bandwidth (hours)    Lose unarchived data (tens of MB)
Primary-Replica + Manual Failover                ~10 minutes                                     Lose data in replication lag (~100KB)
Primary-Replica + Auto Failover                  Within 1 minute                                 Lose data in replication lag (~100KB)
Primary-Replica + Auto Failover + Sync Commit    Within 1 minute                                 No data loss

How It Works

In Pigsty, the high availability architecture works as follows:

  • PostgreSQL uses standard streaming replication to build physical replicas; replicas take over when the primary fails.
  • Patroni manages PostgreSQL server processes and handles high availability matters.
  • Etcd provides distributed configuration storage (DCS) capability and is used for leader election after failures.
  • Patroni relies on Etcd to reach cluster leader consensus and provides health check interfaces externally.
  • HAProxy exposes cluster services externally and uses Patroni health check interfaces to automatically distribute traffic to healthy nodes (see the example below).
  • vip-manager provides an optional Layer 2 VIP, retrieves leader information from Etcd, and binds the VIP to the node where the cluster primary resides.
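
You can query these Patroni health-check endpoints yourself to see exactly what HAProxy sees; a sketch assuming Patroni's REST API listens on its default port 8008 and that 10.10.10.11 is a cluster member:

curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.11:8008/primary   # 200 if this member is the primary, 503 otherwise
curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.11:8008/replica   # 200 if it is a healthy replica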

When the primary fails, a new round of leader election is triggered. The healthiest replica in the cluster (highest LSN position, minimum data loss) wins and is promoted to the new primary. After the winning replica is promoted, read-write traffic is immediately routed to the new primary. The impact of primary failure is brief write service unavailability: write requests will be blocked or fail directly from primary failure until new primary promotion, with unavailability typically lasting 15 to 30 seconds, usually not exceeding 1 minute.

When a replica fails, read-only traffic is routed to other replicas. Only when all replicas fail will read-only traffic ultimately be handled by the primary. The impact of replica failure is partial read-only query interruption: queries currently running on that replica will abort due to connection reset and be immediately taken over by other available replicas.

Failure detection is performed jointly by Patroni and Etcd. The cluster leader holds a lease; if the cluster leader fails to renew the lease in time (10s) due to failure, the lease is released, triggering a Failover and new cluster election.

Even without any failures, you can proactively change the cluster primary through Switchover. In this case, write queries on the primary will experience a brief interruption and be immediately routed to the new primary. This operation is typically used for rolling maintenance/upgrades of database servers.


Tradeoffs

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are two parameters that require careful tradeoffs when designing high availability clusters.

The default RTO and RPO values used by Pigsty meet reliability requirements for most scenarios. You can adjust them based on your hardware level, network quality, and business requirements.

The upper limit of unavailability during failover is controlled by the pg_rto parameter. RTO defaults to 30s. Increasing it will result in longer primary failure write unavailability, while decreasing it will increase the rate of false positive failovers (e.g., repeated switching due to brief network jitter).

The upper limit of potential data loss is controlled by the pg_rpo parameter, defaulting to 1MB. Reducing this value can lower the data loss ceiling during failover but also increases the probability of refusing automatic failover when replicas are not healthy enough (lagging too far behind).

Pigsty uses availability-first mode by default, meaning it will failover as quickly as possible when the primary fails, and data not yet replicated to replicas may be lost (under typical 10GbE networks, replication lag is usually a few KB to 100KB).

If you need to ensure zero data loss during failover, you can use the crit.yml template to ensure no data loss during failover, but this sacrifices some performance as a tradeoff.


pg_rto

Parameter name: pg_rto, Type: int, Level: C

Recovery Time Objective (RTO) in seconds. This is used to calculate Patroni’s TTL value, defaulting to 30 seconds.

If the primary instance is missing for this long, a new leader election will be triggered. This value is not always better when lower; it involves tradeoffs: Reducing this value can decrease unavailability during cluster failover (inability to write), but makes the cluster more sensitive to short-term network jitter, increasing the probability of false positive failover triggers. You need to configure this value based on network conditions and business constraints, making a tradeoff between failure probability and failure impact.

pg_rpo

Parameter name: pg_rpo, Type: int, Level: C

Recovery Point Objective (RPO) in bytes, default: 1048576.

Defaults to 1MiB, meaning up to 1MiB of data loss can be tolerated during failover.

When the primary goes down and all replicas are lagging, you must make a difficult choice: Either promote a replica to become the new primary immediately, accepting acceptable data loss (e.g., less than 1MB), and restore service as quickly as possible. Or wait for the primary to come back online (which may never happen) to avoid any data loss, or abandon automatic failover and wait for human intervention to make the final decision. You need to configure this value based on business preference, making a tradeoff between availability and consistency.

Additionally, you can always ensure RPO = 0 by enabling synchronous commit (e.g., using the crit.yml template), sacrificing some cluster latency/throughput performance to guarantee data consistency.
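
To see how much data is currently at stake under this setting, you can measure replication lag in bytes on the primary with a standard catalog query (run via psql as a monitoring or admin user):

psql -c "SELECT application_name, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes FROM pg_stat_replication;"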

4.1 - Tradeoffs

How to balance RTO and RPO parameters to achieve an HA solution that fits your business needs?

Overview

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are two parameters requiring careful tradeoffs when designing high availability clusters.

  • RTO (Recovery Time Objective) defines the maximum time needed to restore write capability when the primary fails.
  • RPO (Recovery Point Objective) defines the maximum amount of data that can be lost when the primary fails.

The default RTO and RPO values used by Pigsty meet reliability requirements for most scenarios. You can adjust them based on your hardware level, network quality, and business requirements.

The upper limit of unavailability during failover is controlled by the pg_rto parameter. RTO defaults to 30s. Increasing it results in longer primary failure write unavailability, while decreasing it increases false positive failover rates (e.g., repeated switching due to brief network jitter).

The upper limit of potential data loss is controlled by the pg_rpo parameter, defaulting to 1MB. Reducing this value lowers the data loss ceiling during failover but increases the probability of refusing automatic failover when replicas are not healthy enough (lagging too far).


Parameter Details

pg_rto

Recovery Time Objective (RTO) in seconds, defaults to 30 seconds, meaning the primary can be expected to restore write capability within 30 seconds after failure.

If the primary instance is missing for this long, a new leader election will be triggered. This value is not always better when lower; it involves tradeoffs: Reducing this value decreases cluster failover unavailability (inability to write) but makes the cluster more sensitive to short-term network jitter, increasing false positive failover triggers. You need to configure this value based on network conditions and business constraints, making a tradeoff between failure probability and failure impact.

pg_rpo

Recovery Point Objective (RPO) in bytes, default: 1048576 (1MiB), meaning up to 1MiB of data loss can be tolerated during failover.

When the primary goes down and all replicas are lagging, you must make a difficult choice: Either promote a replica immediately, accepting acceptable data loss (e.g., less than 1MB), and restore service quickly. Or wait for the primary to come back online (which may never happen) to avoid any data loss, or abandon automatic failover and wait for human intervention. You need to configure this value based on business preference, making a tradeoff between availability and consistency.

Additionally, you can always ensure RPO = 0 by enabling synchronous commit (e.g., using the crit.yml template), sacrificing cluster latency/throughput performance.


Protection Modes

For RPO (data loss) tradeoffs, you can reference Oracle Data Guard’s three protection mode design philosophy.

Oracle Data Guard provides three protection modes. PostgreSQL can also implement these three classic HA switchover strategies through configuration.

Maximum Performance

  • Default mode, using Patroni config templates other than crit.yml, Patroni synchronous_mode: false, sync replication disabled
  • Transaction commit only requires local WAL persistence, no replica wait, replica failures are completely transparent to primary
  • Primary failure may lose unsent/unreceived WAL (typically < 1MB, normally 10ms/100ms latency, 10KB/100KB magnitude under good network)
  • Optimized for performance, suitable for regular business scenarios that can tolerate minor data loss during failures

Maximum Availability

  • Pigsty uses crit.yml template with synchronous_mode: true + synchronous_mode_strict: false
  • Under normal conditions, waits for at least one replica to confirm, achieving zero data loss. When all sync replicas fail, automatically degrades to async mode to continue service
  • Balances data safety and service availability, recommended configuration for production core business

Maximum Protection

  • Pigsty uses crit.yml template with Patroni synchronous_mode: true and further configured to strict sync mode
  • Configure synchronous_mode_strict: true, when all sync replicas fail, primary refuses writes to prevent data loss
  • Can configure synchronous_commit: 'remote_apply' for read-write consistency
  • Transactions must be persisted on at least one replica before returning success, can specify sync replica list, configure more sync replicas for better disaster tolerance
  • Suitable for financial transactions, medical records, and other scenarios requiring extremely high data integrity

Dimension                 Max Performance                        Max Availability                       Max Protection
Name                      Maximum Performance                    Maximum Availability                   Maximum Protection
Replication               Async                                  Sync                                   Sync
Data Loss                 Possible (replication lag)             Zero normally, minor when degraded     Zero
Primary Write Latency     Lowest                                 Medium (+1 network round trip)         Medium (+1 network round trip)
Throughput                Highest                                Lower                                  Lower
Replica Failure Impact    None                                   Auto degrade, continue service         Primary stops writes
RPO                       < 1MB                                  = 0 (normal) / < 1MB (degraded)        = 0
Use Case                  Regular business, performance-first    Important business, safety-first       Financial core, compliance-first
Pigsty Config             Default                                pg_conf: crit.yml                      pg_conf: crit.yml + strict mode

The three modes differ in how Patroni’s two core parameters are configured: synchronous_mode and synchronous_mode_strict

  • synchronous_mode: Whether Patroni enables sync replication. If enabled, check synchronous_mode_strict for strict sync mode.
  • synchronous_mode_strict = false: Default, allows degrading to async mode when replicas fail, primary continues service (Maximum Availability)
  • synchronous_mode_strict = true: Disables degradation, primary stops writes until sync replicas recover (Maximum Protection)

Mode                synchronous_mode    synchronous_mode_strict    Replica Failure Behavior
Max Performance     false               -                          No impact
Max Availability    true                false                      Auto degrade to async
Max Protection      true                true                       Primary refuses writes

Pigsty uses availability-first mode by default, meaning it will failover as quickly as possible when the primary fails, and data not yet replicated to replicas may be lost (under typical 10GbE networks, replication lag is usually a few KB to 100KB). If you need zero data loss during failover, use the crit.yml template for consistency-first mode to ensure no data loss during failover, but this sacrifices some performance (latency/throughput).

Maximum Protection mode requires additional synchronous_mode_strict: true configuration, which causes the primary to stop write service when all sync replicas fail! Ensure you have at least two sync replicas, or accept this risk before enabling this mode.
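
To verify which mode a cluster is effectively running in, you can inspect the resulting PostgreSQL settings on the primary (standard settings, queried via psql):

psql -c "SHOW synchronous_commit;"            # e.g. on / remote_apply when synchronous replication is enforced
psql -c "SHOW synchronous_standby_names;"     # non-empty when Patroni has designated synchronous standbys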


Failover Timing

For RTO (recovery time) tradeoffs, analyze using the Patroni failure detection and switchover timing diagram below.

Recovery Time Objective (RTO) consists of multiple phases:

gantt
    title RTO Time Breakdown (Default config pg_rto=30s)
    dateFormat ss
    axisFormat %Ss

    section Failure Detection
    Patroni detect/stop lease renewal    :a1, 00, 10s

    section Election Phase
    Etcd lease expires           :a2, after a1, 2s
    Candidate election (compare LSN)    :a3, after a2, 3s

    section Promotion Phase
    Execute promote            :a4, after a3, 3s
    Update Etcd state          :a5, after a4, 2s

    section Traffic Switch
    HAProxy detects new primary      :a6, after a5, 5s
    HAProxy confirm (rise)     :a7, after a6, 3s
    Service restored                :milestone, after a7, 0s

Parameter    Impact                                       Tuning Advice
pg_rto       Baseline for TTL/loop_wait/retry_timeout     Can reduce to 15-20s on stable networks
ttl          Failure detection time window                = pg_rto
loop_wait    Patroni check interval                       = pg_rto / 3
inter        HAProxy health check interval                Can reduce to 1-2s
fall         Failure determination count                  Can reduce to 2
rise         Recovery determination count                 Can reduce to 2

When the primary fails, the system goes through these phases:

sequenceDiagram
    autonumber
    participant Primary as 🟢 Primary
    participant Patroni_P as Patroni (Primary)
    participant Etcd as 🟠 Etcd Cluster
    participant Patroni_R as Patroni (Replica)
    participant Replica as 🔵 Replica
    participant HAProxy as HAProxy

    Note over Primary: T=0s Primary failure occurs

    rect rgb(255, 235, 235)
        Note right of Primary: Failure Detection Phase (0-10s)
        Primary-x Patroni_P: Process crash
        Patroni_P--x Etcd: Stop lease renewal
        HAProxy--x Patroni_P: Health check fails
        Etcd->>Etcd: Lease countdown starts
    end

    rect rgb(255, 248, 225)
        Note right of Etcd: Election Phase (10-20s)
        Etcd->>Etcd: Lease expires, release leader lock
        Patroni_R->>Etcd: Check eligibility (LSN, replication lag)
        Etcd->>Patroni_R: Grant leader lock
    end

    rect rgb(232, 245, 233)
        Note right of Replica: Promotion Phase (20-30s)
        Patroni_R->>Replica: Execute PROMOTE
        Replica-->>Replica: Promoted to new primary
        Patroni_R->>Etcd: Update state
        HAProxy->>Patroni_R: Health check /primary
        Patroni_R-->>HAProxy: 200 OK
    end

    Note over HAProxy: T≈30s Service restored
    HAProxy->>Replica: Route write traffic to new primary

Key timing formula:

RTO ≈ TTL + Election_Time + Promote_Time + HAProxy_Detection

Where:
- TTL = pg_rto (default 30s)
- Election_Time ≈ 1-2s
- Promote_Time ≈ 1-5s
- HAProxy_Detection = fall × inter + rise × fastinter ≈ 12s

Actual RTO is typically 15-40s, depending on:
- Network latency
- Replica WAL replay progress
- PostgreSQL recovery speed

4.2 - Service Access

Pigsty uses HAProxy to provide service access, with optional pgBouncer for connection pooling, and optional L2 VIP and DNS access.

Split read and write operations, route traffic correctly, and deliver PostgreSQL cluster capabilities reliably.

Service is an abstraction: it represents the form in which database clusters expose their capabilities externally, encapsulating underlying cluster details.

Services are crucial for stable access in production environments, showing their value during automatic failover in high availability clusters. Personal users typically don’t need to worry about this concept.


Personal Users

The concept of “service” is for production environments. Personal users with single-node clusters can skip the complexity and directly use instance names or IP addresses to access the database.

For example, Pigsty’s default single-node pg-meta.meta database can be connected directly using three different users:

psql postgres://dbuser_dba:DBUser.DBA@10.10.10.10/meta     # Connect directly with DBA superuser
psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10/meta   # Connect with default business admin user
psql postgres://dbuser_view:DBUser.View@pg-meta/meta       # Connect with default read-only user via instance domain name

Service Overview

In real-world production environments, we use primary-replica database clusters based on replication. Within a cluster, one and only one instance serves as the leader (primary) that can accept writes. Other instances (replicas) continuously fetch change logs from the cluster leader to stay synchronized. Replicas can also handle read-only requests, significantly offloading the primary in read-heavy, write-light scenarios. Therefore, distinguishing write requests from read-only requests is a common practice.

Additionally, for production environments with high-frequency, short-lived connections, we pool requests through connection pool middleware (Pgbouncer) to reduce connection and backend process creation overhead. However, for scenarios like ETL and change execution, we need to bypass the connection pool and directly access the database. Meanwhile, high-availability clusters may undergo failover during failures, causing cluster leadership changes. Therefore, high-availability database solutions require write traffic to automatically adapt to cluster leadership changes. These varying access needs (read-write separation, pooled vs. direct connections, failover auto-adaptation) ultimately lead to the abstraction of the Service concept.

Typically, database clusters must provide this most basic service:

  • Read-write service (primary): Can read from and write to the database

For production database clusters, at least these two services should be provided:

  • Read-write service (primary): Write data: Can only be served by the primary.
  • Read-only service (replica): Read data: Can be served by replicas; falls back to primary when no replicas are available

Additionally, depending on specific business scenarios, there may be other services, such as:

  • Default direct service (default): Allows (admin) users to bypass the connection pool and directly access the database
  • Offline replica service (offline): Dedicated replica not serving online read traffic, used for ETL and analytical queries
  • Sync replica service (standby): Read-only service with no replication delay, handled by synchronous standby/primary for read queries
  • Delayed replica service (delayed): Access data from the same cluster as it was some time ago, handled by delayed replicas

Access Services

Pigsty’s service delivery boundary stops at the cluster’s HAProxy. Users can access these load balancers through various means.

The typical approach is to use DNS or VIP access, binding them to all or any number of load balancers in the cluster.

pigsty-access.jpg

You can use different host & port combinations, which provide PostgreSQL service in different ways.

Host

Type                   Sample         Description
Cluster Domain Name    pg-test        Access via cluster domain name (resolved by dnsmasq @ infra nodes)
Cluster VIP Address    10.10.10.3     Access via L2 VIP address managed by vip-manager, bound to primary node
Instance Hostname      pg-test-1      Access via any instance hostname (resolved by dnsmasq @ infra nodes)
Instance IP Address    10.10.10.11    Access any instance's IP address

Port

Pigsty uses different ports to distinguish pg services

Port    Service      Type          Description
5432    postgres     Database      Direct access to postgres server
6432    pgbouncer    Middleware    Access postgres through connection pool middleware
5433    primary      Service       Access primary pgbouncer (or postgres)
5434    replica      Service       Access replica pgbouncer (or postgres)
5436    default      Service       Access primary postgres
5438    offline      Service       Access offline postgres

Combinations

# Access via cluster domain
postgres://test@pg-test:5432/test # DNS -> L2 VIP -> primary direct connection
postgres://test@pg-test:6432/test # DNS -> L2 VIP -> primary connection pool -> primary
postgres://test@pg-test:5433/test # DNS -> L2 VIP -> HAProxy -> primary connection pool -> primary
postgres://test@pg-test:5434/test # DNS -> L2 VIP -> HAProxy -> replica connection pool -> replica
postgres://dbuser_dba@pg-test:5436/test # DNS -> L2 VIP -> HAProxy -> primary direct connection (for admin)
postgres://dbuser_stats@pg-test:5438/test # DNS -> L2 VIP -> HAProxy -> offline direct connection (for ETL/personal queries)

# Access via cluster VIP directly
postgres://test@10.10.10.3:5432/test # L2 VIP -> primary direct access
postgres://test@10.10.10.3:6432/test # L2 VIP -> primary connection pool -> primary
postgres://test@10.10.10.3:5433/test # L2 VIP -> HAProxy -> primary connection pool -> primary
postgres://test@10.10.10.3:5434/test # L2 VIP -> HAProxy -> replica connection pool -> replica
postgres://dbuser_dba@10.10.10.3:5436/test # L2 VIP -> HAProxy -> primary direct connection (for admin)
postgres://dbuser_stats@10.10.10.3:5438/test # L2 VIP -> HAProxy -> offline direct connection (for ETL/personal queries)

# Directly specify any cluster instance name
postgres://test@pg-test-1:5432/test # DNS -> database instance direct connection (singleton access)
postgres://test@pg-test-1:6432/test # DNS -> connection pool -> database
postgres://test@pg-test-1:5433/test # DNS -> HAProxy -> connection pool -> database read/write
postgres://test@pg-test-1:5434/test # DNS -> HAProxy -> connection pool -> database read-only
postgres://dbuser_dba@pg-test-1:5436/test # DNS -> HAProxy -> database direct connection
postgres://dbuser_stats@pg-test-1:5438/test # DNS -> HAProxy -> database offline read/write

# Directly specify any cluster instance IP access
postgres://[email protected]:5432/test        # Database instance direct connection (directly specify instance, no automatic traffic distribution)
postgres://[email protected]:6432/test        # Connection pool -> database
postgres://[email protected]:5433/test        # HAProxy -> connection pool -> database read/write
postgres://[email protected]:5434/test        # HAProxy -> connection pool -> database read-only
postgres://[email protected]:5436/test  # HAProxy -> database direct connection
postgres://[email protected]:5438/test # HAProxy -> database offline read-write

# Smart client: read/write separation via URL
postgres://[email protected]:6432,10.10.10.12:6432,10.10.10.13:6432/test?target_session_attrs=primary
postgres://[email protected]:6432,10.10.10.12:6432,10.10.10.13:6432/test?target_session_attrs=prefer-standby

4.3 - Failure Model

What failures can high availability solve, and what can’t it solve?

Failure Scenario Analysis

Single Node Failures

Primary Process Crash

Scenario: the PostgreSQL primary process is killed (e.g., kill -9) or crashes

flowchart LR
    subgraph Detection["🔍 Failure Detection"]
        D1["Patroni detects process gone"]
        D2["Attempts to restart PostgreSQL"]
        D3["Restart fails, stop lease renewal"]
        D1 --> D2 --> D3
    end

    subgraph Failover["🔄 Failover"]
        F1["Etcd lease expires (~10s)"]
        F2["Trigger election, latest replica wins"]
        F3["New primary promoted"]
        F4["HAProxy detects new primary"]
        F1 --> F2 --> F3 --> F4
    end

    subgraph Impact["📊 Impact"]
        I1["Write service down: 15-30s"]
        I2["Read service: brief interruption"]
        I3["Data loss: < 1MB or 0"]
    end

    Detection --> Failover --> Impact

    style D1 fill:#ffcdd2
    style F3 fill:#c8e6c9
    style I1 fill:#fff9c4

Patroni Process Failure

Scenario: Patroni process is killed or crashes

flowchart TB
    FAULT["Patroni process failure"]

    subgraph Detection["Failure Detection"]
        D1["Patroni stops lease renewal"]
        D2["PostgreSQL continues running<br/>(orphan state)"]
        D3["Etcd lease countdown"]
    end

    subgraph FailsafeOn["failsafe_mode: true"]
        FS1["Check if can access other Patroni"]
        FS2["✅ Can → Continue as primary"]
        FS3["❌ Cannot → Self-demote"]
    end

    subgraph FailsafeOff["failsafe_mode: false"]
        FF1["Trigger switchover after lease expires"]
        FF2["Original primary demotes"]
    end

    FAULT --> Detection
    Detection --> FailsafeOn
    Detection --> FailsafeOff

    style FAULT fill:#f44336,color:#fff
    style FS2 fill:#4CAF50,color:#fff
    style FS3 fill:#ff9800,color:#fff

Replica Failure

Scenario: Any replica node fails

Impact:

  • Read-only traffic redistributed to other replicas
  • If no other replicas, primary handles read-only traffic
  • ✅ Write service completely unaffected

Recovery:

  • Node recovery triggers Patroni auto-start
  • Auto-resync from primary
  • Restored as replica role

Multi-Node Failures

Two of Three Nodes Down (2/3 Failure)

Scenario: 3-node cluster, 2 nodes fail simultaneously

flowchart TB
    subgraph Analysis["Situation Analysis"]
        A1["Etcd loses majority (1/3 < 2/3)"]
        A2["Cannot perform leader election"]
        A3["Auto-switchover mechanism fails"]
    end

    subgraph Survivor["Surviving Node Status"]
        S1{"Surviving node is?"}
        S2["🟢 Primary<br/>Continues running under failsafe_mode"]
        S3["🔵 Replica<br/>Cannot auto-promote"]
    end

    A1 --> A2 --> A3 --> S1
    S1 -->|"Primary"| S2
    S1 -->|"Replica"| S3

    style A1 fill:#ffcdd2
    style S2 fill:#c8e6c9
    style S3 fill:#fff9c4

Emergency Recovery Procedure:

# 1. Confirm surviving node status
patronictl -c /etc/patroni/patroni.yml list

# 2. If surviving node is replica, manually promote
pg_ctl promote -D /pg/data

# 3. Or use pg-promote script
/pg/bin/pg-promote

# 4. Modify HAProxy config, point directly to surviving node
# Comment out health checks, hardcode routing

# 5. After recovering Etcd cluster, reinitialize

One of Two Nodes Down (1/2 Failure)

Scenario: 2-node cluster, primary fails

Problem:

  • Etcd has only 2 nodes, no majority
  • Cannot complete election
  • Replica cannot auto-promote

Solutions:

  1. Option 1: Add external Etcd arbiter node
  2. Option 2: Manual intervention to promote replica
  3. Option 3: Use Witness node

Manual Promotion Steps (a consolidated shell sketch follows this list):

  1. Confirm primary is truly unrecoverable
  2. Stop replica Patroni: systemctl stop patroni
  3. Manual promote: pg_ctl promote -D /pg/data
  4. Start PostgreSQL directly: systemctl start postgres
  5. Update application connection strings or HAProxy config
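
A consolidated sketch of the steps above, assuming Pigsty's default layout (data directory /pg/data, OS superuser postgres, Patroni and PostgreSQL managed as systemd units); adapt paths and hosts to your environment:

# on the surviving replica, after confirming the old primary is truly unrecoverable
systemctl stop patroni                          # take Patroni out of the picture
# note: if stopping Patroni also stops PostgreSQL, start it first: sudo -u postgres pg_ctl start -D /pg/data
sudo -u postgres pg_ctl promote -D /pg/data     # promote the replica to primary
systemctl start postgres                        # keep PostgreSQL running as a plain systemd service
# finally, repoint applications / HAProxy at the new primary and rebuild HA once Etcd is back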

Etcd Cluster Failures

Etcd Single Node Failure

Scenario: 3-node Etcd cluster, 1 node fails

Impact:

  • ✅ Etcd still has majority (2/3)
  • ✅ Service runs normally
  • ✅ PostgreSQL HA unaffected

Recovery:

  • Fix failed node
  • Use etcd-add to rejoin
  • Or replace with new node

Etcd Majority Lost

Scenario: 3-node Etcd cluster, 2 nodes fail

flowchart TB
    subgraph Impact["❌ Impact"]
        I1["Etcd cannot write"]
        I2["Patroni cannot renew lease"]
        I3["failsafe_mode activates"]
        I4["Cannot perform failover"]
    end

    subgraph PG["PostgreSQL Behavior"]
        P1["🟢 Primary: Continues running"]
        P2["🔵 Replica: Continues replicating"]
        P3["✅ New writes can continue"]
    end

    subgraph Limit["⚠️ Limitations"]
        L1["Cannot switchover"]
        L2["Cannot failover"]
        L3["Config changes cannot take effect"]
    end

    Impact --> PG --> Limit

    style I1 fill:#ffcdd2
    style P1 fill:#c8e6c9
    style L1 fill:#fff9c4

Recovery Priority:

  1. Restore Etcd majority
  2. Verify PostgreSQL status
  3. Check Patroni lease renewal

Network Partition

Primary Network Isolation

Scenario: Primary cannot reach Etcd/other nodes

flowchart LR
    subgraph Isolated["🔒 Isolated Side (Primary)"]
        P1["Primary"]
        CHECK{"failsafe_mode<br/>check"}
        CONT["Continue running"]
        DEMOTE["Self-demote"]

        P1 --> CHECK
        CHECK -->|"Can access other Patroni"| CONT
        CHECK -->|"Cannot access"| DEMOTE
    end

    subgraph Majority["✅ Majority Side"]
        E[("Etcd")]
        P2["Replica"]
        ELECT["Trigger election"]
        NEWPRI["New primary emerges"]

        E --> ELECT --> P2 --> NEWPRI
    end

    Isolated -.->|"Network partition"| Majority

    style P1 fill:#ff9800,color:#fff
    style DEMOTE fill:#f44336,color:#fff
    style NEWPRI fill:#4CAF50,color:#fff

Split-brain Protection:

  • Patroni failsafe_mode
  • Old primary self-detection
  • Fencing (optional)
  • Watchdog (optional)

Watchdog Mechanism

For protection in extreme scenarios:

watchdog:
  mode: automatic                     # off|automatic|required
  device: /dev/watchdog
  safety_margin: 5                    # Safety margin (seconds)

How it works:

  • Patroni periodically writes to watchdog device
  • If Patroni unresponsive, kernel triggers restart
  • Ensures old primary doesn’t continue serving
  • Prevents severe split-brain scenarios

Best Practices

Production Environment Checklist

Infrastructure:

  • At least 3 nodes (PostgreSQL)
  • At least 3 nodes (Etcd, can share with PG)
  • Nodes distributed across different failure domains (racks/AZs)
  • Network latency < 10ms (same city) or < 50ms (cross-region)
  • 10GbE network (recommended)

Parameter Configuration:

  • pg_rto: adjust based on network conditions (15-60s)
  • pg_rpo: set based on business requirements (0 or 1MB)
  • pg_conf: choose an appropriate template (oltp/crit)
  • patroni_watchdog_mode: evaluate whether it is necessary

Monitoring Alerts:

  • Patroni status monitoring (leader/replication lag)
  • Etcd cluster health monitoring
  • Replication lag alerts (lag > 1MB)
  • failsafe_mode activation alerts

Disaster Recovery Drills:

  • Regularly execute failover/switchover drills (see the sketch after this list)
  • Verify RTO/RPO meets expectations
  • Test backup recovery procedures
  • Verify monitoring alert effectiveness
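
A minimal sketch of a planned switchover drill using patronictl, with the pg-test cluster used throughout this document; run it from any cluster member and follow the interactive prompts:

patronictl -c /etc/patroni/patroni.yml list pg-test         # inspect the current topology
patronictl -c /etc/patroni/patroni.yml switchover pg-test   # interactively pick the new leader and confirm
patronictl -c /etc/patroni/patroni.yml list pg-test         # verify the new primary and replication lag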

Common Troubleshooting

Failover Failures:

# Check Patroni status
patronictl -c /etc/patroni/patroni.yml list

# Check Etcd cluster health
etcdctl endpoint health

# Check replication lag
psql -c "SELECT * FROM pg_stat_replication"

# View Patroni logs
journalctl -u patroni -f

Split-brain Handling:

# 1. Confirm which is the "true" primary
psql -c "SELECT pg_is_in_recovery()"

# 2. Stop the "wrong" primary
systemctl stop patroni

# 3. Use pg_rewind to sync
pg_rewind --target-pgdata=/pg/data --source-server="host=<true_primary>"

# 4. Restart Patroni
systemctl start patroni

4.4 - HA Cluster

In-depth introduction to PostgreSQL high availability cluster architecture design, component interaction, failure scenarios, and recovery mechanisms in Pigsty.

Pigsty’s PostgreSQL clusters come with out-of-the-box high availability, powered by Patroni, Etcd, and HAProxy.

When your PostgreSQL cluster has two or more instances, you automatically have self-healing database high availability without any additional configuration — as long as any instance survives, the cluster can provide complete service. Clients only need to connect to any node to get full service without worrying about primary-replica topology changes.

With default configuration, the primary failure Recovery Time Objective (RTO) ≈ 30s, and Recovery Point Objective (RPO) < 1MB; for replica failures, RPO = 0 and RTO ≈ 0 (brief interruption). In consistency-first mode, failover can guarantee zero data loss: RPO = 0. All metrics can be configured as needed based on your hardware conditions and reliability requirements.
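
These targets are controlled by the pg_rto and pg_rpo parameters described later in this section; a hedged sketch of a consistency-first cluster definition (IPs and values illustrative):

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test
    pg_conf: crit.yml        # template with synchronous replication enabled
    pg_rpo: 0                # zero data loss on failover (sync replication)
    pg_rto: 30               # keep the default 30s failure detection window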

Pigsty includes built-in HAProxy load balancers for automatic traffic switching, providing DNS/VIP/LVS and other access methods. Failover and switchover are almost transparent to the business apart from a brief interruption: applications don’t need to modify connection strings or restart. Minimal maintenance-window requirements bring great flexibility: you can perform rolling maintenance on the entire cluster without application coordination. Hardware failures can wait until the next day to be handled, letting developers, operations, and DBAs sleep well during incidents.

Many large organizations have been using Pigsty in production for extended periods. The largest deployment has 25K CPU cores and 220+ PostgreSQL ultra-large instances (64c / 512g / 3TB NVMe SSD). Over five years with dozens of hardware failures, overall availability of over 99.999% was maintained.


Architecture Overview

Pigsty’s HA architecture consists of four core components working together to achieve automatic failure detection, leader election, and traffic switching:

flowchart TB
    subgraph Client["🖥️ Client Access Layer"]
        C[("Client")]
        ACCESS["DNS / VIP / HAProxy / L4 LVS"]
    end

    subgraph Node1["📦 Node 1"]
        HAP1["HAProxy :9101<br/>Primary :5433 | Replica :5434"]
        subgraph Stack1["Patroni :8008"]
            PG1[("PostgreSQL<br/>[Primary] :5432")]
            PGB1["PgBouncer :6432"]
        end
    end

    subgraph Node2["📦 Node 2"]
        HAP2["HAProxy :9101<br/>Primary :5433 | Replica :5434"]
        subgraph Stack2["Patroni :8008"]
            PG2[("PostgreSQL<br/>[Replica] :5432")]
            PGB2["PgBouncer :6432"]
        end
    end

    subgraph Node3["📦 Node 3"]
        HAP3["HAProxy :9101<br/>Primary :5433 | Replica :5434"]
        subgraph Stack3["Patroni :8008"]
            PG3[("PostgreSQL<br/>[Replica] :5432")]
            PGB3["PgBouncer :6432"]
        end
    end

    subgraph ETCD["🔐 Etcd Cluster (Raft Consensus)"]
        E1[("Etcd-1<br/>:2379")]
        E2[("Etcd-2<br/>:2379")]
        E3[("Etcd-3<br/>:2379")]
    end

    C --> ACCESS
    ACCESS --> HAP1 & HAP2 & HAP3

    HAP1 -.->|"HTTP Health Check"| Stack1
    HAP2 -.->|"HTTP Health Check"| Stack2
    HAP3 -.->|"HTTP Health Check"| Stack3

    HAP1 --> PGB1
    HAP2 --> PGB2
    HAP3 --> PGB3

    PG1 ==>|"Streaming Replication"| PG2
    PG1 ==>|"Streaming Replication"| PG3

    Stack1 <-->|"Leader Lease"| ETCD
    Stack2 <-->|"Leader Lease"| ETCD
    Stack3 <-->|"Leader Lease"| ETCD

    E1 <--> E2 <--> E3
    E1 <--> E3

    style PG1 fill:#4CAF50,color:#fff
    style PG2 fill:#2196F3,color:#fff
    style PG3 fill:#2196F3,color:#fff
    style ETCD fill:#FF9800,color:#fff

Component Details

PostgreSQL

PostgreSQL is the core database service, using standard Streaming Replication to build physical replicas:

  • Primary: Accepts read-write requests, generates WAL logs
  • Replica: Receives WAL in real-time via streaming replication, provides read-only queries
  • Replication Slot: Ensures WAL isn’t cleaned up prematurely
  • Synchronous Commit: Optional sync replication mode ensuring RPO = 0

Key configuration (dynamically managed by Patroni):

wal_level: logical                    # Enable logical replication level
max_wal_senders: 50                   # Max WAL sender processes
max_replication_slots: 50             # Max replication slots
hot_standby: on                       # Replica readable
wal_log_hints: on                     # Support pg_rewind
track_commit_timestamp: on            # Track transaction timestamps
synchronous_standby_names: ''         # Sync replica list (dynamically managed)

Patroni

Patroni is the core HA engine, managing PostgreSQL lifecycle and cluster state:

Core Responsibilities:

  • Manage PostgreSQL process start/stop and configuration
  • Maintain Leader Lease
  • Execute automatic Failover and manual Switchover
  • Provide REST API for health checks and cluster management
  • Handle replica auto-rebuild and pg_rewind

Key Timing Parameters (control RTO):

Parameter             | Default | Description
----------------------|---------|------------------------------------------------------------
ttl                   | 30s     | Leader lease validity, i.e., failure detection time window
loop_wait             | 10s     | Patroni main loop interval
retry_timeout         | 10s     | DCS and PostgreSQL operation retry timeout
primary_start_timeout | 10s     | Primary startup timeout
primary_stop_timeout  | 30s     | Primary graceful stop timeout (effective in sync mode)

These parameters are derived from pg_rto; the default 30s RTO corresponds to:

ttl: 30                               # Leader lease TTL
loop_wait: 10                         # Main loop interval = RTO/3
retry_timeout: 10                     # Retry timeout = RTO/3
primary_start_timeout: 10             # Primary startup timeout = RTO/3

Constraint: ttl >= loop_wait + retry_timeout * 2 (with the defaults above: 30 >= 10 + 10 * 2 = 30, satisfied exactly)

Health Check Endpoints (for HAProxy):

Endpoint   | Purpose           | Returns 200 When
-----------|-------------------|---------------------------------------
/primary   | Primary service   | Current node is Leader
/replica   | Replica service   | Current node is Replica
/read-only | Read-only service | Node is readable (primary or replica)
/health    | Health check      | PostgreSQL running normally
/leader    | Leader check      | Holds leader lock
/async     | Async replica     | Async replication replica
/sync      | Sync replica      | Sync replication replica
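
Since these are plain HTTP endpoints on Patroni's REST API port (8008 in the architecture above), you can probe them directly; a quick sketch with an illustrative instance IP:

curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.11:8008/primary   # 200 if this node is the leader
curl -s -o /dev/null -w '%{http_code}\n' http://10.10.10.11:8008/replica   # 200 if this node is a replica
curl -s http://10.10.10.11:8008/patroni                                    # full JSON status of the local member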

Etcd

Etcd serves as the distributed configuration store (DCS), providing cluster consensus:

Core Responsibilities:

  • Store cluster configuration and state
  • Provide atomic operations for leader election
  • Implement failure detection via lease mechanism
  • Store PostgreSQL dynamic configuration

Storage Structure (using /pg namespace as example):

/pg/
├── <cluster_name>/
│   ├── leader          # Current leader identifier
│   ├── config          # Cluster config (DCS config)
│   ├── history         # Failover history
│   ├── initialize      # Cluster initialization marker
│   ├── members/        # Member info directory
│   │   ├── pg-test-1   # Instance 1 metadata
│   │   ├── pg-test-2   # Instance 2 metadata
│   │   └── pg-test-3   # Instance 3 metadata
│   └── sync            # Sync replica state
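
You can inspect these keys directly with etcdctl (v3 API); a sketch assuming the /pg namespace and the pg-test cluster shown above, with endpoint and TLS flags adjusted to your deployment:

# adjust --endpoints and TLS options (--cacert/--cert/--key) to match your etcd setup
etcdctl --endpoints=https://10.10.10.11:2379 get /pg/pg-test/leader --print-value-only   # current leader, e.g. pg-test-1
etcdctl --endpoints=https://10.10.10.11:2379 get /pg/pg-test/ --prefix --keys-only       # list all keys of the cluster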

Key Configuration:

election_timeout: 1000ms              # Election timeout (affects Etcd's own HA)
heartbeat_interval: 100ms             # Heartbeat interval
quota_backend_bytes: 16GB             # Storage quota
auto_compaction_mode: periodic        # Auto compaction
auto_compaction_retention: 24h        # Retain 24h history

Etcd Cluster Requirements:

  • Must have an odd number of nodes (3, 5, 7) to ensure a majority quorum
  • Recommended to deploy independently on admin nodes, separate from PostgreSQL nodes
  • Network latency should stay under 10ms

HAProxy

HAProxy handles service discovery and traffic distribution:

Core Responsibilities:

  • Discover primary/replica roles via HTTP health checks
  • Route traffic to correct backend nodes
  • Provide load balancing and connection pooling
  • Implement automatic service failover

Default Service Definitions:

Service | Port | Target    | Health Check | Purpose
--------|------|-----------|--------------|-----------------------------------------
primary | 5433 | pgbouncer | /primary     | Read-write service, routes to primary
replica | 5434 | pgbouncer | /read-only   | Read-only service, prefers replicas
default | 5436 | postgres  | /primary     | Direct primary connection (bypass pool)
offline | 5438 | postgres  | /replica     | Offline replica (ETL/backup)
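
These built-in services come from the pg_default_services parameter; extra per-cluster services can be declared with pg_services. A hedged sketch of what a custom synchronous-standby service might look like (field names follow Pigsty's service definition convention, values illustrative):

pg_services:                              # additional services for this cluster
  - name: standby                         # exposed as <cluster>-standby
    port: 5435                            # listen port on every HAProxy
    dest: default                         # route to pgbouncer (default) or a postgres port
    check: /sync                          # Patroni health check endpoint to use
    selector: "[]"                        # which members may serve it (all, filtered by the check)
    backup: "[? pg_role == `primary` ]"   # fall back to the primary when no sync standby is available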

Health Check Configuration:

listen pg-test-primary
    bind *:5433
    mode tcp
    option httpchk
    http-check send meth OPTIONS uri /primary
    http-check expect status 200
    default-server inter 3s fastinter 1s downinter 5s rise 3 fall 3
                   on-marked-down shutdown-sessions slowstart 30s
                   maxconn 3000 maxqueue 128 weight 100
    server pg-test-1 10.10.10.11:6432 check port 8008
    server pg-test-2 10.10.10.12:6432 check port 8008 backup
    server pg-test-3 10.10.10.13:6432 check port 8008 backup

Health Check Timing Parameters (affect RTO sensitivity):

Parameter | Default | Description
----------|---------|-------------------------------------------
inter     | 3s      | Normal check interval
fastinter | 1s      | Fast check interval after state change
downinter | 5s      | Check interval after node down
rise      | 3       | Consecutive successes needed for recovery
fall      | 3       | Consecutive failures needed for down

Traffic Switch Timing (primary failure):

  • Failure detection: fall × inter = 3 × 3s = 9s
  • Fast probe: Once anomaly detected, switch to fastinter (1s)
  • Service recovery: After new primary promotion, rise × fastinter = 3 × 1s = 3s

VIP Manager (Optional)

vip-manager provides optional Layer 2 VIP support:

How it works:

  1. Watches leader key in Etcd (/pg/<cluster>/leader)
  2. When this node becomes leader, binds VIP to specified interface
  3. Sends gratuitous ARP to notify network devices to update MAC mapping
  4. When losing leader status, unbinds VIP

Configuration Example:

interval: 1000                        # Check interval (milliseconds)
trigger-key: "/pg/pg-test/leader"     # Etcd key to watch
trigger-value: "pg-test-1"            # Leader value to match
ip: 10.10.10.100                      # VIP address
netmask: 24                           # Subnet mask
interface: eth0                       # Binding interface
dcs-type: etcd                        # DCS type
retry-num: 2                          # Retry count
retry-after: 250                      # Retry interval (milliseconds)

Limitations:

  • Requires all nodes on the same Layer 2 network
  • Cloud environments typically don’t support L2 VIP; use a cloud provider VIP or DNS solution instead
  • Switch time is approximately 1-2 seconds

HA Deployment Modes

Three-Node Standard Mode

The recommended production deployment mode, providing complete automatic failover capability:

flowchart TB
    subgraph Cluster["🏢 Three-Node HA Architecture"]
        direction TB

        subgraph Node1["Node 1"]
            E1[("Etcd")]
            H1["HAProxy"]
            P1["Patroni + PostgreSQL<br/>🟢 Primary"]
        end

        subgraph Node2["Node 2"]
            E2[("Etcd")]
            H2["HAProxy"]
            P2["Patroni + PostgreSQL<br/>🔵 Replica"]
        end

        subgraph Node3["Node 3"]
            E3[("Etcd")]
            H3["HAProxy"]
            P3["Patroni + PostgreSQL<br/>🔵 Replica"]
        end
    end

    E1 <-->|"Raft"| E2
    E2 <-->|"Raft"| E3
    E1 <-->|"Raft"| E3

    P1 ==>|"Replication"| P2
    P1 ==>|"Replication"| P3

    style P1 fill:#4CAF50,color:#fff
    style P2 fill:#2196F3,color:#fff
    style P3 fill:#2196F3,color:#fff
    style E1 fill:#FF9800,color:#fff
    style E2 fill:#FF9800,color:#fff
    style E3 fill:#FF9800,color:#fff

Fault Tolerance:

  • ✅ Any 1 node failure: Auto-switch, service continues
  • ⚠️ 2 node failures: Requires manual intervention

Configuration Example:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test

Five-Node Enhanced Mode

For higher availability requirements, tolerates 2 node failures:

flowchart LR
    subgraph Cluster["🏛️ Five-Node HA Architecture"]
        direction TB

        subgraph Row1[""]
            direction LR
            N1["Node 1<br/>Etcd + 🟢 Primary"]
            N2["Node 2<br/>Etcd + 🔵 Replica"]
            N3["Node 3<br/>Etcd + 🔵 Replica"]
            N4["Node 4<br/>Etcd + 🔵 Replica"]
            N5["Node 5<br/>Etcd + 🔵 Replica"]
        end
    end

    N1 ==> N2 & N3 & N4 & N5

    N1 <-.->|"Etcd Raft"| N2
    N2 <-.->|"Etcd Raft"| N3
    N3 <-.->|"Etcd Raft"| N4
    N4 <-.->|"Etcd Raft"| N5

    style N1 fill:#4CAF50,color:#fff
    style N2 fill:#2196F3,color:#fff
    style N3 fill:#2196F3,color:#fff
    style N4 fill:#2196F3,color:#fff
    style N5 fill:#2196F3,color:#fff

Etcd Quorum: 3/5 majority | PostgreSQL: 1 primary, 4 replicas

Fault Tolerance:

  • ✅ Any 2 node failures: Auto-switch
  • ⚠️ 3 node failures: Requires manual intervention

Use Cases:

  • Financial core systems
  • Cross-datacenter deployment (2+2+1 distribution)
  • Scenarios requiring dedicated offline replicas
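
By analogy with the three-node configuration example above, a five-node cluster could be declared as follows (IPs illustrative; the fifth member is a dedicated offline replica):

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
    10.10.10.14: { pg_seq: 4, pg_role: replica }
    10.10.10.15: { pg_seq: 5, pg_role: offline }   # serves ETL / analytical queries only
  vars:
    pg_cluster: pg-test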

Two-Node Semi-HA Mode

A compromise when resources are limited, providing only limited auto-switchover capability:

flowchart TB
    subgraph Cluster["⚠️ Two-Node Semi-HA Architecture"]
        direction LR

        subgraph Node1["Node 1 (Infra)"]
            E1[("Etcd")]
            H1["HAProxy"]
            P1["Patroni + PostgreSQL<br/>🟢 Primary"]
        end

        subgraph Node2["Node 2"]
            E2[("Etcd")]
            H2["HAProxy"]
            P2["Patroni + PostgreSQL<br/>🔵 Replica"]
        end

        subgraph Arbiter["❓ Needs Arbiter"]
            E3[("Etcd<br/>(External)")]
        end
    end

    E1 <-->|"Cannot form majority"| E2
    E1 <-.-> E3
    E2 <-.-> E3

    P1 ==>|"Replication"| P2

    style P1 fill:#4CAF50,color:#fff
    style P2 fill:#2196F3,color:#fff
    style E1 fill:#FF9800,color:#fff
    style E2 fill:#FF9800,color:#fff
    style E3 fill:#9E9E9E,color:#fff,stroke-dasharray: 5 5

Problem: Etcd has only 2 nodes, cannot form majority

Solutions:

  1. Add 3rd external Etcd node (pure arbiter)
  2. Use failsafe_mode to prevent split-brain
  3. Accept asymmetric failover

Asymmetric Failover:

  • Replica failure: ✅ Auto-handled, primary continues service
  • Primary failure: ⚠️ Requires manual intervention (cannot auto-elect)

Configuration Advice:

# Two-node tuning: reduce false-positive switchovers (also consider Patroni's failsafe_mode to prevent split-brain)
patroni_watchdog_mode: off            # Disable watchdog
pg_rto: 60                            # Increase RTO to reduce false positives

pg_rto

Parameter name: pg_rto, Type: int, Level: C

Recovery Time Objective (RTO) in seconds. Defaults to 30 seconds.

This parameter derives Patroni’s key timing parameters:

  • ttl = pg_rto
  • loop_wait = pg_rto / 3
  • retry_timeout = pg_rto / 3
  • primary_start_timeout = pg_rto / 3

Reducing this value speeds up failure recovery but increases false positive switch risk.

pg_rpo

Parameter name: pg_rpo, Type: int, Level: C

Recovery Point Objective (RPO) in bytes, defaults to 1048576 (1MB).

  • Set to 0 to enable sync replication, ensuring zero data loss
  • Set to larger value to allow more replication lag, improving availability
  • This value is also used for maximum_lag_on_failover parameter

pg_conf

Parameter name: pg_conf, Type: string, Level: C

Patroni config template, defaults to oltp.yml. Options:

Template | Purpose          | Sync Replication | Use Case
---------|------------------|------------------|---------------------------
oltp.yml | OLTP workload    | No               | Regular business systems
olap.yml | OLAP workload    | No               | Analytical applications
crit.yml | Critical systems | Yes              | Financial core systems
tiny.yml | Tiny instances   | No               | Dev/test environments

patroni_watchdog_mode

Parameter name: patroni_watchdog_mode, Type: string, Level: C

Watchdog mode, defaults to off. Options:

  • off: Disable watchdog
  • automatic: Use if available
  • required: Must use, refuse to start otherwise

Watchdog ensures node self-restart in extreme scenarios (e.g., Patroni hang), preventing split-brain.

pg_vip_enabled

Parameter name: pg_vip_enabled, Type: bool, Level: C

Whether to enable L2 VIP, defaults to false.

When enabled, configure:

  • pg_vip_address: VIP address (CIDR format)
  • pg_vip_interface: Binding interface

Note: Cloud environments typically don’t support L2 VIP.



5 - Point-in-Time Recovery

Pigsty uses pgBackRest to implement PostgreSQL point-in-time recovery, allowing users to roll back to any point in time within the backup policy window.

Overview

You can restore and roll back your cluster to any point in the past, avoiding data loss caused by software defects and human errors.

Pigsty’s PostgreSQL clusters come with auto-configured Point-in-Time Recovery (PITR) capability, powered by the backup component pgBackRest and optional object storage repository MinIO.

High availability solutions can address hardware failures but are powerless against data deletion/overwriting/database drops caused by software defects and human errors. For such situations, Pigsty provides out-of-the-box Point-in-Time Recovery (PITR) capability, enabled by default without additional configuration.

Pigsty provides default configurations for base backups and WAL archiving. You can use local directories and disks, or dedicated MinIO clusters or S3 object storage services, to store backups and achieve geo-redundant disaster recovery. When using local disks, the default configuration retains the ability to recover to any point within the past day. When using MinIO or S3, the default configuration retains the ability to recover to any point within the past week. As long as storage space (and budget) permits, you can retain an arbitrarily long recoverable time window.


What problems does PITR solve?

  • Enhanced disaster recovery: RPO drops from ∞ to tens of MB, RTO drops from ∞ to hours/minutes.
  • Ensures data security: Data integrity in C/I/A: avoids data consistency issues caused by accidental deletion.
  • Ensures data security: Data availability in C/I/A: provides a fallback for “permanently unavailable” disaster scenarios.

Standalone Configuration Strategy | Event | RTO                                          | RPO
----------------------------------|-------|----------------------------------------------|---------------------------------------------
Nothing                           | Crash | Permanently lost                             | All lost
Base Backup                       | Crash | Depends on backup size and bandwidth (hours) | Lose data since last backup (hours to days)
Base Backup + WAL Archive         | Crash | Depends on backup size and bandwidth (hours) | Lose unarchived data (tens of MB)

What are the costs of PITR?

  • Reduces the C (Confidentiality) in data security: backups create additional leak points and require additional protection.
  • Extra resource consumption: Local storage or network traffic/bandwidth overhead, usually not a concern.
  • Increased complexity: Users need to pay backup management costs.

Limitations of PITR

If only PITR is used for failure recovery, RTO and RPO metrics are inferior compared to high availability solutions, and typically both should be used together.

  • RTO: With only standalone + PITR, recovery time depends on backup size and network/disk bandwidth, ranging from tens of minutes to hours or days.
  • RPO: With only standalone + PITR, some data may be lost during crashes - one or several WAL segment files may not yet be archived, losing 16 MB to tens of MB of data.

Besides PITR, you can also use delayed clusters in Pigsty to address data deletion/modification caused by human errors or software defects.


How It Works

Point-in-time recovery allows you to restore and roll back your cluster to “any point” in the past, avoiding data loss caused by software defects and human errors. To achieve this, two preparations are needed: Base Backup and WAL Archiving. Having a base backup allows users to restore the database to its state at backup time, while having WAL archives starting from a base backup allows users to restore the database to any point after the base backup time.

fig-10-02.png

For specific operations, refer to PGSQL Admin: Backup and Recovery.

Base Backup

Pigsty uses pgBackRest to manage PostgreSQL backups. pgBackRest initializes empty repositories on all cluster instances but only actually uses the repository on the cluster primary.

pgBackRest supports three backup modes: full backup, incremental backup, and differential backup, with the first two being most commonly used. A full backup takes a complete physical snapshot of the database cluster at the current moment; an incremental backup records only the differences relative to the previous backup.

Pigsty provides a wrapper command for backups: /pg/bin/pg-backup [full|incr]. You can schedule regular base backups as needed through Crontab or any other task scheduling system.
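
A hedged sketch of scheduling backups through the node_crontab parameter (the same parameter mentioned below for disabling backups); each entry is an ordinary /etc/crontab line, times illustrative:

pg-test:
  vars:
    node_crontab:                                        # run a full backup at 01:00 every day as postgres
      - '00 01 * * * postgres /pg/bin/pg-backup full'
      # or: weekly full backup plus daily incremental backups
      # - '00 01 * * 1 postgres /pg/bin/pg-backup full'
      # - '00 01 * * 2,3,4,5,6,0 postgres /pg/bin/pg-backup incr'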

WAL Archiving

Pigsty enables WAL archiving on the cluster primary by default and uses the pgbackrest command-line tool to continuously push WAL segment files to the backup repository.

pgBackRest automatically manages required WAL files and timely cleans up expired backups and their corresponding WAL archive files based on the backup retention policy.

If you don’t need PITR functionality, you can disable WAL archiving by setting archive_mode: off in the cluster configuration, and remove the node_crontab entries to stop scheduled backup tasks.


Implementation

By default, Pigsty provides two preset backup strategies: The default uses local filesystem backup repository, performing one full backup daily to ensure users can roll back to any point within the past day. The alternative strategy uses dedicated MinIO clusters or S3 storage for backups, with weekly full backups, daily incremental backups, and two weeks of backup and WAL archive retention by default.

Pigsty uses pgBackRest to manage backups, receive WAL archives, and perform PITR. Backup repositories can be flexibly configured (pgbackrest_repo): defaults to primary’s local filesystem (local), but can also use other disk paths, or the included optional MinIO service (minio) and cloud S3 services.

pgbackrest_enabled: true          # enable pgBackRest on pgsql host?
pgbackrest_clean: true            # remove pg backup data during init?
pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
pgbackrest_method: local          # pgbackrest repo method: local, minio, [user-defined...]
pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  local:                          # default pgbackrest repo with local posix fs
    path: /pg/backup              # local backup directory, `/pg/backup` by default
    retention_full_type: count    # retention full backup by count
    retention_full: 2             # keep at most 3 full backup, at least 2, when using local fs repo
  minio:                          # optional minio repo for pgbackrest
    type: s3                      # minio is s3-compatible, so use s3
    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1          # minio region, us-east-1 by default, not used for minio
    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    s3_key: pgbackrest            # minio user access key for pgbackrest
    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    s3_uri_style: path            # use path style uri for minio rather than host style
    path: /pgbackrest             # minio backup path, `/pgbackrest` by default
    storage_port: 9000            # minio port, 9000 by default
    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    bundle: y                     # bundle small files into a single file
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    retention_full_type: time     # retention full backup by time on minio repo
    retention_full: 14            # keep full backup for last 14 days
  # You can also add other optional backup repos, such as S3, for geo-redundant disaster recovery

Repositories defined in the Pigsty parameter pgbackrest_repo are converted into repository definitions in the /etc/pgbackrest/pgbackrest.conf configuration file. For example, if you define a US West S3 repository for storing cold backups, you can use the following reference configuration.

s3:    # ------> /etc/pgbackrest/pgbackrest.conf
  repo1-type: s3                                   # ----> repo1-type=s3
  repo1-s3-region: us-west-1                       # ----> repo1-s3-region=us-west-1
  repo1-s3-endpoint: s3-us-west-1.amazonaws.com    # ----> repo1-s3-endpoint=s3-us-west-1.amazonaws.com
  repo1-s3-key: '<your_access_key>'                # ----> repo1-s3-key=<your_access_key>
  repo1-s3-key-secret: '<your_secret_key>'         # ----> repo1-s3-key-secret=<your_secret_key>
  repo1-s3-bucket: pgsql                           # ----> repo1-s3-bucket=pgsql
  repo1-s3-uri-style: host                         # ----> repo1-s3-uri-style=host
  repo1-path: /pgbackrest                          # ----> repo1-path=/pgbackrest
  repo1-bundle: y                                  # ----> repo1-bundle=y
  repo1-cipher-type: aes-256-cbc                   # ----> repo1-cipher-type=aes-256-cbc
  repo1-cipher-pass: pgBackRest                    # ----> repo1-cipher-pass=pgBackRest
  repo1-retention-full-type: time                  # ----> repo1-retention-full-type=time
  repo1-retention-full: 90                         # ----> repo1-retention-full=90

Recovery

You can directly use the following wrapper commands for PostgreSQL database cluster point-in-time recovery.

Pigsty uses incremental differential parallel recovery by default, allowing you to recover to a specified point in time at maximum speed.

pg-pitr                                 # Restore to the end of WAL archive stream (e.g., for entire datacenter failure)
pg-pitr -i                              # Restore to the most recent backup completion time (rarely used)
pg-pitr --time="2022-12-30 14:44:44+08" # Restore to a specified point in time (for database or table drops)
pg-pitr --name="my-restore-point"       # Restore to a named restore point created with pg_create_restore_point
pg-pitr --lsn="0/7C82CB8" -X            # Restore to immediately before the LSN
pg-pitr --xid="1234567" -X -P           # Restore to immediately before the specified transaction ID, then promote cluster to primary
pg-pitr --backup=latest                 # Restore to the latest backup set
pg-pitr --backup=20221108-105325        # Restore to a specific backup set, backup sets can be listed with pgbackrest info

pg-pitr                                 # pgbackrest --stanza=pg-meta restore
pg-pitr -i                              # pgbackrest --stanza=pg-meta --type=immediate restore
pg-pitr -t "2022-12-30 14:44:44+08"     # pgbackrest --stanza=pg-meta --type=time --target="2022-12-30 14:44:44+08" restore
pg-pitr -n "my-restore-point"           # pgbackrest --stanza=pg-meta --type=name --target=my-restore-point restore
pg-pitr -b 20221108-105325F             # pgbackrest --stanza=pg-meta --type=immediate --set=20221108-105325F restore
pg-pitr -l "0/7C82CB8" -X               # pgbackrest --stanza=pg-meta --type=lsn --target="0/7C82CB8" --target-exclusive restore
pg-pitr -x 1234567 -X -P                # pgbackrest --stanza=pg-meta --type=xid --target=1234567 --target-exclusive --target-action=promote restore

When performing PITR, you can use Pigsty’s monitoring system to observe the cluster LSN position status and determine whether recovery to the specified point in time, transaction point, LSN position, or other point was successful.

pitr




6 - Monitoring System

How Pigsty’s monitoring system is architected and how monitored targets are automatically managed.

7 - Security and Compliance

Authentication, access control, encrypted communication, audit logs—meeting SOC2 compliance requirements.

Pigsty’s Security Philosophy

Secure by Default: Out-of-the-box security configuration—basic protection without additional setup.

Progressive Configuration: Enterprise users can gradually enhance security measures through configuration.

Defense in Depth: Multiple security layers—even if one layer is breached, others remain protective.

Least Privilege: Grant users only the minimum permissions needed to complete tasks, reducing risk.


Default Security Configuration

Pigsty enables these security features by default:

Feature             | Default Config        | Description
--------------------|-----------------------|------------------------------------------------------
Password Encryption | scram-sha-256         | PostgreSQL’s most secure password hash algorithm
SSL Support         | Enabled               | Clients can optionally use SSL encrypted connections
Local CA            | Auto-generated        | Self-signed CA issues server certificates
HBA Layering        | Source-based control  | Different auth strength for different sources
Role System         | Four-tier permissions | Read-only/Read-write/Admin/Offline
Data Checksums      | Enabled               | Detects storage-layer data corruption
Audit Logs          | Enabled               | Records connections and slow queries

Enhanced Configuration

Additional configuration enables higher security levels:

Feature                 | Configuration Method           | Security Level
------------------------|--------------------------------|-----------------
Password strength check | Enable passwordcheck extension | Enterprise
Enforce SSL             | HBA uses hostssl               | Enterprise
Client certificates     | HBA uses cert auth             | Financial-grade
Backup encryption       | Configure cipher_type          | Compliance
Firewall                | Configure node_firewall_mode   | Infrastructure

If you only have one minute, remember this diagram:

flowchart TB
    subgraph L1["Layer 1: Network Security"]
        L1A["Firewall + SSL/TLS Encryption + HAProxy Proxy"]
        L1B["Who can connect? Is the connection encrypted?"]
    end

    subgraph L2["Layer 2: Authentication"]
        L2A["HBA Rules + SCRAM-SHA-256 Passwords + Certificate Auth"]
        L2B["Who are you? How do you prove it?"]
    end

    subgraph L3["Layer 3: Access Control"]
        L3A["Role System + Object Permissions + Database Isolation"]
        L3B["What can you do? What data can you access?"]
    end

    subgraph L4["Layer 4: Data Security"]
        L4A["Data Checksums + Backup Encryption + Audit Logs"]
        L4B["Is data intact? Are operations logged?"]
    end

    L1 --> L2 --> L3 --> L4

Core Value: Enterprise-grade security configuration out of the box, best practices enabled by default, additional configuration achieves SOC 2 compliance.


Contents

Section                 | Description                                        | Core Question
------------------------|----------------------------------------------------|--------------------------------------------
Security Overview       | Security capability overview and checklist         | What’s the overall security architecture?
Authentication          | HBA rules, password policies, certificate auth     | How to verify user identity?
Access Control          | Role system, permission model, database isolation  | How to control user permissions?
Encrypted Communication | SSL/TLS, local CA, certificate management          | How to protect data in transit?
Compliance Checklist    | Detailed SOC2 mapping                              | How to meet compliance requirements?

Why Security Matters

The Cost of Data Breaches

flowchart LR
    Breach["Data Breach"]

    subgraph Direct["Direct Losses"]
        D1["Regulatory Fines<br/>GDPR up to 4% global revenue"]
        D2["Legal Costs"]
        D3["Customer Compensation"]
    end

    subgraph Indirect["Indirect Losses"]
        I1["Brand Reputation Damage"]
        I2["Customer Trust Loss"]
        I3["Business Disruption"]
    end

    subgraph Compliance["Compliance Risk"]
        C1["Liability"]
        C2["SOC 2: Certification Revocation"]
        C3["Industry Access: Banned from Operating"]
    end

    Breach --> Direct
    Breach --> Indirect
    Breach --> Compliance

Default Users and Passwords

Pigsty creates these system users by default:

User           | Purpose          | Default Password         | Post-Deploy Action
---------------|------------------|--------------------------|--------------------
postgres       | System superuser | No password (local only) | Keep passwordless
dbuser_dba     | Admin user       | DBUser.DBA               | Must change
dbuser_monitor | Monitor user     | DBUser.Monitor           | Must change
replicator     | Replication user | DBUser.Replicator        | Must change

# pigsty.yml - Change default passwords
pg_admin_password: 'YourSecurePassword123!'
pg_monitor_password: 'AnotherSecurePass456!'
pg_replication_password: 'ReplicationPass789!'

Important: After production deployment, immediately change these default passwords!


Role and Permission System

Pigsty provides a four-tier role system out of the box:

flowchart TB
    subgraph Admin["dbrole_admin (Admin)"]
        A1["Inherits dbrole_readwrite"]
        A2["Can CREATE/DROP/ALTER objects (DDL)"]
        A3["For: Business admins, apps needing table creation"]
    end

    subgraph RW["dbrole_readwrite (Read-Write)"]
        RW1["Inherits dbrole_readonly"]
        RW2["Can INSERT/UPDATE/DELETE"]
        RW3["For: Production business accounts"]
    end

    subgraph RO["dbrole_readonly (Read-Only)"]
        RO1["Can SELECT all tables"]
        RO2["For: Reporting, data analysis"]
    end

    subgraph Offline["dbrole_offline (Offline)"]
        OFF1["Can only access offline instances"]
        OFF2["For: ETL, personal analysis, slow queries"]
    end

    Admin --> |inherits| RW
    RW --> |inherits| RO

Creating Business Users

pg_users:
  # Read-only user - for reporting
  - name: dbuser_report
    password: ReportUser123
    roles: [dbrole_readonly]
    pgbouncer: true

  # Read-write user - for production
  - name: dbuser_app
    password: AppUser456
    roles: [dbrole_readwrite]
    pgbouncer: true

  # Admin user - for DDL operations
  - name: dbuser_admin
    password: AdminUser789
    roles: [dbrole_admin]
    pgbouncer: true

HBA Access Control

HBA (Host-Based Authentication) controls “who can connect from where”:

flowchart LR
    subgraph Sources["Connection Sources"]
        S1["Local Socket"]
        S2["localhost"]
        S3["Intranet CIDR"]
        S4["Admin Nodes"]
        S5["External"]
    end

    subgraph Auth["Auth Methods"]
        A1["ident/peer<br/>OS user mapping, most secure"]
        A2["scram-sha-256<br/>Password auth"]
        A3["scram-sha-256 + SSL<br/>Enforce SSL"]
    end

    S1 --> A1
    S2 --> A2
    S3 --> A2
    S4 --> A3
    S5 --> A3

    Note["Rules matched in order<br/>First matching rule applies"]

Custom HBA Rules

pg_hba_rules:
  # Allow app servers from intranet
  - {user: dbuser_app, db: mydb, addr: '10.10.10.0/24', auth: scram-sha-256}

  # Force SSL for certain users
  - {user: admin, db: all, addr: world, auth: ssl}

  # Require certificate auth (highest security)
  - {user: secure_user, db: all, addr: world, auth: cert}

Encrypted Communication

SSL/TLS Architecture

sequenceDiagram
    participant Client as Client
    participant Server as PostgreSQL

    Client->>Server: 1. ClientHello
    Server->>Client: 2. ServerHello
    Server->>Client: 3. Server Certificate
    Client->>Server: 4. Client Key
    Client->>Server: 5. Encrypted Channel Established
    Server->>Client: 5. Encrypted Channel Established

    rect rgb(200, 255, 200)
        Note over Client,Server: Encrypted Data Transfer
        Client->>Server: 6. Application Data (encrypted)
        Server->>Client: 6. Application Data (encrypted)
    end

    Note over Client,Server: Prevents eavesdropping, tampering, verifies server identity

Local CA

Pigsty automatically generates a local CA and issues certificates:

/etc/pki/
├── ca.crt              # CA certificate (public)
├── ca.key              # CA private key (keep secret!)
└── server.crt/key      # Server certificate/key

Important: Securely back up ca.key—if lost, all certificates must be reissued!


Compliance Mapping

SOC 2 Type II

Control Point                 | Pigsty Support | Description
------------------------------|----------------|----------------------
CC6.1 Logical Access Control  | Yes            | HBA + Role System
CC6.6 Transmission Encryption | Yes            | SSL/TLS
CC7.2 System Monitoring       | Yes            | Prometheus + Grafana
CC9.1 Business Continuity     | Yes            | HA + PITR
A1.2 Data Recovery            | Yes            | pgBackRest Backup

Legend: Yes = satisfied by default · Partial = requires additional configuration


Security Checklist

Before Deployment

  • Prepare strong passwords (use password manager)
  • Plan network partitions (intranet/external CIDRs)
  • Decide SSL strategy (self-signed/external CA)

After Deployment (Required)

  • Change all default passwords
  • Verify HBA rules match expectations
  • Test SSL connections work
  • Configure auth failure alerts
  • Securely back up CA private key

Regular Maintenance

  • Audit user permissions
  • Check for expired accounts
  • Update certificates (if needed)
  • Review audit logs

Quick Config Examples

Production Security Configuration

# pigsty.yml - Production security config example
all:
  vars:
    # Change default passwords (required!)
    pg_admin_password: 'SecureDBAPassword2024!'
    pg_monitor_password: 'SecureMonitorPass2024!'
    pg_replication_password: 'SecureReplPass2024!'

    # Enable password strength check
    pg_libs: 'passwordcheck, pg_stat_statements, auto_explain'

    # Custom HBA rules
    pg_hba_rules:
      # App servers
      - {user: app, db: appdb, addr: '10.10.10.0/24', auth: scram-sha-256}
      # Admin enforce SSL
      - {user: dbuser_dba, db: all, addr: world, auth: ssl}

Financial-Grade Security Configuration

# Financial-grade config - enable certificate auth
pg_hba_rules:
  # Trading system uses certificate auth
  - {user: trade_user, db: trade, addr: world, auth: cert}
  # Other systems use SSL + password
  - {user: all, db: all, addr: world, auth: ssl}

# Enable backup encryption
pgbackrest_repo:
  minio:
    cipher_type: aes-256-cbc
    cipher_pass: 'YourBackupEncryptionKey'

Next Steps

Deep dive into security configuration details:

  • Security Overview: Overall security architecture and checklist
  • Authentication: HBA rules and password policies
  • Access Control: Role system and permission model
  • Encrypted Communication: SSL/TLS and certificate management
  • Compliance Checklist: Detailed SOC2 mapping


7.1 - Local CA

Pigsty includes a self-signed CA PKI infrastructure for issuing SSL certificates and encrypting network traffic.

Pigsty enables security best practices by default: using SSL to encrypt network traffic and HTTPS for web interfaces.

To achieve this, Pigsty includes a local self-signed CA for issuing SSL certificates and encrypting network communications.

By default, SSL and HTTPS are enabled but not enforced. For environments with higher security requirements, you can enforce SSL and HTTPS usage.


Local CA

During initialization, Pigsty generates a self-signed CA in the Pigsty source directory (~/pigsty) on the ADMIN node. This CA can be used for SSL, HTTPS, digital signatures, issuing database client certificates, and advanced security features.

Each Pigsty deployment uses a unique CA—CAs from different Pigsty deployments are not mutually trusted.

The local CA consists of two files, located in the files/pki/ca directory by default:

  • ca.crt: Self-signed CA root certificate, distributed to all managed nodes for certificate verification.
  • ca.key: CA private key for issuing certificates and verifying CA identity—keep this file secure and prevent leakage!

Using an Existing CA

If you already have your own CA PKI infrastructure, Pigsty can be configured to use your existing CA.

Simply place your CA public key and private key files in the files/pki/ca directory:

files/pki/ca/ca.key     # Core CA private key file, must exist; if missing, a new one is randomly generated
files/pki/ca/ca.crt     # If certificate file is missing, Pigsty auto-generates a new root certificate from the CA private key

When Pigsty executes the install.yml or infra.yml playbooks, if a ca.key private key file exists in files/pki/ca, the existing CA will be used. Since ca.crt can be generated from the ca.key private key, Pigsty will automatically regenerate the root certificate file if it’s missing.


Trusting the CA

During Pigsty installation, ca.crt is distributed to all nodes at /etc/pki/ca.crt during the node_ca task in the node.yml playbook.

EL-family and Debian-family operating systems have different default trusted CA certificate paths, so the distribution path and update methods differ:

# EL-family (RHEL / Rocky / Alma): link into the ca-trust anchors and update
rm -rf /etc/pki/ca-trust/source/anchors/ca.crt
ln -s /etc/pki/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt
/bin/update-ca-trust

# Debian-family (Debian / Ubuntu): link into the shared ca-certificates and update
rm -rf /usr/local/share/ca-certificates/ca.crt
ln -s /etc/pki/ca.crt /usr/local/share/ca-certificates/ca.crt
/usr/sbin/update-ca-certificates

Pigsty issues HTTPS certificates for domain names used by web systems on infrastructure nodes by default, allowing HTTPS access to Pigsty’s web interfaces.

If you want to avoid “untrusted CA certificate” warnings in client browsers, distribute ca.crt to the trusted certificate directory on client machines.

You can double-click the ca.crt file to add it to your system keychain. For example, on MacOS, open “Keychain Access,” search for pigsty-ca, and set it to “trust” this root certificate.


Viewing Certificate Contents

Use the following command to view the Pigsty CA certificate contents:

openssl x509 -text -in /etc/pki/ca.crt

Example of the local CA root certificate content:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            50:29:e3:60:96:93:f4:85:14:fe:44:81:73:b5:e1:09:2a:a8:5c:0a
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=pigsty, OU=ca, CN=pigsty-ca
        Validity
            Not Before: Feb  7 00:56:27 2023 GMT
            Not After : Jan 14 00:56:27 2123 GMT
        Subject: O=pigsty, OU=ca, CN=pigsty-ca
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:c1:41:74:4f:28:c3:3c:2b:13:a2:37:05:87:31:
                    ....
                    e6:bd:69:a5:5b:e3:b4:c0:65:09:6e:84:14:e9:eb:
                    90:f7:61
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:pigsty-ca
            X509v3 Key Usage:
                Digital Signature, Certificate Sign, CRL Sign
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:1
            X509v3 Subject Key Identifier:
                C5:F6:23:CE:BA:F3:96:F6:4B:48:A5:B1:CD:D4:FA:2B:BD:6F:A6:9C
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        89:9d:21:35:59:6b:2c:9b:c7:6d:26:5b:a9:49:80:93:81:18:
        ....
        9e:dd:87:88:0d:c4:29:9e
-----BEGIN CERTIFICATE-----
...
cXyWAYcvfPae3YeIDcQpng==
-----END CERTIFICATE-----

Issuing Certificates

If you want to use client certificate authentication, you can use the local CA and the cert.yml playbook to manually issue PostgreSQL client certificates.

Set the certificate’s CN field to the database username:

./cert.yml -e cn=dbuser_dba
./cert.yml -e cn=dbuser_monitor

Issued certificates are generated in files/pki/misc/<cn>.{key,crt} by default.
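
A sketch of connecting with such a client certificate via standard libpq parameters (host and database illustrative; certificate paths follow the layout above, and the matching HBA rule must use auth: cert):

# the key file must be readable only by you (chmod 600)
psql "host=10.10.10.10 dbname=postgres user=dbuser_dba \
      sslmode=verify-full \
      sslrootcert=files/pki/ca/ca.crt \
      sslcert=files/pki/misc/dbuser_dba.crt \
      sslkey=files/pki/misc/dbuser_dba.key"
# use sslmode=verify-ca instead if the server certificate does not carry the host name you connect to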

7.2 - Access Control

Pigsty provides standard security practices with an out-of-the-box role and permission model.

Pigsty provides an out-of-the-box access control model based on the Role System and Permission System.

Access control is important, but many users struggle to implement it properly. Pigsty provides a streamlined access control model that serves as a security baseline for your cluster.


Role System

Pigsty’s default role system includes four default roles and four default users:

Role Name        | Attributes  | Member Of                    | Description
-----------------|-------------|------------------------------|------------------------------
dbrole_readonly  | NOLOGIN     |                              | Role: Global read-only
dbrole_readwrite | NOLOGIN     | dbrole_readonly              | Role: Global read-write
dbrole_admin     | NOLOGIN     | pg_monitor, dbrole_readwrite | Role: Admin/Object creation
dbrole_offline   | NOLOGIN     |                              | Role: Restricted read-only
postgres         | SUPERUSER   |                              | System superuser
replicator       | REPLICATION | pg_monitor, dbrole_readonly  | System replication user
dbuser_dba       | SUPERUSER   | dbrole_admin                 | PostgreSQL admin user
dbuser_monitor   |             | pg_monitor                   | PostgreSQL monitor user

These roles and users are defined as follows:

pg_default_roles:                 # Global default roles and system users
  - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
  - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
  - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
  - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
  - { name: postgres     ,superuser: true  ,comment: system superuser }
  - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
  - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
  - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

Default Roles

Pigsty has four default roles:

  • Business Read-Only (dbrole_readonly): Role for global read-only access. Use this if other services need read-only access to this database.
  • Business Read-Write (dbrole_readwrite): Role for global read-write access. Production accounts for primary business should have database read-write permissions.
  • Business Admin (dbrole_admin): Role with DDL permissions. Typically used for business administrators or scenarios requiring table creation in applications.
  • Offline Read-Only (dbrole_offline): Restricted read-only access role (can only access offline instances). Usually for personal users or ETL tool accounts.

Default roles are defined in pg_default_roles. Unless you know what you’re doing, don’t change the default role names.

- { name: dbrole_readonly  , login: false , comment: role for global read-only access  }
- { name: dbrole_offline ,   login: false , comment: role for restricted read-only access (offline instance) }
- { name: dbrole_readwrite , login: false , roles: [dbrole_readonly], comment: role for global read-write access }
- { name: dbrole_admin , login: false , roles: [pg_monitor, dbrole_readwrite] , comment: role for object creation }

Default Users

Pigsty also has four default users (system users):

  • Superuser (postgres): Cluster owner and creator, same name as OS dbsu.
  • Replication User (replicator): System user for primary-replica replication.
  • Monitor User (dbuser_monitor): User for monitoring database and connection pool metrics.
  • Admin User (dbuser_dba): Administrator for daily operations and database changes.

These 4 default users’ username/password are defined by 4 pairs of dedicated parameters and referenced in many places:

Remember to change these passwords in production deployments—don’t use defaults!

pg_dbsu: postgres                             # Database superuser name, recommended not to change
pg_dbsu_password: ''                          # Database superuser password, recommended to leave empty!
pg_replication_username: replicator           # System replication username
pg_replication_password: DBUser.Replicator    # System replication password, must change!
pg_monitor_username: dbuser_monitor           # System monitor username
pg_monitor_password: DBUser.Monitor           # System monitor password, must change!
pg_admin_username: dbuser_dba                 # System admin username
pg_admin_password: DBUser.DBA                 # System admin password, must change!

Permission System

Pigsty has an out-of-the-box permission model that works with default roles.

  • All users can access all schemas.
  • Read-only users (dbrole_readonly) can read from all tables. (SELECT, EXECUTE)
  • Read-write users (dbrole_readwrite) can write to all tables and run DML. (INSERT, UPDATE, DELETE)
  • Admin users (dbrole_admin) can create objects and run DDL. (CREATE, USAGE, TRUNCATE, REFERENCES, TRIGGER)
  • Offline users (dbrole_offline) are similar to read-only but with restricted access—only offline instances (pg_role = 'offline' or pg_offline_query = true)
  • Objects created by admin users will have correct permissions.
  • Default privileges are configured on all databases, including template databases.
  • Database connection permissions are managed by database definition.
  • CREATE privilege on databases and public schema is revoked from PUBLIC by default.
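
You can verify the effect of this model with PostgreSQL's built-in privilege inspection functions. Below is a minimal check, assuming a business user dbuser_app that holds dbrole_readwrite and a hypothetical table public.some_table:

-- Inspect effective permissions (dbuser_app and public.some_table are assumed examples)
SELECT pg_has_role('dbuser_app', 'dbrole_readonly', 'MEMBER');               -- true, inherited via dbrole_readwrite
SELECT has_table_privilege('dbuser_app', 'public.some_table', 'SELECT');     -- true, read access
SELECT has_table_privilege('dbuser_app', 'public.some_table', 'TRUNCATE');   -- false, TRUNCATE is reserved for dbrole_admin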

Object Privileges

Default privileges for newly created objects are controlled by pg_default_privileges:

- GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
- GRANT SELECT     ON TABLES    TO dbrole_readonly
- GRANT SELECT     ON SEQUENCES TO dbrole_readonly
- GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
- GRANT USAGE      ON SCHEMAS   TO dbrole_offline
- GRANT SELECT     ON TABLES    TO dbrole_offline
- GRANT SELECT     ON SEQUENCES TO dbrole_offline
- GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
- GRANT INSERT     ON TABLES    TO dbrole_readwrite
- GRANT UPDATE     ON TABLES    TO dbrole_readwrite
- GRANT DELETE     ON TABLES    TO dbrole_readwrite
- GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
- GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
- GRANT TRUNCATE   ON TABLES    TO dbrole_admin
- GRANT REFERENCES ON TABLES    TO dbrole_admin
- GRANT TRIGGER    ON TABLES    TO dbrole_admin
- GRANT CREATE     ON SCHEMAS   TO dbrole_admin

Objects newly created by admins will have the above privileges by default. Use \ddp+ to view these default privileges:

Type       Access Privileges
Function   =X, dbrole_readonly=X, dbrole_offline=X, dbrole_admin=X
Schema     dbrole_readonly=U, dbrole_offline=U, dbrole_admin=UC
Sequence   dbrole_readonly=r, dbrole_offline=r, dbrole_readwrite=wU, dbrole_admin=rwU
Table      dbrole_readonly=r, dbrole_offline=r, dbrole_readwrite=awd, dbrole_admin=arwdDxt

Default Privileges

The SQL statement ALTER DEFAULT PRIVILEGES sets the privileges that will be applied to objects created in the future. It does not affect existing objects, nor does it apply to objects created by users other than the roles it is configured for (the admin roles below).

In Pigsty, default privileges are defined for three roles:

{% for priv in pg_default_privileges %}
ALTER DEFAULT PRIVILEGES FOR ROLE {{ pg_dbsu }} {{ priv }};
{% endfor %}

{% for priv in pg_default_privileges %}
ALTER DEFAULT PRIVILEGES FOR ROLE {{ pg_admin_username }} {{ priv }};
{% endfor %}

-- Other business admins should SET ROLE dbrole_admin before executing DDL
{% for priv in pg_default_privileges %}
ALTER DEFAULT PRIVILEGES FOR ROLE "dbrole_admin" {{ priv }};
{% endfor %}
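
With the default usernames substituted, each entry in pg_default_privileges expands to concrete statements such as the following (shown here for the SELECT grant only):

-- One entry of pg_default_privileges, rendered for each of the three roles with default names
ALTER DEFAULT PRIVILEGES FOR ROLE postgres     GRANT SELECT ON TABLES TO dbrole_readonly;
ALTER DEFAULT PRIVILEGES FOR ROLE dbuser_dba   GRANT SELECT ON TABLES TO dbrole_readonly;
ALTER DEFAULT PRIVILEGES FOR ROLE dbrole_admin GRANT SELECT ON TABLES TO dbrole_readonly;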

To maintain correct object permissions, you must execute DDL as one of the following admin users:

  1. {{ pg_dbsu }}, defaults to postgres
  2. {{ pg_admin_username }}, defaults to dbuser_dba
  3. Business admin users granted dbrole_admin role (using SET ROLE to switch to dbrole_admin)

Using postgres as the global object owner is a wise choice. If you need to create objects as a business admin, run SET ROLE dbrole_admin before creation so that the new objects receive the correct default privileges.
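
In practice, a business administrator who has been granted dbrole_admin would run DDL like this (the table name is an assumed example):

SET ROLE dbrole_admin;    -- switch to the admin role so the default privileges apply to the new object
CREATE TABLE app_orders (id bigint PRIMARY KEY, payload jsonb);
RESET ROLE;               -- switch back to the original login role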


Database Privileges

In Pigsty, database-level privileges are covered by the database definition.

Databases have three privileges: CONNECT, CREATE, and TEMP, plus a special ‘privilege’: OWNERSHIP.

- name: meta         # Required, `name` is the only required field
  owner: postgres    # Optional, database owner, defaults to postgres
  allowconn: true    # Optional, allow connections, default true
  revokeconn: false  # Optional, revoke public connect privilege, default false

  • If the owner parameter is specified, that user becomes the database owner instead of {{ pg_dbsu }}
  • If revokeconn is false (the default), all users keep the CONNECT privilege on the database
  • If revokeconn is explicitly set to true:
    • The CONNECT privilege on the database is revoked from PUBLIC
    • CONNECT is explicitly granted to {{ pg_replication_username }}, {{ pg_monitor_username }}, and {{ pg_admin_username }}
    • CONNECT is granted to the database owner WITH GRANT OPTION
  • revokeconn can be used to isolate cross-database access within the same cluster
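
Conceptually, setting revokeconn: true for the meta database corresponds to statements along these lines. This is a sketch using the default usernames and an assumed owner dbuser_meta, not the literal commands Pigsty runs:

REVOKE CONNECT ON DATABASE meta FROM PUBLIC;                                  -- no implicit connect for everyone
GRANT  CONNECT ON DATABASE meta TO replicator, dbuser_monitor, dbuser_dba;    -- system users keep access
GRANT  CONNECT ON DATABASE meta TO dbuser_meta WITH GRANT OPTION;             -- owner may grant access onwards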

CREATE Privilege

For security, Pigsty revokes the CREATE privilege on databases and on the public schema from PUBLIC by default; for the public schema, this has also been the default behavior since PostgreSQL 15.

Database owners can always adjust CREATE privileges based on actual needs.
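
For example, an owner can selectively grant CREATE back where it is actually needed. A minimal sketch, assuming the default meta database and the example user dbuser_app:

GRANT CREATE ON DATABASE meta TO dbuser_app;       -- allow this user to create new schemas in meta
GRANT CREATE ON SCHEMA public TO dbuser_app;       -- allow this user to create objects in public (use sparingly)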