Infrastructure

Infrastructure module architecture, components, and functionality in Pigsty.

Running production-grade, highly available PostgreSQL clusters typically requires a comprehensive set of supporting infrastructure services, such as monitoring and alerting, log collection, time synchronization, DNS resolution, and local software repositories. Pigsty provides the INFRA module to address this need; it is an optional module, but we strongly recommend enabling it.


Overview

The diagram below shows the architecture of a single-node deployment. The right half represents the components included in the INFRA module:

| Component | Type | Description |
|-----------|------|-------------|
| Nginx | Web Server | Unified entry for WebUI, local repo, reverse proxy for internal services |
| Repo | Software Repo | APT/DNF repository with all RPM/DEB packages needed for deployment |
| Grafana | Visualization | Displays metrics, logs, and traces; hosts dashboards, reports, and custom data apps |
| VictoriaMetrics | Time Series DB | Scrapes all metrics, Prometheus API compatible, provides VMUI query interface |
| VictoriaLogs | Log Platform | Centralized log storage; all nodes run Vector by default, pushing logs here |
| VictoriaTraces | Tracing | Collects slow SQL, service traces, and other tracing data |
| VMAlert | Alert Engine | Evaluates alerting rules, pushes events to Alertmanager |
| AlertManager | Alert Manager | Aggregates alerts, dispatches notifications via email, Webhook, etc. |
| BlackboxExporter | Blackbox Probe | Probes reachability of IPs/VIPs/URLs |
| DNSMASQ | DNS Service | Provides DNS resolution for domains used within Pigsty [Optional] |
| Chronyd | Time Sync | Provides NTP time synchronization to ensure consistent time across nodes [Optional] |
| CA | Certificate | Issues encryption certificates within the environment |
| Ansible | Orchestration | Batch, declarative, agentless tool for managing large numbers of servers |

[Figure: pigsty-arch, single-node deployment architecture]


Nginx

Nginx is the access entry point for all WebUI services in Pigsty, using ports 80 / 443 for HTTP/HTTPS by default. Live Demo

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10 | http://i.pigsty | https://i.pigsty | https://demo.pigsty.io |

Infrastructure components with WebUIs, such as Grafana, VictoriaMetrics (VMUI), AlertManager, and the HAProxy console, can be exposed uniformly through Nginx. The local software repository and other static resources are also served by Nginx.

Nginx serves local web content or acts as a reverse proxy for backend services according to the definitions in infra_portal.

```yaml
infra_portal:
  home : { domain: i.pigsty }
```

By default, it exposes Pigsty’s admin homepage at i.pigsty. Different paths under this entry point are proxied to different components:

| Endpoint | Component | Native Port | Notes | Public Demo |
|----------|-----------|-------------|-------|-------------|
| / | Nginx | 80/443 | Homepage, local repo, file server | demo.pigsty.io |
| /ui/ | Grafana | 3000 | Grafana dashboard entry | demo.pigsty.io/ui/ |
| /vmetrics/ | VictoriaMetrics | 8428 | Time series DB Web UI | demo.pigsty.io/vmetrics/ |
| /vlogs/ | VictoriaLogs | 9428 | Log DB Web UI | demo.pigsty.io/vlogs/ |
| /vtraces/ | VictoriaTraces | 10428 | Tracing Web UI | demo.pigsty.io/vtraces/ |
| /vmalert/ | VMAlert | 8880 | Alert rule management | demo.pigsty.io/vmalert/ |
| /alertmgr/ | AlertManager | 9059 | Alert management Web UI | demo.pigsty.io/alertmgr/ |
| /blackbox/ | Blackbox | 9115 | Blackbox probe | |

Pigsty allows rich customization of Nginx as a local file server or reverse proxy, with self-signed or real HTTPS certificates.
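For illustration, here is a hedged sketch of adding a custom reverse-proxy entry alongside the default homepage; the entry name, domain, and backend address are hypothetical, while the domain and endpoint fields follow the parameters referenced elsewhere on this page:

```yaml
# illustrative sketch only, not a shipped default:
# proxy a hypothetical internal service through the Nginx portal
infra_portal:
  home : { domain: i.pigsty }                                   # default admin homepage
  app  : { domain: app.pigsty, endpoint: "10.10.10.10:8080" }   # hypothetical upstream service
```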

For more information, see: Tutorial: Nginx—Expose Web Services via Proxy and Tutorial: Certbot—Request and Renew HTTPS Certificates


Repo

Pigsty creates a local software repository on the Infra node during installation to accelerate subsequent software installations. Live Demo

This repository defaults to the /www/pigsty directory, served by Nginx and mounted at the /pigsty path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/pigsty | http://i.pigsty/pigsty | https://i.pigsty/pigsty | https://demo.pigsty.io/pigsty |

Pigsty supports offline installation, which essentially pre-copies a prepared local software repository into the target environment. When Pigsty needs to create a local software repository during production deployment, it checks for the /www/pigsty/repo_complete marker file; if it exists, downloading packages from upstream is skipped and the existing packages are used directly, avoiding internet access.
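As a rough sketch of how the local repository is referenced (the exact repo_upstream entry layout varies by Pigsty version, and the name and module fields below are illustrative), the baseurl matches the repo_upstream.baseurl default listed in the parameter table at the end of this page:

```yaml
# illustrative sketch: a repo entry pointing at the local repository served by Nginx
repo_upstream:
  - { name: pigsty-local, module: local, baseurl: 'http://${admin_ip}/pigsty' }   # name/module are hypothetical
```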

For more information, see: Config: INFRA - REPO


Grafana

Grafana is the core component of Pigsty’s monitoring system, used for visualizing metrics, logs, and various information. Live Demo

Grafana listens on port 3000 by default and is proxied via Nginx at the /ui path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/ui | http://i.pigsty/ui | https://i.pigsty/ui | https://demo.pigsty.io/ui |

Pigsty provides pre-built dashboards based on VictoriaMetrics / Logs / Traces, with one-click drill-down and roll-up via URL jumps for rapid troubleshooting.

Grafana can also serve as a low-code visualization platform: the ECharts, victoriametrics-datasource, and victorialogs-datasource plugins are installed by default, and the VictoriaMetrics / VictoriaLogs / VictoriaTraces datasources are registered uniformly as vmetrics-*, vlogs-*, and vtraces-*, making it easy to build custom dashboards.
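As a sketch of what such a datasource registration looks like (Pigsty provisions these automatically; the datasource name below merely follows the vmetrics-* convention, and the type follows the plugin name mentioned above):

```yaml
# Grafana datasource provisioning sketch, assuming the default single-node address
apiVersion: 1
datasources:
  - name: vmetrics-default               # hypothetical name following the vmetrics-* convention
    type: victoriametrics-datasource     # plugin installed by default
    url: http://10.10.10.10:8428         # VictoriaMetrics native port
    access: proxy
```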

For more information, see: Config: INFRA - GRAFANA.


VictoriaMetrics

VictoriaMetrics is Pigsty’s time series database, responsible for scraping and storing all monitoring metrics. Live Demo

It listens on port 8428 by default, is mounted at the Nginx /vmetrics path, and is also accessible via the p.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/vmetrics | http://p.pigsty | https://i.pigsty/vmetrics | https://demo.pigsty.io/vmetrics |

VictoriaMetrics is fully compatible with the Prometheus API, supporting PromQL queries, remote read/write protocols, and the Alertmanager API. The built-in VMUI provides an ad-hoc query interface for exploring metrics data directly, and also serves as a Grafana datasource.
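For example, because the remote write protocol is supported, an external Prometheus instance could forward its metrics into this VictoriaMetrics instance; the address below assumes the default single-node IP and native port:

```yaml
# prometheus.yml fragment (sketch): forward metrics via the remote write protocol
remote_write:
  - url: http://10.10.10.10:8428/api/v1/write
```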

For more information, see: Config: INFRA - VICTORIA


VictoriaLogs

VictoriaLogs is Pigsty’s log platform, centrally storing structured logs from all nodes. Live Demo

It listens on port 9428 by default and is mounted at the Nginx /vlogs path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/vlogs | http://i.pigsty/vlogs | https://i.pigsty/vlogs | https://demo.pigsty.io/vlogs |

All managed nodes run the Vector agent by default, collecting system logs, PostgreSQL logs, Patroni logs, Pgbouncer logs, etc., processing them into a structured format, and pushing them to VictoriaLogs. The built-in Web UI supports log search and filtering, and it can be integrated with Grafana’s victorialogs-datasource plugin for visual analysis.
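For reference, here is a rough Vector sink sketch pushing logs through VictoriaLogs’ Elasticsearch-compatible ingestion endpoint; this is not Pigsty’s shipped Vector configuration, and the source name is hypothetical:

```yaml
# vector.yaml fragment (illustrative sketch)
sinks:
  victorialogs:
    type: elasticsearch
    inputs: [ "postgres_logs" ]                                    # hypothetical source name
    endpoints: [ "http://10.10.10.10:9428/insert/elasticsearch/" ] # VictoriaLogs ingestion API
    mode: bulk
```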

For more information, see: Config: INFRA - VICTORIA


VictoriaTraces

VictoriaTraces is used for collecting trace data and slow SQL records. Live Demo

It listens on port 10428 by default and is mounted at the Nginx /vtraces path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/vtraces | http://i.pigsty/vtraces | https://i.pigsty/vtraces | https://demo.pigsty.io/vtraces |

VictoriaTraces provides a Jaeger-compatible interface for analyzing service call chains and database slow queries. Combined with Grafana dashboards, it enables rapid identification of performance bottlenecks and root cause tracing.

For more information, see: Config: INFRA - VICTORIA


VMAlert

VMAlert is the alerting rule computation engine, responsible for evaluating alert rules and pushing triggered events to Alertmanager. Live Demo

It listens on port 8880 by default and is mounted at the Nginx /vmalert path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/vmalert | http://i.pigsty/vmalert | https://i.pigsty/vmalert | https://demo.pigsty.io/vmalert |

VMAlert reads metrics data from VictoriaMetrics and periodically evaluates alerting rules. Pigsty provides pre-built alerting rules for PGSQL, NODE, REDIS, and other modules, covering common failure scenarios out of the box.
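For illustration, here is a minimal alerting rule in the Prometheus-compatible format that VMAlert evaluates; the rule below is a generic example, not one of Pigsty’s shipped rules:

```yaml
# example rule group (hypothetical), evaluated by VMAlert against VictoriaMetrics
groups:
  - name: example
    rules:
      - alert: NodeDown                      # hypothetical alert name
        expr: up == 0                        # fires when a scrape target is unreachable
        for: 1m
        labels: { severity: critical }
        annotations:
          summary: "instance {{ $labels.instance }} is down"
```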

For more information, see: Config: INFRA - PROMETHEUS


AlertManager

AlertManager handles alert event aggregation, deduplication, grouping, and dispatch. Live Demo

It listens on port 9059 by default, is mounted at the Nginx /alertmgr path, and is also accessible via the a.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---|---|---|---|
| http://10.10.10.10/alertmgr | http://a.pigsty | https://i.pigsty/alertmgr | https://demo.pigsty.io/alertmgr |

AlertManager supports multiple notification channels: email, Webhook, Slack, PagerDuty, WeChat Work, and more. Alert routing rules enable differentiated dispatch based on severity level and module type, with support for silencing, inhibition, and other advanced features.
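A minimal routing sketch in the standard Alertmanager configuration format follows; the receiver names and webhook URL are hypothetical and do not reflect Pigsty’s shipped configuration:

```yaml
# alertmanager.yml fragment (illustrative sketch)
route:
  receiver: default
  group_by: [ alertname, cluster ]
  routes:
    - match: { severity: critical }          # route critical alerts to the on-call webhook
      receiver: oncall-webhook
receivers:
  - name: default
  - name: oncall-webhook
    webhook_configs:
      - url: http://example.com/alert-hook   # hypothetical webhook endpoint
```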

For more information, see: Config: INFRA - PROMETHEUS


BlackboxExporter

Blackbox Exporter is used for active probing of target reachability, enabling blackbox monitoring.

It listens on port 9115 by default and is mounted at the Nginx /blackbox path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) |
|---|---|---|
| http://10.10.10.10/blackbox | http://i.pigsty/blackbox | https://i.pigsty/blackbox |

It supports multiple probe methods, including ICMP ping, TCP ports, and HTTP/HTTPS endpoints. This is useful for monitoring VIP reachability, service port availability, and external dependency health, and it is an important tool for assessing the impact scope of a failure.
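For reference, these probe types map to standard blackbox_exporter module definitions such as the following sketch (upstream format; not necessarily identical to Pigsty’s shipped config):

```yaml
# blackbox.yml sketch: HTTP, TCP, and ICMP probe modules
modules:
  http_2xx:
    prober: http        # probe HTTP/HTTPS endpoints, expect a 2xx response
  tcp_connect:
    prober: tcp         # probe TCP port connectivity
  icmp:
    prober: icmp        # probe host reachability via ping
```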

For more information, see: Config: INFRA - PROMETHEUS


Ansible

Ansible is Pigsty’s core orchestration tool; all deployment, configuration, and management operations are performed through Ansible Playbooks.

Pigsty automatically installs Ansible on the admin node (Infra node) during installation. It adopts a declarative configuration style and idempotent playbook design: the same playbook can be run repeatedly, and the system automatically converges to the desired state without side effects.

Ansible’s core advantages:

  • Agentless: Executes remotely via SSH, no additional software needed on target nodes.
  • Declarative: Describes the desired state rather than execution steps; configuration is documentation.
  • Idempotent: Multiple executions produce consistent results; supports retry after partial failures (see the sketch below).
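A generic Ansible playbook sketch (not taken from Pigsty’s playbooks) illustrates the declarative, idempotent style: running it repeatedly converges to the same state instead of repeating actions.

```yaml
# generic example playbook: declare the desired state, let Ansible converge to it
- name: ensure time sync is installed and running
  hosts: all
  become: true
  tasks:
    - name: install chrony
      ansible.builtin.package: { name: chrony, state: present }
    - name: enable and start chronyd        # service name may be "chrony" on Debian-family systems
      ansible.builtin.service: { name: chronyd, state: started, enabled: true }
```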

For more information, see: Playbooks: Pigsty Playbook


DNSMASQ

DNSMASQ provides DNS resolution within the environment, resolving domain names used internally by Pigsty to their corresponding IP addresses.

Note that DNS is completely optional; Pigsty functions normally without DNS.

It listens on port 53 (UDP/TCP) by default, providing DNS resolution for all nodes in the environment:

| Protocol | Port | Config Directory |
|---|---|---|
| UDP/TCP | 53 | /infra/hosts/ |

Other modules automatically register their domain names with the DNSMASQ service on INFRA nodes during deployment.

Client nodes can configure INFRA nodes as their DNS servers, allowing access to services via domain names without remembering IP addresses.
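A minimal sketch based on the dns_records and node_dns_servers parameters listed in the table at the end of this page; the extra record is a hypothetical example:

```yaml
# register DNS records on INFRA nodes and point other nodes at them for resolution
dns_records:                           # served by dnsmasq on INFRA nodes
  - "${admin_ip} i.pigsty"
  - "10.10.10.10 db.example.com"       # hypothetical extra record
node_dns_servers: [ "${admin_ip}" ]    # use the INFRA node as the DNS server
```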

For more information, see: Config: INFRA - DNS and Tutorial: DNS—Configure Domain Resolution


Chronyd

Chronyd provides NTP time synchronization service, ensuring consistent clocks across all nodes in the environment.

It listens on port 123 (UDP) by default, serving as the time source within the environment:

| Protocol | Port | Description |
|---|---|---|
| UDP | 123 | NTP time sync service |

Time synchronization is critical for distributed systems: log analysis requires aligned timestamps, certificate validation depends on accurate clocks, and PostgreSQL streaming replication is sensitive to clock drift. In isolated network environments, the INFRA node can serve as an internal NTP server with other nodes synchronizing to it.
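A minimal sketch following the node_ntp_servers default listed in the parameter table at the end of this page, pointing all nodes at the INFRA node as their time source:

```yaml
# use the INFRA node as the internal NTP time source (e.g., in an isolated network)
node_ntp_servers: [ "${admin_ip}" ]
```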

For more information, see: Config: NODE - NTP


Node Relationships

Regular nodes reference an INFRA node via the admin_ip parameter as their infrastructure provider.

For example, when you set the global admin_ip to 10.10.10.10, all nodes will typically consume infrastructure services at this IP.
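In the configuration inventory this is a single global variable; a minimal sketch with the surrounding structure abbreviated:

```yaml
# pigsty config inventory sketch (abridged)
all:
  vars:
    admin_ip: 10.10.10.10     # referenced elsewhere as ${admin_ip}
```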

Parameters that reference ${admin_ip}:

| Parameter | Module | Default Value | Description |
|---|---|---|---|
| repo_endpoint | INFRA | http://${admin_ip}:80 | Software repo URL |
| repo_upstream.baseurl | INFRA | http://${admin_ip}/pigsty | Local repo baseurl |
| infra_portal.endpoint | INFRA | ${admin_ip}:<port> | Nginx proxy backend |
| dns_records | INFRA | ["${admin_ip} i.pigsty", ...] | DNS records |
| node_default_etc_hosts | NODE | ["${admin_ip} i.pigsty"] | Default static DNS |
| node_etc_hosts | NODE | [] | Custom static DNS |
| node_dns_servers | NODE | ["${admin_ip}"] | Dynamic DNS servers |
| node_ntp_servers | NODE | ["${admin_ip}"] | NTP servers (optional) |

This is a weak dependency relationship.

Typically the admin node and INFRA node coincide. With multiple INFRA nodes, the admin node is usually the first one; others serve as backups.

In large-scale production deployments, you might separate the Ansible admin node from nodes running the INFRA module for various reasons. For example, use 1-2 small dedicated hosts belonging to the DBA team as the control hub (ADMIN nodes), and 2-3 high-spec physical machines as monitoring infrastructure (INFRA nodes).

The table below shows typical node counts for different deployment scales:

| Deployment Scale | ADMIN | INFRA | ETCD | MINIO | PGSQL |
|---|---|---|---|---|---|
| Single-node dev | 1 | 1 | 1 | 0 | 1 |
| Three-node | 1 | 3 | 3 | 0 | 3 |
| Small production | 1 | 2 | 3 | 0 | N |
| Large production | 2 | 3 | 5 | 4+ | N |
