Module: INFRA

Optional standalone infrastructure that provides NTP, DNS, observability and other foundational services for PostgreSQL.

Configuration | Administration | Playbooks | Monitoring | Parameters


Overview

Every Pigsty deployment includes a set of infrastructure components that provide services for managed nodes and database clusters:

Component          Port     Domain     Description
Nginx              80/443   i.pigsty   Web service portal, local repo, and unified entry point
Grafana            3000     g.pigsty   Visualization platform for monitoring dashboards and data apps
VictoriaMetrics    8428     p.pigsty   Time-series database with VMUI, compatible with the Prometheus API
VictoriaLogs       9428     -          Centralized log database, receives structured logs from Vector
VictoriaTraces     10428    -          Tracing and event storage for slow SQL / request tracing
VMAlert            8880     -          Alert rule evaluator, triggers alerts based on VictoriaMetrics metrics
AlertManager       9059     a.pigsty   Alert aggregation and dispatch, receives notifications from VMAlert
BlackboxExporter   9115     -          ICMP/TCP/HTTP blackbox probing
DNSMASQ            53       -          DNS server for internal domain resolution
Chronyd            123      -          NTP time server
PostgreSQL         5432     -          CMDB and default database
Ansible            -        -          Runs playbooks, orchestrates all infrastructure

In Pigsty, the PGSQL module uses some services on INFRA nodes, specifically:

  • Database cluster/host node domains depend on DNSMASQ on INFRA nodes for resolution.
  • Installing software on database nodes uses the local yum/apt repo hosted by Nginx on INFRA nodes.
  • Database cluster/node monitoring metrics are scraped and stored by VictoriaMetrics on INFRA nodes, accessible via VMUI / PromQL.
  • Database and node runtime logs are collected by Vector and pushed to VictoriaLogs on INFRA, searchable in Grafana.
  • VMAlert evaluates alert rules based on metrics in VictoriaMetrics and forwards events to Alertmanager.
  • Users initiate management of database nodes from Infra/Admin nodes using Ansible or other tools:
    • Execute cluster creation, scaling, and instance/cluster recycling
    • Create business users and databases, modify services, and apply HBA changes
    • Run log collection, garbage cleanup, backups, inspections, etc.
  • Database nodes sync time from the NTP server on INFRA/ADMIN nodes by default.
  • If no dedicated cluster exists, the HA component Patroni uses etcd on INFRA nodes as the HA DCS.
  • If no dedicated cluster exists, the backup component pgbackrest uses MinIO on INFRA nodes as an optional centralized backup repository.

Nginx

Nginx is the access entry point for all WebUI services in Pigsty, using port 80 on the admin node by default.

Many infrastructure components with WebUI are exposed through Nginx, such as Grafana, VictoriaMetrics (VMUI), AlertManager, and HAProxy traffic management pages. Additionally, static file resources like yum/apt repos are served through Nginx.

Nginx routes requests to the corresponding upstream components by domain name, according to the infra_portal configuration. If you use other domains or public domains, you can modify them here:

infra_portal:  # domain names and upstream servers
  home         : { domain: i.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:8428" }   # VMUI
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9059" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  vmalert      : { endpoint: "${admin_ip}:8880" }
  #logs         : { domain: logs.pigsty ,endpoint: "${admin_ip}:9428" }
  #minio        : { domain: sss.pigsty  ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
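
After modifying infra_portal, apply the change by re-rendering and reloading the Nginx configuration (the same subtask listed under Administration below):

./infra.yml -t nginx_config,nginx_reload   # Re-render Nginx portal config and reload the service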

Pigsty strongly recommends using domain names to access Pigsty UI systems rather than direct IP+port access, for these reasons:

  • Using domains makes it easy to enable HTTPS traffic encryption, consolidate access to Nginx, audit all requests, and conveniently integrate authentication mechanisms.
  • Some components only listen on 127.0.0.1 by default, so they can only be accessed through Nginx proxy.
  • Domain names are easier to remember and provide additional configuration flexibility.

If you don’t have available internet domains or local DNS resolution, you can add static resolution records in /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows).
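
For example, assuming the default placeholder IP 10.10.10.10 is your INFRA node, the following hosts entry covers the default portal domains:

10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty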

Nginx configuration parameters are at: Configuration: INFRA - NGINX


Local Software Repository

Pigsty creates a local software repository during installation to accelerate subsequent software installation.

This repository is served by Nginx, located by default at /www/pigsty, accessible via http://i.pigsty/pigsty.

Pigsty’s offline package is a compressed archive of the entire software repository directory (yum/apt). When Pigsty builds the local repo, if the directory /www/pigsty already exists and contains the /www/pigsty/repo_complete marker file, it considers the repo already built and skips downloading software from upstream, eliminating the internet dependency.

The repo definition file is at /www/pigsty.repo, accessible by default via http://${admin_ip}/pigsty.repo:

curl -L http://i.pigsty/pigsty.repo -o /etc/yum.repos.d/pigsty.repo

You can also use the file local repo directly without Nginx:

[pigsty-local]
name=Pigsty local $releasever - $basearch
baseurl=file:///www/pigsty/
enabled=1
gpgcheck=0
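
On Debian/Ubuntu systems, the equivalent file-based source entry might look like this (a sketch assuming the local repo is laid out as a flat apt repository; adjust the path and suites to your actual layout):

# hypothetical /etc/apt/sources.list.d/pigsty-local.list entry
deb [trusted=yes] file:/www/pigsty/ ./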

Local repository configuration parameters are at: Configuration: INFRA - REPO


Victoria Observability Suite

Pigsty v4.0 uses the VictoriaMetrics family to replace Prometheus/Loki, providing unified monitoring, logging, and tracing capabilities:

  • VictoriaMetrics listens on port 8428 by default; VMUI is accessible via http://p.pigsty or https://i.pigsty/vmetrics/, and the service is compatible with the Prometheus API.
  • VMAlert evaluates alert rules in /infra/rules/*.yml, listens on port 8880, and sends alert events to Alertmanager.
  • VictoriaLogs listens on port 9428, with a query interface at https://i.pigsty/vlogs/. All nodes run Vector by default, pushing structured system logs, PostgreSQL logs, etc. to VictoriaLogs.
  • VictoriaTraces listens on port 10428 for slow SQL / trace collection; Grafana accesses it as a Jaeger data source.
  • Alertmanager listens on port 9059, accessible via http://a.pigsty or https://i.pigsty/alertmgr/, for managing alert notifications. After configuring SMTP, Webhook, etc., it can push notification messages.
  • Blackbox Exporter listens on port 9115 by default for Ping/TCP/HTTP probing, accessible via https://i.pigsty/blackbox/.
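
Since VictoriaMetrics implements the Prometheus query API and VictoriaLogs exposes a LogsQL HTTP endpoint, you can query both directly. A minimal sketch, assuming the default INFRA node at 10.10.10.10:

curl -s 'http://10.10.10.10:8428/api/v1/query?query=up'               # instant PromQL query (VictoriaMetrics)
curl -s http://10.10.10.10:9428/select/logsql/query -d 'query=error'  # LogsQL search (VictoriaLogs)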

For more information, see: Configuration: INFRA - VICTORIA and Configuration: INFRA - PROMETHEUS.


Grafana

Grafana is the core of Pigsty’s WebUI, listening on port 3000 by default, accessible directly via IP:3000 or domain http://g.pigsty.

Pigsty comes with preconfigured datasources for VictoriaMetrics / Logs / Traces (vmetrics-*, vlogs-*, vtraces-*), and numerous dashboards with URL-based navigation for quickly locating problems.

Grafana can also be used as a general low-code visualization platform, so Pigsty installs plugins like ECharts and victoriametrics-datasource by default for building monitoring dashboards or inspection reports.

Grafana configuration parameters are at: Configuration: INFRA - GRAFANA.


Ansible

Pigsty installs Ansible on the meta node by default. Ansible is a popular operations tool; its declarative configuration style and idempotent playbook design greatly reduce system maintenance complexity.
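
For example, a quick ad-hoc connectivity check against every managed node from the admin node:

ansible all -m ping   # invoke the ping module on all hosts in the inventory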


DNSMASQ

DNSMASQ provides DNS resolution services within the environment. Domain names from other modules are registered with the DNSMASQ service on INFRA nodes.

DNS records are placed by default in the /etc/hosts.d/ directory on all INFRA nodes.
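
You can verify a record by querying the DNSMASQ server on an INFRA node directly, e.g. (assuming it is at 10.10.10.10):

dig @10.10.10.10 g.pigsty +short   # resolve g.pigsty against the DNSMASQ server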

DNSMASQ configuration parameters are at: Configuration: INFRA - DNS


Chronyd

The Chronyd NTP service synchronizes time across all nodes in the environment (optional).
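
To check whether a node is syncing properly, you can inspect chrony's sources on that node:

chronyc sources -v   # list configured NTP sources and current sync status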

NTP configuration parameters are at: Configuration: NODES - NTP


Configuration

To install the INFRA module on a node, first add it to the infra group in the config inventory and assign it an instance number infra_seq:

# Configure single INFRA node
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } }}

# Configure two INFRA nodes
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }

Then use the infra.yml playbook to initialize the INFRA module on the nodes.
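
For example (ansible-playbook’s -l/--limit flag restricts execution to a subset of hosts):

./infra.yml                   # initialize the INFRA module on all nodes in the infra group
./infra.yml -l 10.10.10.11    # initialize only the newly added INFRA node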


Administration

Here are some administration tasks related to the INFRA module:


Install/Uninstall Infra Module

./infra.yml     # Install INFRA module on infra group
./infra-rm.yml  # Uninstall INFRA module from infra group

Manage Local Software Repository

You can use the following playbook subtasks to manage the local yum repo on Infra nodes:

./infra.yml -t repo              # Create the local repo from the internet or an offline package

./infra.yml -t repo_dir          # Create the local repo directory
./infra.yml -t repo_check        # Check if the local repo already exists
./infra.yml -t repo_prepare      # If it exists, use the existing local repo
./infra.yml -t repo_build        # If it does not exist, build the local repo from upstream
./infra.yml     -t repo_upstream     # Handle upstream repo files in /etc/yum.repos.d
./infra.yml     -t repo_remove       # If repo_remove == true, delete existing repo files
./infra.yml     -t repo_add          # Add upstream repo files to /etc/yum.repos.d (or /etc/apt/sources.list.d)
./infra.yml     -t repo_url_pkg      # Download packages defined in repo_url_packages from the internet
./infra.yml     -t repo_cache        # Build upstream repo metadata cache with yum makecache / apt update
./infra.yml     -t repo_boot_pkg     # Install bootstrap packages such as createrepo_c, yum-utils, ... (or their dpkg equivalents)
./infra.yml     -t repo_pkg          # Download packages & dependencies from upstream repos
./infra.yml     -t repo_create       # Create a local repo with createrepo_c & modifyrepo_c
./infra.yml     -t repo_use          # Add the newly built repo to /etc/yum.repos.d | /etc/apt/sources.list.d
./infra.yml -t repo_nginx        # If Nginx is not already serving, start it as a web server

The most commonly used commands are:

./infra.yml     -t repo_upstream     # Add upstream repos defined in repo_upstream to INFRA nodes
./infra.yml     -t repo_pkg          # Download packages and dependencies from upstream repos
./infra.yml     -t repo_create       # Create/update local yum repo with createrepo_c & modifyrepo_c

Manage Infrastructure Components

You can use the following playbook subtasks to manage various infrastructure components on Infra nodes:

./infra.yml -t infra           # Configure infrastructure
./infra.yml -t infra_env       # Configure environment variables on admin node: env_dir, env_pg, env_var
./infra.yml -t infra_pkg       # Install software packages required by INFRA: infra_pkg_yum, infra_pkg_pip
./infra.yml -t infra_user      # Set up the infra OS user group
./infra.yml -t infra_cert      # Issue certificates for infra components
./infra.yml -t dns             # Configure DNSMasq: dns_config, dns_record, dns_launch
./infra.yml -t nginx           # Configure Nginx: nginx_config, nginx_cert, nginx_static, nginx_launch, nginx_exporter
./infra.yml -t victoria        # Configure VictoriaMetrics/Logs/Traces: vmetrics|vlogs|vtraces|vmalert
./infra.yml -t alertmanager    # Configure AlertManager: alertmanager_config, alertmanager_launch
./infra.yml -t blackbox        # Configure Blackbox Exporter: blackbox_launch
./infra.yml -t grafana         # Configure Grafana: grafana_clean, grafana_config, grafana_plugin, grafana_launch, grafana_provision
./infra.yml -t infra_register  # Register infra components to VictoriaMetrics / Grafana

Other commonly used tasks include:

./infra.yml -t nginx_index                        # Re-render Nginx homepage content
./infra.yml -t nginx_config,nginx_reload          # Re-render Nginx portal config, expose new upstream services
./infra.yml -t vmetrics_config,vmetrics_launch    # Regenerate VictoriaMetrics main config and restart service
./infra.yml -t vlogs_config,vlogs_launch          # Re-render VictoriaLogs config
./infra.yml -t vmetrics_clean                     # Clean VictoriaMetrics storage data directory
./infra.yml -t grafana_plugin                     # Download Grafana plugins from internet

Playbooks

Pigsty provides three playbooks related to the INFRA module:

  • infra.yml: Initialize Pigsty infrastructure on INFRA nodes
  • infra-rm.yml: Remove infrastructure components from infra nodes
  • deploy.yml: Complete one-time Pigsty installation on all nodes

infra.yml

The INFRA module playbook infra.yml initializes Pigsty infrastructure on INFRA nodes.

Executing this playbook completes the following tasks:

  • Configure meta node directories and environment variables
  • Download and build a local software repository to accelerate subsequent installation (if using the offline package, the download phase is skipped)
  • Add the current meta node as a regular node under Pigsty management
  • Deploy infrastructure components including VictoriaMetrics/Logs/Traces, VMAlert, Grafana, Alertmanager, Blackbox Exporter, etc.

This playbook executes on INFRA nodes by default:

  • Pigsty uses the current node executing this playbook as Pigsty’s INFRA node and ADMIN node by default.
  • During configuration, Pigsty marks the current node as Infra/Admin node and replaces the placeholder IP 10.10.10.10 in config templates with the current node’s primary IP address.
  • Besides initiating management and hosting infrastructure, this node is no different from a regular managed node.
  • In a single-node installation, ETCD is also installed on this node to provide the DCS service.

Notes about this playbook:

  • This is an idempotent playbook: repeated execution will wipe infrastructure components on meta nodes.
  • To preserve historical monitoring data, first set vmetrics_clean, vlogs_clean, and vtraces_clean to false (see the example after this list).
  • When the offline repo marker /www/pigsty/repo_complete exists, this playbook skips downloading software from the internet; a full run takes about 5-8 minutes, depending on machine configuration.
  • Downloading directly from upstream internet sources without the offline package may take 10-20 minutes, depending on network conditions.
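
To override the clean flags for a single run, you can pass them as extra vars with ansible-playbook's -e flag (a sketch; setting them in the config inventory works as well):

./infra.yml -e vmetrics_clean=false -e vlogs_clean=false -e vtraces_clean=false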


infra-rm.yml

The INFRA module playbook infra-rm.yml removes Pigsty infrastructure from INFRA nodes.

Common subtasks include:

./infra-rm.yml               # Remove INFRA module
./infra-rm.yml -t service    # Stop infrastructure services on INFRA
./infra-rm.yml -t data       # Remove remaining data on INFRA
./infra-rm.yml -t package    # Uninstall software packages installed on INFRA

deploy.yml

The INFRA module playbook deploy.yml performs a complete one-time Pigsty installation on all nodes.

This playbook is described in more detail in Playbook: One-Time Installation.


Monitoring

  • Pigsty Home: Pigsty monitoring system homepage
  • INFRA Overview: Pigsty infrastructure self-monitoring overview
  • Nginx Instance: Nginx metrics and logs
  • Grafana Instance: Grafana metrics and logs
  • VictoriaMetrics Instance: VictoriaMetrics scraping, querying, and storage metrics
  • VMAlert Instance: Alert rule evaluation and queue status
  • Alertmanager Instance: Alert aggregation, notification pipelines, and silences
  • VictoriaLogs Instance: Log ingestion rate, query load, and index hits
  • VictoriaTraces Instance: Trace/KV storage and the Jaeger interface
  • Logs Instance: Node log search based on Vector + VictoriaLogs
  • CMDB Overview: CMDB visualization
  • ETCD Overview: etcd metrics and logs


Parameters

The INFRA module has the following 10 parameter groups.

  • META: Pigsty metadata
  • CA: Self-signed PKI/CA infrastructure
  • INFRA_ID: Infrastructure portal, Nginx domains
  • REPO: Local software repository
  • INFRA_PACKAGE: Infrastructure software packages
  • NGINX: Nginx web server
  • DNS: DNSMASQ domain server
  • VICTORIA: VictoriaMetrics / Logs / Traces suite
  • PROMETHEUS: Alertmanager and Blackbox Exporter
  • GRAFANA: Grafana observability suite
Parameter Overview

For the latest default values, types, and hierarchy, please refer to the Parameter Reference matching your Pigsty version.


  • Architecture: INFRA module architecture, functional components, and their responsibilities in Pigsty.
  • Configuration: How to configure INFRA nodes: customize Nginx, local repo, DNS, NTP, and monitoring components.
  • Parameters: The INFRA module provides 10 sections with 70+ configurable parameters.
  • Playbook: How to use the built-in Ansible playbooks to manage the INFRA module, with a quick reference for common commands.
  • Monitoring: How to perform self-monitoring of infrastructure in Pigsty.
  • Metrics: Complete list of monitoring metrics provided by the Pigsty INFRA module.
  • FAQ: Frequently asked questions about the Pigsty INFRA infrastructure module.
  • Administration: Infrastructure components and INFRA cluster administration SOPs: create, destroy, scale out, scale in, certificates, repositories…

