INFRA Node Model

Entity-relationship model for Pigsty's INFRA module: infrastructure nodes, component composition, and naming conventions.

The INFRA module plays a special role in Pigsty: it’s not a traditional “cluster” but rather a management hub composed of a group of infrastructure nodes, providing core services for the entire Pigsty deployment. Each INFRA node is an autonomous infrastructure service unit running core components like Nginx, Grafana, and VictoriaMetrics, collectively providing observability and management capabilities for managed database clusters.

There are two core entities in Pigsty’s INFRA module:

  • Node: A server running infrastructure components—can be bare metal, VM, container, or Pod.
  • Component: Various infrastructure services running on nodes, such as Nginx, Grafana, VictoriaMetrics, etc.

INFRA nodes typically serve as Admin Nodes, the control plane of Pigsty.


Component Composition

Each INFRA node runs the following core components:

| Component | Port | Description |
|-----------|------|-------------|
| Nginx | 80/443 | Web portal, local repo, unified reverse proxy |
| Grafana | 3000 | Visualization platform, dashboards, data apps |
| VictoriaMetrics | 8428 | Time-series database, Prometheus API compatible |
| VictoriaLogs | 9428 | Log database, receives structured logs from Vector |
| VictoriaTraces | 10428 | Trace storage for slow SQL / request tracing |
| VMAlert | 8880 | Alert rule evaluator based on VictoriaMetrics |
| Alertmanager | 9059 | Alert aggregation and dispatch |
| Blackbox Exporter | 9115 | ICMP/TCP/HTTP black-box probing |
| DNSMASQ | 53 | DNS server for internal domain resolution |
| Chronyd | 123 | NTP time server |

These components together form Pigsty’s observability infrastructure.


Examples

Let’s look at a concrete example with a two-node INFRA deployment:

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }
```

The above config fragment defines a two-node INFRA deployment:

| Group | Description |
|-------|-------------|
| infra | INFRA infrastructure node group |

| Node | IP | Description |
|------|----|-------------|
| infra-1 | 10.10.10.10 | INFRA node #1 |
| infra-2 | 10.10.10.11 | INFRA node #2 |

For production environments, deploying at least two INFRA nodes is recommended for infrastructure component redundancy.
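
For example, extending the group to three nodes only requires one more host entry with the next sequence number; the third IP below is an illustrative placeholder rather than part of the original example:

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }
    10.10.10.12: { infra_seq: 3 }   # illustrative third node for extra redundancy
```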


Identity Parameters

Pigsty uses the INFRA_ID parameter group to assign deterministic identities to each INFRA module entity. One parameter is required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| infra_seq | int | Node | INFRA node sequence, required | Natural number, starting from 1, unique within group |

Once the node sequence is assigned at the node level, Pigsty automatically generates a unique identifier for each entity according to the following rule:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Node | infra-{{ infra_seq }} | infra-1, infra-2 |

The INFRA module assigns infra-N format identifiers to nodes for distinguishing multiple infrastructure nodes in the monitoring system. However, this doesn’t change the node’s hostname or system identity; nodes still use their existing hostname or IP address for identification.
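
As a small worked example, the comments below sketch how the two-node inventory from above renders under this rule; the trailing comments are annotations for illustration, not actual Pigsty output:

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }   # monitoring identity infra-1; hostname stays unchanged
    10.10.10.11: { infra_seq: 2 }   # monitoring identity infra-2; hostname stays unchanged
```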


Service Portal

INFRA nodes provide unified web service entry through Nginx. The infra_portal parameter defines services exposed through Nginx:

```yaml
infra_portal:
  home         : { domain: i.pigsty }
  grafana      : { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
  prometheus   : { domain: p.pigsty, endpoint: "${admin_ip}:8428" }   # VMUI
  alertmanager : { domain: a.pigsty, endpoint: "${admin_ip}:9059" }
```

Users access different domains, and Nginx routes requests to corresponding backend services:

| Domain | Service | Description |
|--------|---------|-------------|
| i.pigsty | Home | Pigsty homepage |
| g.pigsty | Grafana | Monitoring dashboard |
| p.pigsty | VictoriaMetrics | TSDB Web UI |
| a.pigsty | Alertmanager | Alert management UI |

Accessing Pigsty services via domain names is recommended over direct IP + port.
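
Additional services can be published through the same mechanism by appending entries to infra_portal. The fragment below is a hypothetical sketch: the minio key, the m.pigsty domain, port 9001, and the scheme/websocket options are illustrative assumptions showing the shape of a custom entry, not part of the default portal:

```yaml
infra_portal:
  home  : { domain: i.pigsty }
  # hypothetical extra entry: route m.pigsty to a console listening on the admin node
  minio : { domain: m.pigsty, endpoint: "${admin_ip}:9001", scheme: https, websocket: true }
```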


Deployment Scale

The number of INFRA nodes depends on deployment scale and HA requirements:

| Scale | INFRA Nodes | Description |
|-------|-------------|-------------|
| Dev/Test | 1 | Single-node deployment, all on one node |
| Small Prod | 1-2 | Single or dual node, can share with other services |
| Medium Prod | 2-3 | Dedicated INFRA nodes, redundant components |
| Large Prod | 3+ | Multiple INFRA nodes, component separation |

In singleton deployments, INFRA components share a single node with PGSQL, ETCD, and other modules. In small-scale deployments, INFRA nodes typically also serve as the Admin Node (or backup admin node) and host the local software repository (/www/pigsty). In larger deployments, these responsibilities can be split across dedicated nodes.
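
A minimal sketch of the singleton case is shown below, with one node carrying the INFRA, ETCD, and PGSQL groups at once; the etcd and pg-meta definitions are illustrative assumptions modeled on a typical single-node layout rather than a complete inventory:

```yaml
infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }                               # INFRA components
etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }  # DCS for HA
pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }                           # single PostgreSQL primary
  vars:  { pg_cluster: pg-meta }
```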


Monitoring Label System

Pigsty’s monitoring system collects metrics from INFRA components themselves. Unlike database modules, each component in the INFRA module is treated as an independent monitoring object, distinguished by the cls (class) label.

| Label | Description | Example |
|-------|-------------|---------|
| cls | Component type, each forming a "class" | nginx |
| ins | Instance name, format {component}-{infra_seq} | nginx-1 |
| ip | INFRA node IP running the component | 10.10.10.10 |
| job | VictoriaMetrics scrape job, fixed as infra | infra |

Using a two-node INFRA deployment (infra_seq: 1 and infra_seq: 2) as an example, the component monitoring labels are:

| Component | cls | ins Example | Port |
|-----------|-----|-------------|------|
| Nginx | nginx | nginx-1, nginx-2 | 9113 |
| Grafana | grafana | grafana-1, grafana-2 | 3000 |
| VictoriaMetrics | vmetrics | vmetrics-1, vmetrics-2 | 8428 |
| VictoriaLogs | vlogs | vlogs-1, vlogs-2 | 9428 |
| VictoriaTraces | vtraces | vtraces-1, vtraces-2 | 10428 |
| VMAlert | vmalert | vmalert-1, vmalert-2 | 8880 |
| Alertmanager | alertmanager | alertmanager-1, alertmanager-2 | 9059 |
| Blackbox | blackbox | blackbox-1, blackbox-2 | 9115 |

All INFRA component metrics use a unified job="infra" label, distinguished by the cls label:

```
nginx_up{cls="nginx", ins="nginx-1", ip="10.10.10.10", job="infra"}
grafana_info{cls="grafana", ins="grafana-1", ip="10.10.10.10", job="infra"}
vm_app_version{cls="vmetrics", ins="vmetrics-1", ip="10.10.10.10", job="infra"}
vlogs_rows_ingested_total{cls="vlogs", ins="vlogs-1", ip="10.10.10.10", job="infra"}
alertmanager_alerts{cls="alertmanager", ins="alertmanager-1", ip="10.10.10.10", job="infra"}
```
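
These label sets are attached at scrape time. The fragment below is a hedged sketch in the standard Prometheus file-based service discovery (file_sd) format, which VictoriaMetrics scraping also understands; the concrete file paths and the way Pigsty generates such targets are assumptions not covered here:

```yaml
# illustrative file_sd-style targets carrying the INFRA labels described above;
# the job="infra" label comes from the scrape job that loads these target files
- labels: { cls: nginx, ins: nginx-1, ip: 10.10.10.10 }
  targets: [ "10.10.10.10:9113" ]
- labels: { cls: grafana, ins: grafana-1, ip: 10.10.10.10 }
  targets: [ "10.10.10.10:3000" ]
```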