polar
PolarDB for PostgreSQL kernel, providing Aurora-style storage-compute separation
The polar configuration template uses Alibaba Cloud's PolarDB for PostgreSQL kernel in place of vanilla PostgreSQL, providing cloud-native, Aurora-style storage-compute separation.
For the complete tutorial, see the PolarDB for PostgreSQL (POLAR) Kernel Guide: https://doc.pgsty.com/pgsql/kernel/polardb
Overview
- Config Name: polar
- Node Count: Single node
- Description: Uses PolarDB for PostgreSQL kernel
- OS Distro: el8, el9, el10, d12, d13, u22, u24
- OS Arch: x86_64
- Related: meta
Usage:

```bash
./configure -c polar [-i <primary_ip>]
```
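The template header below documents the full single-node bootstrap; expanded here as a sketch (the ~/pigsty install path is the assumed default):

```bash
curl https://repo.pigsty.io/get | bash   # download and install pigsty (to ~/pigsty by default)
cd ~/pigsty                              # assumed default install path
./configure -c polar                     # generate pigsty.yml from the polar template
./deploy.yml                             # run the deployment playbook
```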
Content
Source: pigsty/conf/polar.yml

```yaml
---
#==============================================================#
# File : polar.yml
# Desc : Pigsty 1-node PolarDB Kernel Config Template
# Ctime : 2020-08-05
# Mtime : 2025-12-28
# Docs : https://doc.pgsty.com/config
# License : Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright : 2018-2026 Ruohang Feng / Vonng ([email protected])
#==============================================================#
# This is the config template for PolarDB PG Kernel,
# which is a PostgreSQL 15 fork with RAC-flavored features
# tutorial: https://doc.pgsty.com/pgsql/kernel/polardb
#
# Usage:
# curl https://repo.pigsty.io/get | bash
# ./configure -c polar
# ./deploy.yml
all:
  children:

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq:  1 }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PolarDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1 AM every day

        # PolarDB ad hoc settings
        pg_version: 15                         # PolarDB PG is based on PG 15
        pg_mode: polar                         # PolarDB PG compatible mode
        pg_packages: [ polardb, pgsql-common ] # replace the PG kernel with the PolarDB kernel
        pg_exporter_exclude_database: 'template0,template1,postgres,polardb_admin'
        pg_default_roles:                      # PolarDB requires the replicator user to be a superuser
          - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access }
          - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
          - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
          - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
          - { name: postgres         ,superuser: true ,comment: system superuser }
          - { name: replicator       ,superuser: true ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator } # <- superuser is required for replication
          - { name: dbuser_dba       ,superuser: true ,roles: [dbrole_admin] ,pgbouncer: true ,pool_mode: session ,pool_connlimit: 16 ,comment: pgsql admin user }
          - { name: dbuser_monitor   ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000} ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

  vars:                                        # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                            # pigsty version string
    admin_ip: 10.10.10.10                      # admin node ip address
    region: default                            # upstream mirror region: default,china,europe
    infra_portal:                              # infra services exposed via portal
      home : { domain: i.pigsty }              # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false                  # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql        # add these repos directly to the singleton node
    node_tune: oltp                            # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 15                             # PolarDB is compatible with PG 15
    pg_conf: oltp.yml                          # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...
```

Explanation
The polar template uses Alibaba Cloud’s open-source PolarDB for PostgreSQL kernel, providing cloud-native database capabilities.
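After deployment, you can verify that the PolarDB kernel is actually serving queries by connecting with the credentials defined in this template; a minimal check (the exact version banner depends on the PolarDB build):

```bash
# Connect to the meta database with the admin user declared in pg_users
PGPASSWORD=DBUser.Meta psql -h 10.10.10.10 -U dbuser_meta -d meta -c 'SELECT version();'
# Expect a PostgreSQL 15.x banner from the PolarDB kernel rather than vanilla PostgreSQL
```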
Key Features:
- Storage-compute separation: compute and storage nodes scale independently
- One-write-multiple-read: read replicas can be scaled out in seconds
- PostgreSQL ecosystem compatibility: maintains SQL compatibility
- Shared-storage support, well suited to cloud environment deployments
Use Cases:
- Cloud-native scenarios requiring a storage-compute separation architecture
- Read-heavy, write-light workloads
- Scenarios requiring rapid scaling of read replicas (see the sketch after this list)
- Test environments for evaluating PolarDB features
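To scale out read replicas, the standard Pigsty workflow should apply: add replica entries to the pg-meta inventory and initialize them with the PGSQL playbook. A hypothetical sketch, assuming a new node at 10.10.10.11 and the stock bin/pgsql-add helper:

```bash
# 1. Add the new replica to the pg-meta group in pigsty.yml (hypothetical IP):
#      pg-meta:
#        hosts:
#          10.10.10.10: { pg_seq: 1, pg_role: primary }
#          10.10.10.11: { pg_seq: 2, pg_role: replica }   # <- new read-only instance
# 2. Initialize the new node and instance:
bin/pgsql-add pg-meta 10.10.10.11   # assumed helper; equivalent to running ./pgsql.yml -l 10.10.10.11
```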
Notes:
- PolarDB is based on PostgreSQL 15 and does not support features from later major versions
- The replication user must be a superuser, unlike in vanilla PostgreSQL (verified below)
- Some PostgreSQL extensions may have compatibility issues
- The ARM64 architecture is not supported
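Because the superuser requirement on the replication role is the main behavioral difference from vanilla PostgreSQL, it is worth confirming after deployment. A quick catalog check, assuming the template's default dbuser_dba credentials:

```bash
PGPASSWORD=DBUser.DBA psql -h 10.10.10.10 -U dbuser_dba -d postgres -Atc \
  "SELECT rolname, rolsuper, rolreplication FROM pg_roles WHERE rolname = 'replicator'"
# Expected output: replicator|t|t  (superuser and replication both enabled)
```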