
Module: DOCKER

Docker daemon service that enables one-click deployment of containerized stateless software templates and additional functionality.

Docker is the most popular containerization platform, providing standardized software delivery capabilities.

Pigsty does not rely on Docker to deploy any of its components; instead, it provides the ability to deploy and install Docker — this is an optional module.

Pigsty offers a series of Docker software/tool/application templates for you to choose from as needed, letting you quickly spin up various containerized stateless applications with extra functionality. The stateless applications run inside containers, while their state lives in external, highly available database clusters managed by Pigsty.

Pigsty’s Docker module automatically configures accessible registry mirrors for users in mainland China to improve image pulling speed (and availability). You can easily configure Registry and Proxy settings to flexibly access different image sources.

1 - Usage

Docker module quick start guide - installation, removal, download, repository, mirrors, proxy, and image pulling.

Pigsty has built-in Docker support, which you can use to quickly deploy containerized applications.


Getting Started

Docker is an optional module, and in most of Pigsty’s configuration templates, Docker is not enabled by default. Therefore, users need to explicitly download and configure it to use Docker in Pigsty.

For example, in the default meta template, Docker is not downloaded or installed by default. However, in the rich single-node template, Docker is downloaded and installed.

The key difference between these two configurations lies in these two parameters: repo_modules and repo_packages.

repo_modules: infra,node,pgsql,docker  # <--- Enable Docker repository
repo_packages:
  - node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common, docker   # <--- Download Docker

After Docker is downloaded, you need to set the docker_enabled: true flag on the nodes where you want to install Docker, and configure other parameters as needed.

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 ,nodename: infra-1 }
    10.10.10.11: { infra_seq: 2 ,nodename: infra-2 }
  vars:
    docker_enabled: true  # Install Docker on this group!

Finally, use the docker.yml playbook to install it on the nodes:

./docker.yml -l infra    # Install Docker on the infra group
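After the playbook completes, you can sanity-check the installation on a target node with standard Docker CLI commands (not Pigsty-specific):

```shell
docker --version            # print the Docker engine version
docker compose version      # print the Compose plugin version
systemctl is-active docker  # confirm the dockerd service is running
docker info                 # inspect storage driver, cgroup driver, registry mirrors
```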

Installation

If you want to temporarily install Docker directly from the internet on certain nodes, you can use the following command:

./node.yml -e '{"node_repo_modules":"node,docker","node_packages":["docker-ce,docker-compose-plugin"]}' -t node_repo,node_pkg -l <select_group_ip>

This command will first enable the upstream software sources for the node,docker modules on the target nodes, then install the docker-ce and docker-compose-plugin packages (same package names for EL/Debian).

If you want Docker-related packages to be automatically downloaded during Pigsty initialization, refer to the instructions below.


Removal

Removal is simple enough that Pigsty doesn’t provide a dedicated uninstall playbook for the Docker module. You can remove Docker directly with an ad-hoc Ansible command:

ansible <group> -m package -b -a 'name=docker-ce state=absent'  # Remove docker from the target group

This command will uninstall the docker-ce package using the OS package manager.


Download

To download Docker during Pigsty installation, modify the repo_modules parameter in the configuration inventory to enable the Docker software repository, then specify Docker packages to download in the repo_packages or repo_extra_packages parameters.

repo_modules: infra,node,pgsql,docker  # <--- Enable Docker repository
repo_packages:
  - node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common, docker   # <--- Download Docker
repo_extra_packages:
  - pgsql-main docker # <--- Can also be specified here

The docker specified here (which actually corresponds to the docker-ce and docker-compose-plugin packages) will be automatically downloaded to the local repository during the default install.yml process. After downloading, the Docker packages will be available to all nodes via the local repository.

If you’ve already completed Pigsty installation and the local repository is initialized, you can run ./infra.yml -t repo_build after modifying the configuration to re-download and rebuild the offline repository.

Installing Docker requires the Docker YUM/APT repository, which is included by default in Pigsty but not enabled. You need to add docker to repo_modules to enable it before installation.


Repository

Downloading Docker requires upstream internet software repositories, which are defined in the default repo_upstream with module name docker:

- { name: docker-ce ,description: 'Docker CE' ,module: docker  ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable'    ,china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable'  ,europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
- { name: docker-ce ,description: 'Docker CE' ,module: docker  ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/${distro_name} ${distro_codename} stable' }}

You can reference this repository using the docker module name in the repo_modules and node_repo_modules parameters.

Note that Docker’s official software repository is blocked by default in mainland China. You need to use mirror sites in China to complete the download.

If you’re in mainland China and encounter Docker download failures, check whether region is still set to default in your configuration inventory; configure normally detects this and sets region: china automatically, which resolves the issue.


Proxy

If your network environment requires a proxy server to access the internet, you can configure the proxy_env parameter in Pigsty’s configuration inventory. This parameter will be written into the proxy-related settings of Docker’s configuration file.

proxy_env:
  no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.aliyuncs.com,mirrors.tuna.tsinghua.edu.cn,mirrors.zju.edu.cn"
  #http_proxy: 'http://username:password@proxy.address'
  #https_proxy: 'http://username:password@proxy.address'
  #all_proxy: 'http://username:password@proxy.address'
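With such settings in place, the proxy fragment rendered into /etc/docker/daemon.json might look roughly like this (a sketch using a hypothetical proxy address; Docker 23+ accepts daemon proxies via the proxies key, and the exact layout is determined by Pigsty’s templates):

```json
{
  "proxies": {
    "http-proxy": "http://username:password@proxy.address",
    "https-proxy": "http://username:password@proxy.address",
    "no-proxy": "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16"
  }
}
```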

When running configure with the -x parameter, the proxy server configuration from your current environment will be automatically generated into Pigsty’s configuration file under proxy_env.

In addition to using a proxy server, you can also configure Docker Registry Mirrors to bypass blocks.


Registry Mirrors

You can use the docker_registry_mirrors parameter to specify Docker registry mirrors.

For users outside the firewall, besides the official Docker Hub, you can also consider the quay.io registry. If your internal network already has mature image infrastructure, you can use your own internal registry mirrors to avoid depending on external mirror sites and to improve download speeds.

Users of public cloud providers can consider using free internal Docker mirrors. For example, if you’re using Alibaba Cloud, you can use Alibaba Cloud’s internal Docker mirror site (requires login):

["https://registry.cn-hangzhou.aliyuncs.com"]   # Alibaba Cloud mirror, requires explicit login

If you’re using Tencent Cloud, you can use Tencent Cloud’s internal Docker mirror site (requires internal network):

["https://ccr.ccs.tencentyun.com"]   # Tencent Cloud mirror, internal network only
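For example, to apply a mirror to a group of nodes, set the parameter in your configuration inventory (the DaoCloud mirror here is just one illustrative choice):

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
  vars:
    docker_enabled: true
    docker_registry_mirrors: ["https://docker.m.daocloud.io"]  # pull through a registry mirror
```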

Additionally, you can use CF-Workers-docker.io to quickly set up your own Docker image proxy. You can also consider using free Docker proxy mirrors (use at your own risk!)


Pulling Images

The docker_image and docker_image_cache parameters can be used to directly specify a list of images to pull during Docker installation.

Using this feature, Docker will come with the specified images after installation (provided they can be successfully pulled; this task will be automatically ignored and skipped on failure).

For example, you can specify images to pull in the configuration inventory:

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
  vars:
    docker_enabled: true  # Install Docker on this group!
    docker_image:
      - redis:latest      # Pull the latest Redis image

Another way to preload images is to use locally saved tgz archives: if you’ve previously exported Docker images with docker save xxx | gzip -c > /tmp/docker/xxx.tgz, these exported image files can be automatically loaded via the glob specified by the docker_image_cache parameter, which defaults to /tmp/docker/*.tgz.

This means you can place images in the /tmp/docker directory beforehand, and after running docker.yml to install Docker, these image packages will be automatically loaded.
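For example, on a machine that has Docker and internet access, you could export images ahead of time (the image names here are just illustrative):

```shell
mkdir -p /tmp/docker                                        # default docker_image_cache location
docker save redis:latest | gzip -c > /tmp/docker/redis.tgz  # export and compress an image
docker save nginx:latest | gzip -c > /tmp/docker/nginx.tgz
# copy /tmp/docker/*.tgz to the target node, then run ./docker.yml to auto-load them
```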

For example, in the self-hosted Supabase tutorial, this technique is used. Before spinning up Supabase and installing Docker, the *.tgz image archives from the local /tmp/supabase directory are copied to the target node’s /tmp/docker directory.

- name: copy local docker images
  copy: src="{{ item }}" dest="/tmp/docker/"
  with_fileglob: "{{ supa_images }}"
  vars: # you can override this with -e cli args
    supa_images: /tmp/supabase/*.tgz

Applications

Pigsty provides a series of ready-to-use, Docker Compose-based software templates, which you can use to spin up business software that uses external Pigsty-managed database clusters.




2 - Parameters

DOCKER module provides 8 configuration parameters

Parameter Overview

The DOCKER parameter group is used for Docker container engine deployment and configuration, including enable switch, data directory, storage driver, registry mirrors, and monitoring.

| Parameter | Type | Level | Description |
|---|---|---|---|
| docker_enabled | bool | G/C/I | Enable Docker on current node? disabled by default |
| docker_data | path | G/C/I | Docker data directory, /data/docker by default |
| docker_storage_driver | enum | G/C/I | Docker storage driver, overlay2 by default |
| docker_cgroups_driver | enum | G/C/I | Docker cgroup driver: cgroupfs or systemd |
| docker_registry_mirrors | string[] | G/C/I | Docker registry mirror list |
| docker_exporter_port | port | G | Docker metrics exporter port, 9323 by default |
| docker_image | string[] | G/C/I | Docker images to pull, empty list by default |
| docker_image_cache | path | G/C/I | Docker image cache tarball path, /tmp/docker/*.tgz |

You can use the docker.yml playbook to install and enable Docker on nodes.

Default parameters are defined in roles/docker/defaults/main.yml

docker_enabled: false             # Enable Docker on current node?
docker_data: /data/docker         # Docker data directory, /data/docker by default
docker_storage_driver: overlay2   # Docker storage driver, overlay2/zfs/btrfs...
docker_cgroups_driver: systemd    # Docker cgroup driver: cgroupfs or systemd
docker_registry_mirrors: []       # Docker registry mirror list
docker_exporter_port: 9323        # Docker metrics exporter port, 9323 by default
docker_image: []                  # Docker images to pull after startup
docker_image_cache: /tmp/docker/*.tgz # Docker image cache tarball glob pattern

docker_enabled

Parameter: docker_enabled, Type: bool, Level: G/C/I

Enable Docker on current node? Default: false, meaning Docker is not enabled.

docker_data

Parameter: docker_data, Type: path, Level: G/C/I

Docker data directory, default is /data/docker.

This directory stores Docker images, containers, volumes, and other data. If you have a dedicated data disk, it’s recommended to point this directory to that disk’s mount point.
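For example, assuming a dedicated data disk is mounted at /data1 (a hypothetical mount point), you could point Docker at it:

```yaml
docker_data: /data1/docker   # store images, containers, and volumes on the dedicated disk
```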

docker_storage_driver

Parameter: docker_storage_driver, Type: enum, Level: G/C/I

Docker storage driver, default is overlay2.

See official documentation: https://docs.docker.com/engine/storage/drivers/select-storage-driver/

Available storage drivers include:

  • overlay2: Recommended default driver, suitable for most scenarios
  • fuse-overlayfs: For rootless container scenarios
  • btrfs: When using Btrfs filesystem
  • zfs: When using ZFS filesystem
  • vfs: For testing purposes, not recommended for production

docker_cgroups_driver

Parameter: docker_cgroups_driver, Type: enum, Level: G/C/I

Docker cgroup filesystem driver, can be cgroupfs or systemd, default: systemd

docker_registry_mirrors

Parameter: docker_registry_mirrors, Type: string[], Level: G/C/I

Docker registry mirror list, default: [] empty array.

You can use Docker mirror sites to accelerate image pulls. Here are some examples:

["https://docker.m.daocloud.io"]                # DaoCloud mirror
["https://docker.1ms.run"]                      # 1ms mirror
["https://mirror.ccs.tencentyun.com"]           # Tencent Cloud internal mirror
["https://registry.cn-hangzhou.aliyuncs.com"]   # Alibaba Cloud mirror (requires login)

You can also consider using a Cloudflare Worker to set up a Docker Proxy for faster access.

If pull speeds are still too slow, consider using alternative registries: docker login quay.io

docker_exporter_port

Parameter: docker_exporter_port, Type: port, Level: G

Docker metrics exporter port, default is 9323.

The Docker daemon exposes Prometheus-format monitoring metrics on this port for collection by monitoring infrastructure.

docker_image

Parameter: docker_image, Type: string[], Level: G/C/I

List of Docker images to pull, default is empty list [].

Docker image names specified here will be automatically pulled during the installation phase.

docker_image_cache

Parameter: docker_image_cache, Type: path, Level: G/C/I

Local Docker image cache tarball glob pattern, default is /tmp/docker/*.tgz.

You can use docker save | gzip to package images and automatically import them during Docker installation via this parameter.

.tgz tarball files matching this pattern will be imported into Docker one by one, roughly like this:

for f in /tmp/docker/*.tgz; do gzip -d -c "$f" | docker load; done
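The mechanics can be demonstrated without Docker itself, since docker load consumes exactly this kind of gzip’d tar stream; here a plain tar archive and a scratch directory (/tmp/docker-demo, a hypothetical path) stand in for a real image dump:

```shell
# Stand-in demo: build a gzip'd tarball, then walk the cache glob the same way.
mkdir -p /tmp/docker-demo
echo hello > /tmp/docker-demo/layer.txt
tar -cf - -C /tmp/docker-demo layer.txt | gzip -c > /tmp/docker-demo/img.tgz
for f in /tmp/docker-demo/*.tgz; do
  gzip -d -c "$f" | tar -tf -     # a real run would pipe into `docker load` instead
done
```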

3 - Playbooks

How to use the built-in Ansible playbook to manage Docker and quick reference for common management commands.

The Docker module provides a default playbook docker.yml for installing Docker Daemon and Docker Compose.


docker.yml

Playbook source file: docker.yml

Running this playbook will install docker-ce and docker-compose-plugin on target nodes with the docker_enabled: true flag, and enable the dockerd service.

The following are the available task subsets in the docker.yml playbook:

  • docker_install : Install Docker and Docker Compose packages on the node
  • docker_admin : Add specified users to the Docker admin user group
  • docker_alias : Generate Docker command completion and alias scripts
  • docker_dir : Create Docker related directories
  • docker_config : Generate Docker daemon service configuration file
  • docker_launch : Start the Docker daemon service
  • docker_register : Register Docker daemon as a Prometheus monitoring target
  • docker_image : Attempt to load pre-cached image tarballs from /tmp/docker/*.tgz (if they exist)
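For example, to re-render the daemon configuration and restart the service on the infra group, you could run just those task subsets:

```shell
./docker.yml -l infra -t docker_config,docker_launch   # regenerate config, restart dockerd
```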

The Docker module does not provide a dedicated uninstall playbook. If you need to uninstall Docker, you can manually stop Docker and then remove it:

systemctl stop docker                        # Stop Docker daemon service
yum remove docker-ce docker-compose-plugin   # Uninstall Docker on EL systems
apt remove docker-ce docker-compose-plugin   # Uninstall Docker on Debian systems



4 - Metrics

Complete list of monitoring metrics provided by the Pigsty Docker module

The DOCKER module contains 123 available monitoring metrics.

| Metric Name | Type | Labels | Description |
|---|---|---|---|
| builder_builds_failed_total | counter | ip, cls, reason, ins, job, instance | Number of failed image builds |
| builder_builds_triggered_total | counter | ip, cls, ins, job, instance | Number of triggered image builds |
| docker_up | Unknown | ip, cls, ins, job, instance | N/A |
| engine_daemon_container_actions_seconds_bucket | Unknown | ip, cls, ins, job, instance, le, action | N/A |
| engine_daemon_container_actions_seconds_count | Unknown | ip, cls, ins, job, instance, action | N/A |
| engine_daemon_container_actions_seconds_sum | Unknown | ip, cls, ins, job, instance, action | N/A |
| engine_daemon_container_states_containers | gauge | ip, cls, ins, job, instance, state | The count of containers in various states |
| engine_daemon_engine_cpus_cpus | gauge | ip, cls, ins, job, instance | The number of cpus that the host system of the engine has |
| engine_daemon_engine_info | gauge | ip, cls, architecture, ins, job, instance, os_version, kernel, version, graphdriver, os, daemon_id, commit, os_type | The information related to the engine and the OS it is running on |
| engine_daemon_engine_memory_bytes | gauge | ip, cls, ins, job, instance | The number of bytes of memory that the host system of the engine has |
| engine_daemon_events_subscribers_total | gauge | ip, cls, ins, job, instance | The number of current subscribers to events |
| engine_daemon_events_total | counter | ip, cls, ins, job, instance | The number of events logged |
| engine_daemon_health_checks_failed_total | counter | ip, cls, ins, job, instance | The total number of failed health checks |
| engine_daemon_health_check_start_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| engine_daemon_health_check_start_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| engine_daemon_health_check_start_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| engine_daemon_health_checks_total | counter | ip, cls, ins, job, instance | The total number of health checks |
| engine_daemon_host_info_functions_seconds_bucket | Unknown | ip, cls, ins, job, instance, le, function | N/A |
| engine_daemon_host_info_functions_seconds_count | Unknown | ip, cls, ins, job, instance, function | N/A |
| engine_daemon_host_info_functions_seconds_sum | Unknown | ip, cls, ins, job, instance, function | N/A |
| engine_daemon_image_actions_seconds_bucket | Unknown | ip, cls, ins, job, instance, le, action | N/A |
| engine_daemon_image_actions_seconds_count | Unknown | ip, cls, ins, job, instance, action | N/A |
| engine_daemon_image_actions_seconds_sum | Unknown | ip, cls, ins, job, instance, action | N/A |
| engine_daemon_network_actions_seconds_bucket | Unknown | ip, cls, ins, job, instance, le, action | N/A |
| engine_daemon_network_actions_seconds_count | Unknown | ip, cls, ins, job, instance, action | N/A |
| engine_daemon_network_actions_seconds_sum | Unknown | ip, cls, ins, job, instance, action | N/A |
| etcd_debugging_snap_save_marshalling_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_debugging_snap_save_marshalling_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_debugging_snap_save_marshalling_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_debugging_snap_save_total_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_debugging_snap_save_total_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_debugging_snap_save_total_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_disk_wal_fsync_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_disk_wal_fsync_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_disk_wal_fsync_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_disk_wal_write_bytes_total | gauge | ip, cls, ins, job, instance | Total number of bytes written in WAL. |
| etcd_snap_db_fsync_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_snap_db_fsync_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_snap_db_fsync_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_snap_db_save_total_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_snap_db_save_total_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_snap_db_save_total_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_snap_fsync_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| etcd_snap_fsync_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| etcd_snap_fsync_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| go_gc_duration_seconds | summary | ip, cls, ins, job, instance, quantile | A summary of the pause duration of garbage collection cycles. |
| go_gc_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| go_gc_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| go_goroutines | gauge | ip, cls, ins, job, instance | Number of goroutines that currently exist. |
| go_info | gauge | ip, cls, ins, job, version, instance | Information about the Go environment. |
| go_memstats_alloc_bytes | counter | ip, cls, ins, job, instance | Total number of bytes allocated, even if freed. |
| go_memstats_alloc_bytes_total | counter | ip, cls, ins, job, instance | Total number of bytes allocated, even if freed. |
| go_memstats_buck_hash_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes used by the profiling bucket hash table. |
| go_memstats_frees_total | counter | ip, cls, ins, job, instance | Total number of frees. |
| go_memstats_gc_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes used for garbage collection system metadata. |
| go_memstats_heap_alloc_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes allocated and still in use. |
| go_memstats_heap_idle_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes waiting to be used. |
| go_memstats_heap_inuse_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes that are in use. |
| go_memstats_heap_objects | gauge | ip, cls, ins, job, instance | Number of allocated objects. |
| go_memstats_heap_released_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes released to OS. |
| go_memstats_heap_sys_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes obtained from system. |
| go_memstats_last_gc_time_seconds | gauge | ip, cls, ins, job, instance | Number of seconds since 1970 of last garbage collection. |
| go_memstats_lookups_total | counter | ip, cls, ins, job, instance | Total number of pointer lookups. |
| go_memstats_mallocs_total | counter | ip, cls, ins, job, instance | Total number of mallocs. |
| go_memstats_mcache_inuse_bytes | gauge | ip, cls, ins, job, instance | Number of bytes in use by mcache structures. |
| go_memstats_mcache_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes used for mcache structures obtained from system. |
| go_memstats_mspan_inuse_bytes | gauge | ip, cls, ins, job, instance | Number of bytes in use by mspan structures. |
| go_memstats_mspan_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes used for mspan structures obtained from system. |
| go_memstats_next_gc_bytes | gauge | ip, cls, ins, job, instance | Number of heap bytes when next garbage collection will take place. |
| go_memstats_other_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes used for other system allocations. |
| go_memstats_stack_inuse_bytes | gauge | ip, cls, ins, job, instance | Number of bytes in use by the stack allocator. |
| go_memstats_stack_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes obtained from system for stack allocator. |
| go_memstats_sys_bytes | gauge | ip, cls, ins, job, instance | Number of bytes obtained from system. |
| go_threads | gauge | ip, cls, ins, job, instance | Number of OS threads created. |
| logger_log_entries_size_greater_than_buffer_total | counter | ip, cls, ins, job, instance | Number of log entries which are larger than the log buffer |
| logger_log_read_operations_failed_total | counter | ip, cls, ins, job, instance | Number of log reads from container stdio that failed |
| logger_log_write_operations_failed_total | counter | ip, cls, ins, job, instance | Number of log write operations that failed |
| process_cpu_seconds_total | counter | ip, cls, ins, job, instance | Total user and system CPU time spent in seconds. |
| process_max_fds | gauge | ip, cls, ins, job, instance | Maximum number of open file descriptors. |
| process_open_fds | gauge | ip, cls, ins, job, instance | Number of open file descriptors. |
| process_resident_memory_bytes | gauge | ip, cls, ins, job, instance | Resident memory size in bytes. |
| process_start_time_seconds | gauge | ip, cls, ins, job, instance | Start time of the process since unix epoch in seconds. |
| process_virtual_memory_bytes | gauge | ip, cls, ins, job, instance | Virtual memory size in bytes. |
| process_virtual_memory_max_bytes | gauge | ip, cls, ins, job, instance | Maximum amount of virtual memory available in bytes. |
| promhttp_metric_handler_requests_in_flight | gauge | ip, cls, ins, job, instance | Current number of scrapes being served. |
| promhttp_metric_handler_requests_total | counter | ip, cls, ins, job, instance, code | Total number of scrapes by HTTP status code. |
| scrape_duration_seconds | Unknown | ip, cls, ins, job, instance | N/A |
| scrape_samples_post_metric_relabeling | Unknown | ip, cls, ins, job, instance | N/A |
| scrape_samples_scraped | Unknown | ip, cls, ins, job, instance | N/A |
| scrape_series_added | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_dispatcher_scheduling_delay_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_dispatcher_scheduling_delay_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_dispatcher_scheduling_delay_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_manager_configs_total | gauge | ip, cls, ins, job, instance | The number of configs in the cluster object store |
| swarm_manager_leader | gauge | ip, cls, ins, job, instance | Indicates if this manager node is a leader |
| swarm_manager_networks_total | gauge | ip, cls, ins, job, instance | The number of networks in the cluster object store |
| swarm_manager_nodes | gauge | ip, cls, ins, job, instance, state | The number of nodes |
| swarm_manager_secrets_total | gauge | ip, cls, ins, job, instance | The number of secrets in the cluster object store |
| swarm_manager_services_total | gauge | ip, cls, ins, job, instance | The number of services in the cluster object store |
| swarm_manager_tasks_total | gauge | ip, cls, ins, job, instance, state | The number of tasks in the cluster object store |
| swarm_node_manager | gauge | ip, cls, ins, job, instance | Whether this node is a manager or not |
| swarm_raft_snapshot_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_raft_snapshot_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_raft_snapshot_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_raft_transaction_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_raft_transaction_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_raft_transaction_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_batch_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_store_batch_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_batch_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_lookup_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_store_lookup_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_lookup_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_memory_store_lock_duration_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_store_memory_store_lock_duration_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_memory_store_lock_duration_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_read_tx_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_store_read_tx_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_read_tx_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_write_tx_latency_seconds_bucket | Unknown | ip, cls, ins, job, instance, le | N/A |
| swarm_store_write_tx_latency_seconds_count | Unknown | ip, cls, ins, job, instance | N/A |
| swarm_store_write_tx_latency_seconds_sum | Unknown | ip, cls, ins, job, instance | N/A |
| up | Unknown | ip, cls, ins, job, instance | N/A |

5 - FAQ

Frequently asked questions about the Pigsty Docker module

Who Can Run Docker Commands?

By default, Pigsty adds both the admin user running the playbook on the remote node (i.e., the SSH login user on the target node) and the admin user specified by the node_admin_username parameter to the docker OS group. Any user in this group can manage Docker via the docker CLI.

If you want other users to be able to run Docker commands, add that OS user to the docker group:

usermod -aG docker <username>

Working Through a Proxy

During Docker installation, if the proxy_env parameter exists, the HTTP proxy server configuration will be written to the /etc/docker/daemon.json configuration file.

Docker will use this proxy server when pulling images from upstream registries.

Tip: Running configure with the -x flag will write the proxy server configuration from your current environment into proxy_env.


Using Mirror Registries

If you’re in mainland China and affected by the Great Firewall, you can consider pulling from alternative registries such as quay.io:

docker login quay.io    # Enter username and password to log in

Update (June 2024): All previously accessible Docker mirror sites in China have been blocked. Please use a proxy server to access and pull images.


Adding Docker to Monitoring

During Docker module installation, you can register Docker as a monitoring target by running the docker_register or register_prometheus subtask for specific nodes:

./docker.yml -l <your-node-selector> -t register_prometheus

Using Software Templates

Pigsty provides a collection of software templates that can be launched using Docker Compose, ready to use out of the box.

But you need to install the Docker module first.