Administration
Common etcd admin SOPs:
- Create Cluster: Initialize an etcd cluster
- Destroy Cluster: Destroy an etcd cluster
- CLI Environment: Configure etcd client to access server cluster
- RBAC Authentication: Use etcd RBAC auth
- Reload Config: Update etcd server member list for clients
- Add Member: Add new member to existing etcd cluster
- Remove Member: Remove member from etcd cluster
- Utility Scripts: Simplify ops with bin/etcd-add and bin/etcd-rm
For more, refer to FAQ: ETCD.
Create Cluster
Define etcd cluster in config inventory:
etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars: { etcd_cluster: etcd }
Run etcd.yml playbook:
./etcd.yml # initialize etcd cluster
Since v3.6, etcd.yml handles cluster installation and member addition only; it no longer includes removal logic. Use the dedicated etcd-rm.yml playbook for all removals.
For prod etcd clusters, enable the etcd_safeguard parameter to prevent accidental deletion.
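For example, a minimal sketch of enabling the safeguard (parameter name from this doc; placing it in cluster vars follows the inventory layout shown above):

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
  vars:
    etcd_cluster: etcd
    etcd_safeguard: true   # removal playbooks against this cluster will abort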
Destroy Cluster
Use dedicated etcd-rm.yml playbook to destroy etcd cluster. Use caution!
./etcd-rm.yml # remove entire etcd cluster
./etcd-rm.yml -e etcd_safeguard=false # override safeguard
Or use utility script:
bin/etcd-rm # remove entire etcd cluster
Removal playbook respects etcd_safeguard. If true, playbook aborts to prevent accidental deletion.
Before removing etcd cluster, ensure no PG clusters use it as DCS. PG HA will break otherwise.
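A quick sanity check, assuming the default Patroni namespace /pg/ (adjust if you changed it): list keys under the namespace; any output means a PG cluster still uses this etcd as DCS:

etcdctl get /pg/ --prefix --keys-only | head   # any keys here mean Patroni still depends on this cluster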
CLI Environment
Uses etcd v3 API by default (v2 removed in v3.6+). Pigsty auto-configures env script /etc/profile.d/etcdctl.sh on etcd nodes, loaded on login.
Example client env config:
alias e="etcdctl"
alias em="etcdctl member"
export ETCDCTL_ENDPOINTS=https://10.10.10.10:2379
export ETCDCTL_CACERT=/etc/etcd/ca.crt
export ETCDCTL_CERT=/etc/etcd/server.crt
export ETCDCTL_KEY=/etc/etcd/server.key
v4.0 enables RBAC auth by default, so user authentication is required:
export ETCDCTL_USER="root:$(cat /etc/etcd/etcd.pass)"
After configuring client env, run etcd CRUD ops:
e put a 10 ; e get a; e del a # basic KV ops
e member list # list cluster members
e endpoint health # check endpoint health
e endpoint status # view endpoint status
RBAC Authentication
v4.0 enables etcd RBAC auth by default. During cluster init, etcd_auth task auto-creates root user and enables auth.
Root user password set by etcd_root_password, default: Etcd.Root. Stored in /etc/etcd/etcd.pass with 0640 perms (root-owned, etcd-group readable).
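You can verify the password file's ownership and mode directly (path and permissions from this doc):

ls -l /etc/etcd/etcd.pass    # expect -rw-r----- root:etcd (0640)
sudo cat /etc/etcd/etcd.pass # print the current root password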
Strongly recommended to change default password in prod:
etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd
    etcd_root_password: 'YourSecurePassword' # change default
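One way to apply a changed password is to re-run the etcd_auth task mentioned above; this assumes the task can be re-run safely, so verify against your Pigsty version:

./etcd.yml -t etcd_auth   # assumption: re-applies root credentials and auth settings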
Client auth methods:
# Method 1: env vars (recommended, auto-configured in /etc/profile.d/etcdctl.sh)
export ETCDCTL_USER="root:$(cat /etc/etcd/etcd.pass)"
# Method 2: command line
etcdctl --user root:YourSecurePassword member list
Patroni and etcd auth:
Patroni uses pg_etcd_password to configure etcd connection password. If empty, Patroni uses cluster name as password (not recommended). Configure separate etcd password per PG cluster in prod.
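A hypothetical per-cluster sketch (pg_etcd_password is the parameter named above; the cluster layout is illustrative):

pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-meta
    pg_etcd_password: 'DistinctEtcdPassword'   # dedicated etcd password for this PG cluster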
Reload Config
If etcd cluster membership changes (add/remove members), refresh etcd service endpoint references. These etcd refs in Pigsty need updates:
| Config Location | Config File | Update Method |
|---|---|---|
| etcd member config | /etc/etcd/etcd.conf | ./etcd.yml -t etcd_conf |
| etcdctl env vars | /etc/profile.d/etcdctl.sh | ./etcd.yml -t etcd_config |
| Patroni DCS config | /pg/bin/patroni.yml | ./pgsql.yml -t pg_conf |
| VIP-Manager config | /etc/default/vip-manager | ./pgsql.yml -t pg_vip_config |
Refresh etcd member config:
./etcd.yml -t etcd_conf # refresh /etc/etcd/etcd.conf
ansible etcd -f 1 -b -a 'systemctl restart etcd' # optional: restart etcd instances
Refresh etcdctl client env:
./etcd.yml -t etcd_config # refresh /etc/profile.d/etcdctl.sh
Update Patroni DCS endpoint config:
./pgsql.yml -t pg_conf # regenerate patroni config
ansible all -f 1 -b -a 'systemctl reload patroni' # reload patroni config
Update VIP-Manager endpoint config (only for PGSQL L2 VIP):
./pgsql.yml -t pg_vip_config # regenerate vip-manager config
ansible all -f 1 -b -a 'systemctl restart vip-manager' # restart vip-manager
If you use the bin/etcd-add / bin/etcd-rm utility scripts, they print the config refresh commands after completion.
Add Member
ETCD Reference: Add a member
Recommended: Utility Script
Use bin/etcd-add script to add new members to existing etcd cluster:
# First add new member definition to config inventory, then:
bin/etcd-add <ip> # add single new member
bin/etcd-add <ip1> <ip2> ... # add multiple new members
Script auto-performs:
- Validates IP addresses
- Executes the etcd.yml playbook (auto-sets etcd_init=existing)
- Provides safety warnings and countdown
- Prompts config refresh commands after completion
Manual: Step-by-Step
Add new member to existing etcd cluster:
- Update config inventory: add the new instance to the etcd group
- Notify cluster: run etcdctl member add (optional; the playbook does this automatically)
- Initialize new member: run the playbook with the etcd_init=existing parameter
- Promote member: promote the learner to a full member (optional; required when using etcd_learner=true)
- Reload config: update etcd endpoint references for all clients
# After config inventory update, initialize new member
./etcd.yml -l <new_ins_ip> -e etcd_init=existing
# If using learner mode, manually promote
etcdctl member promote <new_ins_server_id>
When adding new members, you must use the etcd_init=existing parameter; otherwise the new instance will create a new standalone cluster instead of joining the existing one.
Detailed: Add member to etcd cluster
Detailed steps, starting from a single-instance etcd cluster:
etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 } # <--- only existing instance in cluster
    10.10.10.11: { etcd_seq: 2 } # <--- add this new member to inventory
  vars: { etcd_cluster: etcd }
Add new member using utility script (recommended):
$ bin/etcd-add 10.10.10.11
Or do it manually. First, use etcdctl member add to announce the new learner instance etcd-2 to the existing etcd cluster:
$ etcdctl member add etcd-2 --learner=true --peer-urls=https://10.10.10.11:2380
Member 33631ba6ced84cf8 added to cluster 6646fbcf5debc68f
ETCD_NAME="etcd-2"
ETCD_INITIAL_CLUSTER="etcd-2=https://10.10.10.11:2380,etcd-1=https://10.10.10.10:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.10.11:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Check the member list with etcdctl member list (or em list); the new member shows as unstarted:
33631ba6ced84cf8, unstarted, , https://10.10.10.11:2380, , true # unstarted new member here
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false
Next, use the etcd.yml playbook to initialize the new etcd instance etcd-2. After completion, the new member is started:
$ ./etcd.yml -l 10.10.10.11 -e etcd_init=existing # must add existing parameter
...
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, true
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false
Once the new member is initialized and running stably, promote it from learner to follower:
$ etcdctl member promote 33631ba6ced84cf8 # promote learner to follower
Member 33631ba6ced84cf8 promoted in cluster 6646fbcf5debc68f
$ em list # check again, new member promoted to full member
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, false
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false
The new member is now added. Don't forget to reload config so all clients know about the new member.
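As a recap of the Reload Config section above, the typical refresh sequence looks like:

./etcd.yml -t etcd_conf,etcd_config               # refresh member config and etcdctl env
./pgsql.yml -t pg_conf                            # regenerate patroni config
ansible all -f 1 -b -a 'systemctl reload patroni' # reload patroni config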
Repeat steps to add more members. Prod environments need at least 3 members.
Remove Member
Recommended: Utility Script
Use bin/etcd-rm script to remove members from etcd cluster:
bin/etcd-rm <ip> # remove specified member
bin/etcd-rm <ip1> <ip2> ... # remove multiple members
bin/etcd-rm # remove entire etcd cluster
Script auto-performs:
- Gracefully removes members from cluster
- Stops and disables etcd service
- Cleans up data and config files
- Deregisters from monitoring system
Manual: Step-by-Step
Remove member instance from etcd cluster:
- Remove from config inventory: comment out or delete the instance, and reload config
- Kick from cluster: use the etcdctl member remove command
- Clean up instance: use the etcd-rm.yml playbook to clean up
# Use dedicated removal playbook (recommended)
./etcd-rm.yml -l <ip>
# Or manual
etcdctl member remove <server_id> # kick from cluster
./etcd-rm.yml -l <ip> # clean up instance
Detailed: Remove member from etcd cluster
Example: 3-node etcd cluster, remove instance 3.
Method 1: Utility script (recommended)
$ bin/etcd-rm 10.10.10.12
Script auto-completes all operations: remove from cluster, stop service, clean up data.
Method 2: Manual
First, update the config inventory by commenting out the member to delete, then reload config so all clients stop using this instance.
etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    # 10.10.10.12: { etcd_seq: 3 } # <---- comment out this member
  vars: { etcd_cluster: etcd }
Then use removal playbook:
$ ./etcd-rm.yml -l 10.10.10.12
Playbook auto-executes:
- Gets the member list and finds the corresponding member ID
- Executes etcdctl member remove to kick it from the cluster
- Stops the etcd service
- Cleans up data and config files
To do it manually:
$ etcdctl member list
429ee12c7fbab5c1, started, etcd-1, https://10.10.10.10:2380, https://10.10.10.10:2379, false
33631ba6ced84cf8, started, etcd-2, https://10.10.10.11:2380, https://10.10.10.11:2379, false
93fcf23b220473fb, started, etcd-3, https://10.10.10.12:2380, https://10.10.10.12:2379, false # <--- remove this
$ etcdctl member remove 93fcf23b220473fb # kick from cluster
Member 93fcf23b220473fb removed from cluster 6646fbcf5debc68f
After execution, permanently remove the instance from the config inventory. Member removal is complete.
Repeat to remove more members. Combined with Add Member, you can perform rolling upgrades and migrations of the etcd cluster.
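For instance, a hypothetical rolling replacement of member 10.10.10.12 with a new node 10.10.10.13, using the utility scripts described below (update the inventory and reload config between steps):

bin/etcd-add 10.10.10.13   # add the replacement member first (after adding it to the inventory)
bin/etcd-rm  10.10.10.12   # then remove the old member (after commenting it out)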
Utility Scripts
v3.6+ provides utility scripts to simplify etcd cluster scaling:
bin/etcd-add
Add new members to existing etcd cluster:
bin/etcd-add <ip> # add single new member
bin/etcd-add <ip1> <ip2> ... # add multiple new members
Script features:
- Validates IP addresses in config inventory
- Auto-sets the etcd_init=existing parameter
- Executes the etcd.yml playbook to complete member addition
- Prompts config refresh commands after completion
bin/etcd-rm
Remove members or entire cluster from etcd:
bin/etcd-rm <ip> # remove specified member
bin/etcd-rm <ip1> <ip2> ... # remove multiple members
bin/etcd-rm # remove entire etcd cluster
Script features:
- Provides safety warnings and confirmation countdown
- Auto-executes the etcd-rm.yml playbook
- Gracefully removes members from cluster
- Cleans up data and config files