Administration
Common operations:
- Initialize Instance
- Reconfigure
- Remove Instance
- Add New Instance
- Shared Mount Across Nodes
- PITR Recovery
- Troubleshooting
- Performance Tuning
See FAQ for more.
Initialize Instance
./juice.yml -l <host>
./juice.yml -l <host> -e fsname=<name>
Initialization steps:
- Install juicefs package
- Create shared cache directory (default /data/juice)
- Run juicefs format --no-update (only effective on first creation)
- Create mount point and set permissions
- Render systemd unit and env files
- Start service and wait for metrics port
- Register to VictoriaMetrics (if an infra node exists)
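The "wait for metrics port" step above can be sketched as a small bash helper. `wait_for_port` is an illustrative function, not part of the playbook, and it relies on bash's `/dev/tcp` redirection; 9567 is the default metrics port used throughout this doc.

```shell
#!/usr/bin/env bash
# Sketch: poll until a TCP port accepts connections, or give up after a timeout.
wait_for_port() {  # usage: wait_for_port <host> <port> <timeout_seconds>
  local host=$1 port=$2 deadline=$(( $(date +%s) + $3 ))
  while (( $(date +%s) < deadline )); do
    # Attempt a TCP connect via bash's /dev/tcp; success means the port is up.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example: wait_for_port 127.0.0.1 9567 30
```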
Reconfigure
After changing the configuration, run the following to update config files and ensure the service is online:
./juice.yml -l <host> -t juice_config,juice_launch
Render config without touching service state:
./juice.yml -l <host> -t juice_config
Notes:
- juice_config,juice_launch ensures the service is started, but does not force-restart an already running instance
- data only takes effect on the first format
- After changing mount options, manually restart the instance service (systemctl restart juicefs-<name>)
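The restart after a mount-option change can be wrapped in a tiny helper. `unit_name`, the instance name `jfs`, and the `/fs` mount path below are illustrative, not part of the playbook.

```shell
# Build the systemd unit name for a JuiceFS instance (naming per this doc: juicefs-<name>)
unit_name() { printf 'juicefs-%s' "$1"; }

# On the target host (side-effecting, shown as comments):
#   systemctl restart "$(unit_name jfs)"
#   mountpoint /fs    # confirm the filesystem came back after the restart
```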
Remove Instance
- Set the instance state to absent
- Run juice_clean
juice_instances:
jfs:
path: /fs
meta: postgres://...
state: absent
./juice.yml -l <host> -t juice_clean
./juice.yml -l <host> -e fsname=jfs -t juice_clean
Removal actions:
- Stop systemd service
- Lazy unmount (umount -l)
- Remove unit and env files
- Reload systemd
PostgreSQL metadata and object storage data are not deleted.
Add New Instance
Add a new instance to the config, making sure its port is unique:
juice_instances:
newfs:
path: /newfs
meta: postgres://...
data: --storage minio --bucket http://minio:9000/newfs
port: 9568
Deploy:
./juice.yml -l <host> -e fsname=newfs
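Port uniqueness across instances can be checked mechanically before deploying. `check_ports` below is a hypothetical helper, not part of the playbook; it takes name:port pairs as arguments.

```shell
# Hypothetical helper: flag duplicate metrics ports across instances.
# Feed it "name:port" pairs; exits non-zero if any port repeats.
check_ports() {
  printf '%s\n' "$@" | awk -F: '
    seen[$2]++ { print "duplicate port " $2 " (" $1 ")"; bad = 1 }
    END        { exit bad }'
}

# Example:
#   check_ports jfs:9567 newfs:9568   # ok, distinct ports
#   check_ports jfs:9567 newfs:9567   # reports the duplicate and fails
```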
Shared Mount Across Nodes
Configure the same meta and instance name on multiple nodes:
app:
hosts:
10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
Only one node needs to format the filesystem; others will skip via --no-update.
PITR Recovery
When data is also stored in PostgreSQL (--storage postgres), filesystem PITR can be done via PG PITR:
# Stop services on all nodes
systemctl stop juicefs-jfs
# Restore metadata DB with pgBackRest
pb restore --stanza=meta --type=time --target="2024-01-15 10:30:00"
# Start PostgreSQL
systemctl start postgresql
# Start JuiceFS service
systemctl start juicefs-jfs
If data is stored in MinIO/S3, only metadata is rolled back; object data is not.
Troubleshooting
Mount Fails
systemctl status juicefs-jfs
journalctl -u juicefs-jfs -f
mountpoint /fs
Metadata Connection Issues
psql "postgres://dbuser_meta:[email protected]:5432/meta" -c "SELECT 1"
Metrics Port Check
ss -tlnp | grep 9567
curl http://localhost:9567/metrics
Performance Tuning
Pass juicefs mount options via the mount parameter:
juice_instances:
jfs:
path: /fs
meta: postgres://...
mount: --cache-size 102400 --prefetch 3 --max-uploads 50
Key metrics to watch:
- juicefs_blockcache_hits / juicefs_blockcache_miss: cache hit ratio
- juicefs_object_request_durations_histogram_seconds: object storage latency
- juicefs_transaction_durations_histogram_seconds: metadata transaction latency
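The cache hit ratio can be derived from the two raw counters in the metrics output. `cache_hit_ratio` is an illustrative helper, not a JuiceFS command, and the sample counter values used below are made up.

```shell
# Sketch: compute the block-cache hit ratio from Prometheus-format metrics on stdin.
cache_hit_ratio() {
  awk '/^juicefs_blockcache_hits / { hits = $2 }
       /^juicefs_blockcache_miss / { miss = $2 }
       END { total = hits + miss
             if (total > 0) printf "%.2f\n", hits / total
             else print "n/a" }'
}

# Example, against a live instance (assumes the default 9567 metrics port):
#   curl -s http://localhost:9567/metrics | cache_hit_ratio
```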