Module: JUICE
Use JuiceFS distributed filesystem with PostgreSQL metadata to provide shared POSIX storage.
JuiceFS is a high-performance POSIX-compatible distributed filesystem that can mount object storage or databases as a local filesystem.
The JUICE module depends on NODE for infrastructure and package repo, and typically uses PGSQL as the metadata engine.
Data storage can be PostgreSQL itself or MinIO / S3 object storage. Monitoring relies on VictoriaMetrics from the INFRA module.
flowchart LR
subgraph Client["App/User"]
app["POSIX Access"]
end
subgraph JUICE["JUICE"]
jfs["JuiceFS Mount"]
end
subgraph PGSQL["PGSQL"]
meta["Metadata DB"]
end
subgraph Object["Object Storage (optional)"]
s3["S3 / MinIO"]
end
subgraph INFRA["INFRA (optional)"]
vm["VictoriaMetrics"]
end
app --> jfs
jfs --> meta
jfs -.-> s3
jfs -->|/metrics| vm
style JUICE fill:#5B9CD5,stroke:#4178a8,color:#fff
style PGSQL fill:#3E668F,stroke:#2d4a66,color:#fff
style Object fill:#FCDB72,stroke:#d4b85e,color:#333
style INFRA fill:#999,stroke:#666,color:#fff
Features
- PostgreSQL metadata: Metadata stored in PostgreSQL for easy management and backup
- Multi-instance: One node can mount multiple independent filesystem instances
- Multiple data backends: PostgreSQL, MinIO, S3, and more
- Monitoring integration: Each instance exposes Prometheus / Victoria metrics port
- Simple config: Describe instances with the juice_instances dict
Quick Start
Minimal config example (single instance):
juice_instances:
jfs:
path: /fs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
port: 9567
Deploy:
./juice.yml -l <host>
1 - Configuration
JUICE module configuration, instance definition, storage backends, and mount options.
Concepts and Implementation
JuiceFS consists of a metadata engine and data storage.
In Pigsty v4.1, meta is passed through to juicefs as the metadata engine URL, and PostgreSQL is typically used in production.
Data storage is defined by data options passed to juicefs format.
JUICE module core commands:
# Format (only effective on first creation)
juicefs format --no-update <data> "<meta>" "<name>"
# Mount
juicefs mount <mount> --cache-dir <juice_cache> --metrics 0.0.0.0:<port> <meta> <path>
Notes:
- --no-update ensures existing filesystems are not overwritten.
- data is only used for the initial format; it does not affect existing filesystems.
- mount is only used during mount; you can pass cache and concurrency options here.
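The command assembly can be sketched as follows. This is an illustrative sketch only, not the actual Pigsty template: it builds the two core commands above from a single juice_instances entry, using the field names and defaults documented in this page.

```python
import shlex

def juicefs_commands(name, ins, juice_cache="/data/juice"):
    """Assemble the format and mount command lines for one instance (sketch)."""
    meta = ins["meta"]
    fmt_parts = ["juicefs", "format", "--no-update"]
    if ins.get("data"):
        fmt_parts.append(ins["data"])          # format options, e.g. --storage ...
    fmt_parts += [shlex.quote(meta), shlex.quote(name)]

    port = ins.get("port", 9567)               # default metrics port
    mnt_parts = ["juicefs", "mount"]
    if ins.get("mount"):
        mnt_parts.append(ins["mount"])         # extra mount options
    mnt_parts += ["--cache-dir", juice_cache,
                  f"--metrics 0.0.0.0:{port}", shlex.quote(meta), ins["path"]]
    return " ".join(fmt_parts), " ".join(mnt_parts)

fmt, mnt = juicefs_commands("jfs", {
    "path": "/fs",
    "meta": "postgres://dbuser_meta:[email protected]:5432/meta",
    "data": "--storage postgres --bucket 10.10.10.10:5432/meta",
})
```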
Module Parameters
JUICE module has only two parameters:
| Parameter | Type | Level | Description |
|---|---|---|---|
| juice_cache | path | C | JuiceFS shared cache directory |
| juice_instances | dict | I | JuiceFS instance dict (can be empty) |
- juice_cache: shared local cache directory for all instances, default /data/juice
- juice_instances: instance-level dict, key is the filesystem name; an empty dict means no instances are managed
Instance Configuration
Each entry in juice_instances represents a JuiceFS instance:
| Field | Required | Default | Description |
|---|---|---|---|
| path | Yes | - | Mount point path, e.g. /fs |
| meta | Yes | - | Metadata engine URL (PostgreSQL recommended) |
| data | No | '' | juicefs format options (storage backend) |
| unit | No | juicefs-<name> | systemd service name |
| mount | No | '' | Extra juicefs mount options |
| port | No | 9567 | Metrics port (unique per node) |
| owner | No | root | Mount point owner |
| group | No | root | Mount point group |
| mode | No | 0755 | Mount point permissions |
| state | No | create | create / absent |
Important
- It’s recommended to explicitly set data on the first format to make the storage backend clear.
- Multiple instances on the same node must use different port values.
Example:
juice_instances:
jfs:
path: /fs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
port: 9567
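The optional-field defaults in the table above can be sketched as a simple merge. This is illustrative only, not the module's actual implementation; the field names and default values come from the table.

```python
# Defaults for optional instance fields (from the table above).
DEFAULTS = {"data": "", "mount": "", "port": 9567,
            "owner": "root", "group": "root", "mode": "0755", "state": "create"}

def resolve(name, ins):
    """Merge an instance entry with its defaults; unit defaults to juicefs-<name>."""
    merged = {**DEFAULTS, "unit": f"juicefs-{name}", **ins}
    missing = [f for f in ("path", "meta") if f not in merged]
    if missing:
        raise ValueError(f"instance {name} missing required fields: {missing}")
    return merged

jfs = resolve("jfs", {"path": "/fs", "meta": "postgres://...", "port": 9567})
```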
Storage Backends
data is appended to juicefs format, so any supported storage backend works. Common examples:
PostgreSQL Large Objects
juice_instances:
jfs:
path: /fs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
MinIO Object Storage
juice_instances:
jfs:
path: /fs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
S3-Compatible Storage
juice_instances:
jfs:
path: /fs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage s3 --bucket https://s3.amazonaws.com/my-bucket --access-key AKIAXXXXXXXX --secret-key XXXXXXXXXX
Typical Configurations
Multi-Instance (Same Node)
juice_instances:
pgfs:
path: /pgfs
meta: postgres://dbuser_meta:[email protected]:5432/meta
data: --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
port: 9567
shared:
path: /shared
meta: postgres://dbuser_meta:[email protected]:5432/shared
data: --storage minio --bucket http://10.10.10.10:9000/shared
port: 9568
owner: postgres
group: postgres
Shared Mount Across Nodes
Mount the same JuiceFS on multiple nodes:
app:
hosts:
10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
Only one node needs to format the filesystem; others will skip via --no-update.
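The --no-update semantics can be illustrated with a toy sketch: the first format against a metadata engine wins, and later calls (e.g. from other nodes) are skipped. This is also why changing data has no effect on an existing filesystem. The dict below merely stands in for state kept in the metadata engine; it is not how juicefs actually stores it.

```python
formatted = {}  # stand-in for filesystem state recorded in the metadata engine

def format_no_update(meta_url, name, data):
    """Toy model of `juicefs format --no-update`: no-op if the fs exists."""
    if meta_url in formatted:            # filesystem already exists: skip
        return formatted[meta_url]
    formatted[meta_url] = {"name": name, "data": data}
    return formatted[meta_url]

first = format_no_update("postgres://.../shared", "shared", "--storage minio")
later = format_no_update("postgres://.../shared", "shared", "--storage s3")
# the second call is skipped: the backend stays "--storage minio"
```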
Notes
- port is exposed on 0.0.0.0; use a firewall or security group to restrict access.
- Changing data will not update an existing filesystem; handle migration manually.
2 - Parameters
JUICE module parameters (2 total).
JUICE module has 2 parameters:
Parameter Overview
| Parameter | Type | Level | Description |
|---|---|---|---|
| juice_cache | path | C | JuiceFS shared cache directory |
| juice_instances | dict | I | JuiceFS instance definition dict (can be empty) |
Level: C = cluster level, I = instance level.
Default Parameters
Defined in roles/juice/defaults/main.yml:
#-----------------------------------------------------------------
# JUICE
#-----------------------------------------------------------------
juice_cache: /data/juice
juice_instances: {}
juice_cache
Parameter: juice_cache, type: path, level: C
Shared local cache directory for all JuiceFS instances, default /data/juice.
JuiceFS isolates caches by filesystem UUID under this directory.
juice_instances
Parameter: juice_instances, type: dict, level: I
Instance definition dict, usually defined at instance level.
Default is an empty dict (meaning no instances are deployed). Key is filesystem name, value is instance config object.
juice_instances:
jfs:
path: /fs
meta: postgres://u:p@h:5432/db
data: --storage postgres --bucket ...
port: 9567
Instance fields:
| Field | Required | Default | Description |
|---|---|---|---|
| path | Yes | - | Mount point path |
| meta | Yes | - | Metadata engine URL (PostgreSQL recommended) |
| data | No | '' | juicefs format options (only effective on first creation) |
| unit | No | juicefs-<name> | systemd service name |
| mount | No | '' | Extra juicefs mount options |
| port | No | 9567 | Metrics port (unique per node) |
| owner | No | root | Mount point owner |
| group | No | root | Mount point group |
| mode | No | 0755 | Mount point permissions |
| state | No | create | create / absent |
Note
- data is only used by juicefs format; it will not update an existing filesystem.
- Multiple instances on the same node must use different port values.
3 - Playbook
JUICE module playbook guide.
JUICE module provides juice.yml playbook to deploy and remove JuiceFS instances.
juice.yml
Task structure in juice.yml:
juice_id : validate config, check port conflicts
juice_install : install juicefs package
juice_cache : create shared cache dir
juice_clean : remove instance (state=absent)
juice_instance : create instance (state=create)
- juice_init : format filesystem (--no-update)
- juice_dir : create mount dir
- juice_config: render env file and systemd unit
- juice_launch: start service and wait for metrics port
juice_register : register to VictoriaMetrics targets
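The juice_id step's checks can be sketched as follows. This is illustrative only, not the actual task code: it verifies that required fields are present and that metrics ports are unique per node, as described above.

```python
from collections import Counter

def validate(juice_instances):
    """Return a list of validation errors for a node's juice_instances dict."""
    errors = []
    for name, ins in juice_instances.items():
        for field in ("path", "meta"):
            if field not in ins:
                errors.append(f"{name}: missing required field {field}")
    # ports must be unique per node; 9567 is the documented default
    ports = Counter(ins.get("port", 9567) for ins in juice_instances.values())
    errors += [f"port {p} used by {n} instances" for p, n in ports.items() if n > 1]
    return errors

errs = validate({
    "fs1": {"path": "/fs1", "meta": "postgres://...", "port": 9567},
    "fs2": {"path": "/fs2", "meta": "postgres://..."},  # also defaults to 9567
})
```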
Scope
| Scope | Limit | Description |
|---|---|---|
| Node | -l <host> | Deploy all instances on the node |
| Instance | -l <host> -e fsname=<name> | Only handle specified instance |
Examples:
./juice.yml -l 10.10.10.10 # deploy all instances on the node
./juice.yml -l 10.10.10.10 -e fsname=jfs # only deploy jfs instance
| Tag | Description |
|---|---|
| juice_id | Validate juice_instances and port conflicts |
| juice_install | Install juicefs package |
| juice_cache | Create shared cache dir |
| juice_clean | Remove instance (state=absent) |
| juice_instance | Create instance (umbrella tag) |
| juice_init | Format filesystem |
| juice_dir | Create mount dir |
| juice_config | Render config files |
| juice_launch | Start service |
| juice_register | Write VictoriaMetrics target file |
Config Updates
Render config only (no restart):
./juice.yml -l <host> -t juice_config
Update config and ensure service is online (without force restart):
./juice.yml -l <host> -t juice_config,juice_launch
If you need new mount options to take effect immediately, manually restart the instance service:
systemctl restart juicefs-<name>
Remove Instance
Removal flow:
- Set instance state to absent
- Run juice_clean
juice_instances:
jfs:
path: /fs
meta: postgres://...
state: absent
./juice.yml -l <host> -t juice_clean
./juice.yml -l <host> -e fsname=jfs -t juice_clean
Removal includes: stop service, lazy unmount, remove systemd unit/env files, reload systemd.
PostgreSQL metadata and object storage data are not deleted.
Monitoring Registration
juice_register writes target file on infra node:
/infra/targets/juice/<hostname>.yml
To re-register manually:
./juice.yml -l <host> -t juice_register
4 - Administration
JUICE module operations and troubleshooting guide.
Common operations:
See FAQ for more.
Initialize Instance
./juice.yml -l <host>
./juice.yml -l <host> -e fsname=<name>
Initialization steps:
- Install the juicefs package
- Create the shared cache dir (default /data/juice)
- Run juicefs format --no-update (only effective on first creation)
- Create the mount point and set permissions
- Render systemd unit and env files
- Start service and wait for metrics port
- Register to VictoriaMetrics (if infra node exists)
After changing the config, it’s recommended to run the following to update config files and ensure the service is online:
./juice.yml -l <host> -t juice_config,juice_launch
Render config without touching service state:
./juice.yml -l <host> -t juice_config
Notes:
- juice_config,juice_launch ensures the service is started, but does not force-restart an already running instance
- data only takes effect on the first format
- After changing mount options, manually restart the instance service (systemctl restart juicefs-<name>)
Remove Instance
- Set instance state to absent
- Run juice_clean
juice_instances:
jfs:
path: /fs
meta: postgres://...
state: absent
./juice.yml -l <host> -t juice_clean
./juice.yml -l <host> -e fsname=jfs -t juice_clean
Removal actions:
- Stop systemd service
- Lazy unmount (umount -l)
- Remove unit and env files
- Reload systemd
PostgreSQL metadata and object storage data are not deleted.
Add New Instance
Add a new instance in config, ensure unique port:
juice_instances:
newfs:
path: /newfs
meta: postgres://...
data: --storage minio --bucket http://minio:9000/newfs
port: 9568
Deploy:
./juice.yml -l <host> -e fsname=newfs
Shared Mount Across Nodes
Configure the same meta and instance name on multiple nodes:
app:
hosts:
10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
Only one node needs to format the filesystem; others will skip via --no-update.
PITR Recovery
When data is also stored in PostgreSQL (--storage postgres), filesystem PITR can be done via PG PITR:
# Stop services on all nodes
systemctl stop juicefs-jfs
# Restore metadata DB with pgBackRest
pb restore --stanza=meta --type=time --target="2024-01-15 10:30:00"
# Start PostgreSQL
systemctl start postgresql
# Start JuiceFS service
systemctl start juicefs-jfs
If data is stored in MinIO/S3, only the metadata is rolled back; the objects are not.
Troubleshooting
Mount Fails
systemctl status juicefs-jfs
journalctl -u juicefs-jfs -f
mountpoint /fs
Metrics Port Check
ss -tlnp | grep 9567
curl http://localhost:9567/metrics
Pass juicefs mount options via mount:
juice_instances:
jfs:
path: /fs
meta: postgres://...
mount: --cache-size 102400 --prefetch 3 --max-uploads 50
Key metrics to watch:
- juicefs_blockcache_hits / juicefs_blockcache_miss: cache hit ratio
- juicefs_object_request_durations_histogram_seconds: object storage latency
- juicefs_transaction_durations_histogram_seconds: metadata transaction latency
5 - Monitoring
JUICE module monitoring and metrics.
JuiceFS instances expose Prometheus metrics via juicefs mount --metrics.
In JUICE, metrics listen on 0.0.0.0:<port>, default port 9567.
Monitoring Architecture
JuiceFS Mount (metrics: 0.0.0.0:<port>)
↓
VictoriaMetrics (scrape)
↓
Grafana Dashboard
If INFRA is deployed, juice_register writes scrape targets to:
/infra/targets/juice/<hostname>.yml
Target File Example
- labels: { ip: 10.10.10.10, ins: "node-jfs", cls: "jfs" }
targets: [ 10.10.10.10:9567 ]
To register manually:
./juice.yml -l <host> -t juice_register
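The rendering done by juice_register can be sketched as follows. This is illustrative only (not the actual template); the label names and layout follow the target file example above.

```python
def render_targets(ip, hostname, instances):
    """Render a VictoriaMetrics target file body for one node's instances."""
    lines = []
    for name, ins in instances.items():
        port = ins.get("port", 9567)   # documented default metrics port
        lines.append(f'- labels: {{ ip: {ip}, ins: "{hostname}-{name}", cls: "{name}" }}')
        lines.append(f"  targets: [ {ip}:{port} ]")
    return "\n".join(lines)

out = render_targets("10.10.10.10", "node", {"jfs": {"port": 9567}})
```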
Key Metrics
Object Storage
| Metric | Type | Description |
|---|---|---|
| juicefs_object_request_durations_histogram_seconds | histogram | Object storage request latency |
| juicefs_object_request_errors | counter | Object storage errors |
Cache
| Metric | Type | Description |
|---|---|---|
| juicefs_blockcache_hits | counter | Cache hits |
| juicefs_blockcache_miss | counter | Cache misses |
Metadata
| Metric | Type | Description |
|---|---|---|
| juicefs_transaction_durations_histogram_seconds | histogram | Metadata transaction latency |
| juicefs_transaction_durations_histogram_seconds_count | counter | Metadata transaction request count |
Common PromQL
Cache hit ratio:
rate(juicefs_blockcache_hits[5m]) /
(rate(juicefs_blockcache_hits[5m]) + rate(juicefs_blockcache_miss[5m]))
Object storage P99 latency:
histogram_quantile(0.99, rate(juicefs_object_request_durations_histogram_seconds_bucket[5m]))
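The cache-hit-ratio query above can be checked by hand: a rate() over a counter is just the delta divided by the window. The sketch below uses made-up sample values, not real metrics.

```python
def rate(prev, curr, seconds):
    """Per-second rate of a monotonic counter over a window (simplified rate())."""
    return (curr - prev) / seconds

window = 300  # the 5m window in the PromQL above
hits_rate = rate(1_000, 1_900, window)   # juicefs_blockcache_hits samples
miss_rate = rate(200, 300, window)       # juicefs_blockcache_miss samples
hit_ratio = hits_rate / (hits_rate + miss_rate)
# 900 hits vs 100 misses over the window -> hit ratio 0.9
```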
6 - FAQ
JUICE module frequently asked questions.
Port Conflicts?
Multiple instances on the same node must use different port values. Example:
juice_instances:
fs1:
path: /fs1
meta: postgres://...
port: 9567
fs2:
path: /fs2
meta: postgres://...
port: 9568
Why does changing data not take effect?
data is only used by juicefs format --no-update. After filesystem creation it will not change.
To switch backend, migrate data and reformat manually.
How to add a new instance?
- Add instance definition in config
- Run:
./juice.yml -l <host> -e fsname=<name>
How to remove an instance?
- Set instance state to absent
- Run:
./juice.yml -l <host> -t juice_clean
Removal does not delete PostgreSQL metadata or object storage data.
Where is file data stored?
Depends on data:
- --storage postgres: data in PostgreSQL pg_largeobject
- --storage minio / s3: data in the object storage bucket
Metadata is stored in the metadata engine defined by meta (in Pigsty production scenarios, this is usually PostgreSQL).
Multi-node mount notes?
- Use the same meta and instance name on all nodes
- Only one node needs to format; others will skip
- Ensure port does not conflict on each node
Monitoring target not generated?
juice_register only writes /infra/targets/juice/ when infra group exists.
You can run manually:
./juice.yml -l <host> -t juice_register
How to change mount options?
After updating mount in the instance, refresh config first and then manually restart the service:
./juice.yml -l <host> -t juice_config,juice_launch
systemctl restart juicefs-<name>