# Configuring S3 Storage
Use this guide to back Nitro Repo repositories with an Amazon S3 bucket or any S3-compatible object store (MinIO, DigitalOcean Spaces, Ceph, etc.). The storage driver lives in `crates/storage/src/s3` and is loaded through the same storage-management APIs as the local backend, so setup happens entirely through the admin interface or REST API; no code changes are required.
## When to choose S3
- You need cheap, durable blob storage that multiple Nitro nodes can share.
- You run Nitro in Kubernetes or another stateless platform and do not want to manage block volumes.
- You already store artifacts in an S3-compatible service and want Nitro to sit in front of that bucket.
Limitations (as of this commit):

- `move_file` is implemented as `GetObject` + `PutObject` + `DeleteObject`, so renaming large blobs consumes bandwidth proportional to object size (see the workaround below).
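If you must rename a large object while this limitation stands, you can sidestep Nitro and do a server-side copy with the AWS CLI instead; the following is a sketch, where the bucket and key names are placeholders. Keep in mind that Nitro is unaware of out-of-band renames, so only do this for objects whose Nitro-side metadata you also update.

```bash
# Server-side copy: S3 duplicates the object internally, so the bytes
# never travel through Nitro or your workstation. Bucket and key names
# below are placeholders.
aws s3api copy-object \
  --bucket nitro-prod-artifacts \
  --copy-source nitro-prod-artifacts/old/path/artifact.jar \
  --key new/path/artifact.jar

# Remove the old key once the copy succeeds.
aws s3api delete-object \
  --bucket nitro-prod-artifacts \
  --key old/path/artifact.jar
```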
## Prerequisites
- Network reachability from every Nitro Repo instance to the S3 endpoint (public AWS endpoint, VPC endpoint, on-prem MinIO URL, etc.).
- Bucket created ahead of time. Nitro never creates buckets; it assumes the bucket exists and is empty or reserved for Nitro data.
- Credential source Nitro can use to talk to the bucket. Supply one of:
  - Static IAM keys (access key + secret, optional session token) scoped to the bucket with at least `s3:ListBucket`, `s3:GetObject`, `s3:PutObject`, `s3:DeleteObject`, `s3:GetObjectTagging`, and `s3:HeadObject` (a sample policy follows this list).
  - IAM role to assume (`role_arn`, optional `role_session_name`/`external_id`). Nitro calls `AssumeRole` through the AWS SDK and uses those temporary credentials for all S3 traffic.
  - Instance/container profile – leave every credential field blank and Nitro will fall back to the AWS SDK default chain (environment variables, shared config/credentials files, ECS/EKS credentials, IMDS, etc.).
- Administrative login that carries the `StorageManager` capability (system manager or admin) so you can call `/api/storage/**`.
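The exact policy shape depends on your IAM setup, but a minimal bucket-scoped policy covering the actions above might look like the sketch below (user, policy, and bucket names are placeholders). Note that `HeadObject` requests are authorized by `s3:GetObject`, so no separate action is strictly required for them.

```bash
# Attach a minimal inline policy to the IAM user Nitro authenticates as.
# All names here are placeholders for illustration.
aws iam put-user-policy \
  --user-name nitro-repo \
  --policy-name nitro-repo-s3 \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::nitro-prod-artifacts"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:GetObjectTagging"
        ],
        "Resource": "arn:aws:s3:::nitro-prod-artifacts/*"
      }
    ]
  }'
```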
## Bucket layout expectations
- Nitro prefixes every object with the repository UUID, then appends the repository-relative path: `s3://<bucket>/<repository-uuid>/<path/to/object>`. This comes directly from `S3StorageInner::s3_path`.
- Keep the bucket dedicated to Nitro, or at least ensure no other process writes under those prefixes; otherwise Nitro's collision checks may fail deployments.
- Versioning and encryption are optional but recommended for mission-critical data.
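On an AWS bucket, both versioning and default encryption can be switched on with two CLI calls; a sketch, with the bucket name as a placeholder:

```bash
# Keep old object versions so accidental overwrites or deletes are recoverable.
aws s3api put-bucket-versioning \
  --bucket nitro-prod-artifacts \
  --versioning-configuration Status=Enabled

# Encrypt new objects at rest with S3-managed keys (SSE-S3).
aws s3api put-bucket-encryption \
  --bucket nitro-prod-artifacts \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```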
## Step 1 – Discover the region identifier
Nitro exposes the list of built-in AWS regions via the API:
```bash
curl -s -H "Authorization: Bearer $NITRO_TOKEN" \
  https://nitro.example.com/api/storage/s3/regions | jq
```

Pick one of the values (e.g., `UsEast1`). For S3-compatible endpoints that report a custom name (MinIO, Ceph), skip this list and plan to provide a custom endpoint instead.
## Step 2 – Decide between path-style and virtual-hosted-style URLs
`path_style` defaults to `true`, which means Nitro will connect as `https://endpoint/bucket/object`. Keep the default for MinIO and most self-hosted gateways. Set `path_style` to `false` when talking to AWS' public endpoints so requests use `https://bucket.s3.amazonaws.com/object`, which avoids the deprecation path for classic path-style access.
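If you are unsure which style your endpoint accepts, a quick unauthenticated probe can help; this is only a reachability sketch, and the hostnames and bucket names below are placeholders:

```bash
# Path-style probe: bucket name appears in the URL path. Any HTTP status
# line (typically 403 without credentials) means the endpoint parsed the
# request; a connection or DNS error means it did not.
curl -sI https://minio.internal.example.com/nitro-artifacts | head -n 1

# Virtual-hosted-style probe: bucket name appears as a subdomain.
curl -sI https://nitro-prod-artifacts.s3.amazonaws.com/ | head -n 1
```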
## Step 3 – Create the storage (UI or API)
The Admin UI's Storages → Create Storage → S3 / Object Storage form exposes every server-side option (region, custom endpoint, credential fields, role information, session token, path-style toggle, and the optional on-disk cache described below). For automation you can call the REST API directly; the payload mirrors `StorageTypeConfig` (`type` + `settings` object).
```bash
curl -X POST https://nitro.example.com/api/storage/new/s3 \
  -H "Authorization: Bearer $NITRO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "s3-prod",
    "config": {
      "type": "S3",
      "settings": {
        "bucket_name": "nitro-prod-artifacts",
        "region": "UsEast1",
        "credentials": {
          "access_key": "AKIA...",
          "secret_key": "••••••••",
          "session_token": "",
          "role_arn": "",
          "role_session_name": "",
          "external_id": ""
        },
        "path_style": false,
        "cache": {
          "enabled": true,
          "path": "/var/lib/nitro_repo/cache/s3-prod",
          "max_bytes": 2147483648,
          "max_entries": 4096
        }
      }
    }
  }'
```

Notes:
- `name` must be unique; Nitro rejects duplicates with HTTP 409.
- `region` is mandatory unless you supply a custom endpoint (see below).
- Credentials are stored encrypted at rest inside the `storages` table. Rotate them the same way you created them: create a new storage (or use the update endpoint once it is available) and reassign repositories to it.
- During creation Nitro invokes `S3StorageFactory::test_storage_config()`. If anything fails (missing region, auth error, bucket not found) the API responds with `400 Invalid Storage Config` and the precise driver error appears in the response body and server logs.
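In scripts it helps to branch on those status codes rather than parse prose. A minimal sketch using plain curl follows; the URL and token are the placeholders used throughout this guide, `storage-config.json` stands in for the payload shown above, and since this guide does not pin down whether success is 200 or 201, the check accepts both:

```bash
# Capture the HTTP status separately from the response body so a CI job
# can fail fast on 400 (invalid config) or 409 (duplicate name).
status=$(curl -s -o /tmp/nitro-storage-resp.json -w "%{http_code}" \
  -X POST https://nitro.example.com/api/storage/new/s3 \
  -H "Authorization: Bearer $NITRO_TOKEN" \
  -H "Content-Type: application/json" \
  -d @storage-config.json)

if [ "$status" != "200" ] && [ "$status" != "201" ]; then
  echo "storage creation failed with HTTP $status:" >&2
  cat /tmp/nitro-storage-resp.json >&2
  exit 1
fi
```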
## Custom endpoint example (MinIO, DigitalOcean Spaces, etc.)
The S3 driver flattens the optional `CustomRegion` struct, so provide `custom_region` (name label, optional) and `endpoint` (full URL) directly inside `settings`. When `endpoint` is present it takes precedence over `region`.
```json
{
  "name": "s3-minio",
  "config": {
    "type": "S3",
    "settings": {
      "bucket_name": "nitro-artifacts",
      "credentials": {
        "access_key": "minio",
        "secret_key": "minio-secret"
      },
      "path_style": true,
      "custom_region": "onprem",
      "endpoint": "https://minio.internal.example.com"
    }
  }
}
```
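Before wiring the endpoint into Nitro, it can save a round of debugging to confirm the credentials and endpoint work outside Nitro. A sketch using the AWS CLI's `--endpoint-url` flag, reusing the placeholder hostname and credentials from the example above (the region value is arbitrary but the CLI requires one, and you may need `aws configure set default.s3.addressing_style path` for strict path-style gateways):

```bash
# List the bucket through the custom endpoint with the same credentials
# Nitro will use. A clean listing (even an empty one) confirms endpoint,
# credentials, and bucket access in one shot.
AWS_ACCESS_KEY_ID=minio AWS_SECRET_ACCESS_KEY=minio-secret \
AWS_DEFAULT_REGION=us-east-1 \
  aws --endpoint-url https://minio.internal.example.com \
  s3 ls s3://nitro-artifacts
```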
## Optional: Local disk cache
To keep hot artifacts on the node and avoid repeated S3 downloads, enable the `cache` block. Nitro keeps an LRU index keyed by the S3 path, writes files under the provided directory, and enforces both a byte cap and an entry cap. When the cache grows past `max_bytes`, the least recently used entries are evicted and their files deleted.
```json
{
  "name": "s3-with-cache",
  "config": {
    "type": "S3",
    "settings": {
      "bucket_name": "nitro-artifacts",
      "region": "UsEast1",
      "credentials": { "role_arn": "arn:aws:iam::123:role/nitro" },
      "path_style": true,
      "cache": {
        "enabled": true,
        "path": "/var/lib/nitro_repo/cache/s3",
        "max_bytes": 1073741824,
        "max_entries": 2048
      }
    }
  }
}
```

Leave `path` blank to fall back to `$(TMPDIR)/nitro_repo/s3-cache/<storage-name>`. The cache runs fully asynchronously (blocking file reads/writes are offloaded to Tokio's blocking pool), so enabling it will not starve the HTTP runtime.
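No cache-inspection endpoint is described here, but since the cache is plain files on disk, standard tools can show how close it is to its caps. A sketch against the path and limits from the example above (assumes GNU coreutils):

```bash
# Total bytes currently cached (compare against max_bytes).
du -sb /var/lib/nitro_repo/cache/s3

# Number of cached files (compare against max_entries).
find /var/lib/nitro_repo/cache/s3 -type f | wc -l
```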
## IAM role / default chain example
To avoid long-lived keys entirely, leave the key fields blank and provide only the role metadata. Nitro Repo will load the default AWS credential chain (environment variables, shared `~/.aws` config, ECS/EKS task metadata, or EC2 IMDS) and then call `AssumeRole` before it touches S3.
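For `AssumeRole` to succeed, the target role must also trust whatever principal Nitro's base credentials resolve to. A hedged sketch of creating such a role, where the account ID, the host role ARN, and the external ID are placeholders chosen to match the example below:

```bash
# Create the role Nitro will assume. The trust policy lets the instance
# role Nitro runs under assume it, gated on the shared external ID.
aws iam create-role \
  --role-name nitro-deploy \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::123456789012:role/nitro-host"
        },
        "Action": "sts:AssumeRole",
        "Condition": {
          "StringEquals": { "sts:ExternalId": "customer-123" }
        }
      }
    ]
  }'
```

With the trust relationship in place, the storage settings only need the role metadata: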
```json
{
  "name": "s3-assume-role",
  "config": {
    "type": "S3",
    "settings": {
      "bucket_name": "nitro-artifacts",
      "region": "UsEast1",
      "credentials": {
        "role_arn": "arn:aws:iam::123456789012:role/nitro-deploy",
        "role_session_name": "nitro-repo-ci",
        "external_id": "customer-123"
      },
      "path_style": false
    }
  }
}
```

## Step 4 – Verify
- Confirm the storage record exists:

  ```bash
  curl -s -H "Authorization: Bearer $NITRO_TOKEN" \
    "https://nitro.example.com/api/storage/list?active_only=true" | jq
  ```

  Ensure your new storage shows `storage_type: "s3"` and `active: true`.
- Create or edit a repository so it points at the new storage, then push a small artifact. Nitro writes to `s3://<bucket>/<repository-uuid>/...`. Inspect the bucket to make sure the object landed where expected (see the listing example after these steps).
- Watch logs for `Successfully connected to S3 bucket` (emitted by `S3StorageFactory::test_storage_config`) or any subsequent AWS SDK error entries:

  ```bash
  docker compose logs -f nitro_repo | rg -i s3
  ```
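For the bucket inspection in the second step, a recursive listing scoped to the repository prefix is usually enough; the bucket name is a placeholder, and substitute your repository's UUID:

```bash
# Everything Nitro wrote for one repository lives under its UUID prefix.
aws s3 ls "s3://nitro-prod-artifacts/<repository-uuid>/" --recursive
```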
## Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| `Invalid Storage Config: NoRegionSpecified` | Neither `region` nor `endpoint` provided | Supply one of the supported regions or a custom endpoint |
| AWS SDK error: `InvalidAccessKeyId` | Wrong credentials or user lacks permission | Regenerate the key pair or fix the role policy so it has the required bucket-scoped IAM actions |
| Cached file never expires | Cache size too small or directory not writable | Increase `cache.max_bytes` / `max_entries` or ensure Nitro can write to the cache path |
| `Bucket Does Not Exist` | Typo in `bucket_name` or Nitro lacks access to the bucket | Verify the bucket name and the IAM policy's `Resource` list |
| Uploads stall or time out | Nitro copying large files via `move_file` or `append_file` | Avoid mass renames; delete + re-upload is faster until the driver adopts server-side copy |
## Operational tips
- Keep the bucket lifecycle rules aligned with Nitro retention policies. Nitro never deletes repositories automatically, so lifecycle rules that expire objects will surface as 404s to clients.
- Monitor object count and size to detect runaway storages; Nitro’s S3 driver currently lacks the optimized directory streaming used by the local backend, so list-heavy operations will cost extra API calls.
- Back up the `storages` table whenever you rotate credentials; losing it means Nitro forgets how to talk to the bucket even though the data still exists.
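For the monitoring tip above, the AWS CLI's summarize mode gives a cheap point-in-time snapshot of object count and total size; the bucket name is a placeholder:

```bash
# Summarize object count and total bytes for the bucket. This walks the
# whole bucket via list calls, so run it periodically, not per-request.
aws s3 ls s3://nitro-prod-artifacts --recursive --summarize | tail -n 2
```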