Amazon S3¶
Install¶
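The install command itself is missing from this page. Assuming the package is published as `nexus-fs` with an optional S3 extra (the extra name `s3` is an assumption), it would look like:

```shell
# Assumed package and extra name; adjust to the actual published package
pip install "nexus-fs[s3]"
```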
This installs boto3 as the S3 client.
Credential setup¶
nexus-fs uses the standard AWS credential chain. Configure at least one:
Option 1: Environment variables¶
```shell
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1  # optional
```
Option 2: AWS credentials file¶
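One standard way to create the credentials file is the AWS CLI's interactive setup (requires the `aws` CLI to be installed):

```shell
# Prompts for access key ID, secret key, default region, and output format,
# then writes ~/.aws/credentials and ~/.aws/config
aws configure
```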
This writes to ~/.aws/credentials. nexus-fs reads it automatically.
Option 3: IAM role (EC2/ECS/Lambda)¶
No configuration needed. boto3 picks up the instance metadata credentials automatically.
Verify credentials¶
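The verification step presumably uses the doctor subcommand referenced below; the exact invocation is an assumption:

```shell
# Assumed invocation of the diagnostic subcommand
nexus-fs doctor
```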
This validates that credentials are found and that the bucket is accessible. See nexus-fs doctor for details.
Mount¶
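The mount call itself is missing here; a minimal sketch, using the same `mount_sync` API as the examples further down this page:

```python
# skip-test
import nexus.fs

# Mount the bucket; its contents appear under /s3/my-bucket/
fs = nexus.fs.mount_sync("s3://my-bucket")
```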
The bucket mounts at /s3/my-bucket/.
Mount path¶
| URI | Mount point |
|---|---|
| s3://my-bucket | /s3/my-bucket/ |
| s3://data-lake | /s3/data-lake/ |
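The default mapping is mechanical: the URI scheme and bucket name become path components. A standalone sketch of the rule (not the library's actual implementation):

```python
def default_mount_point(uri: str) -> str:
    """Sketch of the default URI-to-mount-point rule shown in the table.

    Not the library's actual implementation.
    """
    scheme, _, rest = uri.partition("://")
    bucket = rest.split("/", 1)[0]
    return f"/{scheme}/{bucket}/"
```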
Override with at=:
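A sketch of overriding the mount point, assuming `at=` is a keyword argument to `mount_sync` and using a hypothetical target path:

```python
# skip-test
import nexus.fs

# Mount at /data instead of the default /s3/my-bucket/
fs = nexus.fs.mount_sync("s3://my-bucket", at="/data")
```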
Common patterns¶
Read and write¶
```python
# skip-test
import nexus.fs

fs = nexus.fs.mount_sync("s3://my-bucket")
fs.write("/s3/my-bucket/report.csv", b"date,value\n2024-01-01,42\n")
content = fs.read("/s3/my-bucket/report.csv")
```
List objects¶
```python
# skip-test
import nexus.fs

fs = nexus.fs.mount_sync("s3://my-bucket")

# List top-level entries
files = fs.ls("/s3/my-bucket/")

# With metadata (size, modified time)
entries = fs.ls("/s3/my-bucket/", detail=True)
```
Copy between backends¶
```python
# skip-test
import nexus.fs

fs = nexus.fs.mount_sync("s3://my-bucket", "local://./cache")

# Download from S3 to local
content = fs.read("/s3/my-bucket/model.bin")
fs.write("/local/cache/model.bin", content)
```
Multi-mount with local cache¶
```python
# skip-test
import nexus.fs

fs = nexus.fs.mount_sync("s3://my-bucket", "local://./local-cache")

# Read from S3, write to local — same API for both
data = fs.read("/s3/my-bucket/example.bin")  # illustrative object key
fs.write("/local/local-cache/example.bin", data)
```