Use Storages to define where finished media should go after it leaves local disk. This is the module for external buckets and archive targets used by recordings, copied files, and exported assets.
In production, a storage object gives your team a reusable destination with a stable id. Downstream jobs can reference that object instead of repeating bucket URLs and credentials in every workflow.
create and update register or change a storage target.
getAll and getCount support inventory and audits.
getById returns one configured destination.
remove detaches a target that is no longer needed.
access_key and secret_key are write-only inputs: they are supplied on create or update and are not returned later.
The examples below cover the two most common object storage setups used in production.
Typical choice for long-term retention and shared media libraries.
{
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"access_key": "AWS_ACCESS_KEY_ID",
"secret_key": "AWS_SECRET_ACCESS_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}
Typical choice for cost-sensitive archive and off-instance retention.
{
"name": "Backblaze media archive",
"type": "STORAGE_TYPE_BACKBLAZE",
"bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
"access_key": "B2_KEY_ID",
"secret_key": "B2_APPLICATION_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}
Create the storage object before enabling recording or export workflows for an event. That way, operators are not entering bucket details during a live production window and every job can point to a known-good destination.
Different buckets usually exist for different reasons: long-term archive, short-term editorial exchange, or final customer delivery. Model those as separate storage objects so permissions, lifecycle rules, and operational ownership stay clear.
When keys or endpoint details change, update the storage object and run a small validation transfer before the next show. Read APIs will confirm the destination exists, but they will not return stored secrets for review.
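A minimal sketch of a credential-rotation payload for the update endpoint, assuming the field set shown in this page's create and update examples. The helper name and the sample dict are illustrative; the validation transfer itself is deployment-specific and not modeled here.

```python
def build_rotation_payload(storage, new_access_key, new_secret_key):
    """Build an update payload that keeps the saved, non-secret fields
    and swaps in fresh credentials. `storage` is the dict returned by
    getById, which never contains the old secrets."""
    return {
        "id": storage["id"],
        "name": storage["name"],
        "type": storage["type"],
        "bucket_url": storage["bucket_url"],
        "redis_meta_data_url": storage["redis_meta_data_url"],
        "meta_data": storage.get("meta_data", {}),
        # Write-only fields: accepted on update, never echoed back.
        "access_key": new_access_key,
        "secret_key": new_secret_key,
    }

saved = {
    "id": "6a0011223344556677889900",
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {},
}
payload = build_rotation_payload(saved, "NEW_KEY_ID", "NEW_SECRET")
```

Because read APIs never return the old secrets, the rotation payload has to be assembled from the saved non-secret fields plus the fresh keys.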
Use this method before any recording, archive, or file-transfer workflow is pointed at external storage.
It registers the target, validates bucket and metadata settings, and makes the destination available for production use. The preset examples are the fastest path for common S3-compatible providers such as AWS S3 or Backblaze.
Use this preset when recordings or copied assets should land in an AWS S3 bucket.
curl --request POST \
  --url http://localhost/api/storages/create \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
Use this preset when the storage backend is Backblaze and the bucket URL follows the S3-compatible Backblaze endpoint pattern.
curl --request POST \
  --url http://localhost/api/storages/create \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "Backblaze media archive",
    "type": "STORAGE_TYPE_BACKBLAZE",
    "bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
    "access_key": "B2_KEY_ID",
    "secret_key": "B2_APPLICATION_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
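The two presets differ only in the type string and the bucket endpoint, so a client can assemble either request body from one helper. This is a sketch using only the fields shown in the presets above; the helper name and the `provider` keys are illustrative.

```python
def build_create_payload(provider, name, bucket_url, access_key, secret_key,
                         redis_url="redis://localhost:6379/2"):
    """Assemble the create request body for the two S3-compatible
    presets documented above. `provider` selects the type string."""
    types = {"aws": "STORAGE_TYPE_S3", "backblaze": "STORAGE_TYPE_BACKBLAZE"}
    return {
        "name": name,
        "type": types[provider],
        "bucket_url": bucket_url,
        "access_key": access_key,    # write-only credential
        "secret_key": secret_key,    # write-only credential
        "redis_meta_data_url": redis_url,
        "meta_data": {},
    }
```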
Dashboard label: Storage Name.
The unique storage name. The backend validates a constrained name pattern and length before creating the storage object.
Dashboard label: Location.
Supported values include STORAGE_TYPE_S3 and STORAGE_TYPE_BACKBLAZE; an internal-disk type also exists as the model default.
Dashboard label: Bucket URL.
Full bucket URL for the storage target.
Dashboard label: Access key (Key ID).
Write-only credential field used when the storage is created or updated.
Dashboard label: Secret key (App key).
Write-only credential field used when the storage is created or updated.
Dashboard label: Metadata URL.
Redis metadata backend used by the storage mounting layer.
Optional metadata object stored with the storage target.
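The backend's exact name pattern is not documented here, so any client-side pre-check is an assumption. The regex below is a conservative guess (letters, digits, spaces, hyphens, underscores, 1 to 64 characters) that accepts the example names on this page; treat it as a placeholder for the real server-side rule.

```python
import re

# Assumed pattern: the backend's actual validation rule is not
# documented on this page. This conservative pre-check accepts the
# example names shown above and rejects empty or exotic input.
NAME_RE = re.compile(r"^[A-Za-z0-9 _-]{1,64}$")

def is_plausible_storage_name(name: str) -> bool:
    """Cheap client-side sanity check before calling create/update."""
    return bool(NAME_RE.fullmatch(name))
```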
curl --request POST \
  --url http://localhost/api/storages/create \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
Storage object identifiers and the saved storage name.
Storage type, bucket URL, and the normalized folder name derived from the storage name.
Saved metadata backend fields for the storage object.
Creation timestamp managed by the backend.
The create and update flows accept credentials, but list and lookup methods intentionally avoid returning the secret fields back to the client.
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
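The folder_name is derived server-side from the storage name. Only the lowercase and space-to-hyphen behavior is visible in the example response ("AWS media archive" becomes "aws-media-archive"); how other characters are normalized is an assumption, so this sketch models only the visible part.

```python
def expected_folder_name(name: str) -> str:
    """Approximate the backend's folder_name normalization.
    Only lowercasing and space-to-hyphen conversion are visible in
    the documented example; anything beyond that is an assumption."""
    return "-".join(name.lower().split())
```

Useful in tooling that wants to predict the on-bucket folder before the create call returns.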
Use this with paginated storage tables or quick inventory checks in admin tooling.
It reports how many storage targets are visible to the authenticated user before you fetch the full list.
curl --request POST \
  --url http://localhost/api/storages/getCount \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{"filter": {}}'
Total number of storage targets currently visible to the authenticated user.
{"count": 1}
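For paginated storage tables, the count maps directly to a page total. A minimal sketch, assuming a fixed page size chosen by the client:

```python
import math

def page_count(total: int, page_size: int) -> int:
    """Number of pages needed to display `total` storage targets
    at `page_size` rows per page."""
    return math.ceil(total / page_size) if total > 0 else 0
```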
Use this to populate storage inventories in control panels and preflight checks.
It lists the storage targets visible to the current user with non-secret fields such as name, type, bucket URL, and metadata.
Optional page size for the list query.
Optional offset for paginated listing.
Optional sort descriptor. The dashboard store defaults to { created: 1 }.
Optional filter object forwarded to the backend query.
curl --request POST \
  --url http://localhost/api/storages/getAll \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{"limit": 10, "skip": 0, "sort": {"created": 1}, "filter": {}}'
The backend returns a bare array of non-secret storage objects.
[
  {
    "_id": "6a0011223344556677889900",
    "id": "6a0011223344556677889900",
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {},
    "folder_name": "aws-media-archive",
    "created": "2026-03-24T18:00:00.000Z",
    "success": true
  }
]
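To walk the full listing, combine a getCount total with limit/skip windows. This sketch only generates the request bodies; issuing the actual HTTP calls is left to whatever client your tooling uses. The sort default matches the dashboard's { created: 1 }.

```python
def paging_params(total: int, limit: int = 10):
    """Yield the getAll request bodies needed to fetch every page,
    given a total from getCount."""
    for skip in range(0, total, limit):
        yield {"limit": limit, "skip": skip,
               "sort": {"created": 1}, "filter": {}}
```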
Use this when an operator opens a storage target for review or edit, or when a workflow needs to confirm the saved destination details before use.
It loads one configured target by id and exposes the non-secret fields needed by management tooling.
Identifier of the storage target you want to load.
curl --request POST \
  --url http://localhost/api/storages/getById \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{"id": "STORAGE_ID"}'
The backend returns the saved non-secret storage fields for the requested target.
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
Use this to rotate credentials, change the bucket or metadata endpoint, or correct storage settings without replacing the destination in your control plane.
The update path follows the same contract as create, so existing workflows can keep the same storage target identity while you refresh its connection details.
Identifier of the storage target being updated.
The update contract mirrors create. In practice this is where you change the bucket URL, metadata URL, or credentials.
curl --request POST \
  --url http://localhost/api/storages/update \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "id": "STORAGE_ID",
    "name": "AWS media archive updated",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
The backend returns the updated storage object.
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive updated",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
Use this to retire a storage destination so new recordings and file transfers can no longer select it.
Run it only after confirming no active workflow still depends on that target; the engine detaches it from the control plane and performs cleanup.
Identifier of the storage target to delete.
curl --request DELETE \
  --url http://localhost/api/storages/remove \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{"id": "STORAGE_ID"}'
The remove endpoint responds with a success-shaped result after the cleanup path finishes.
{"success": true}
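The "confirm nothing depends on it first" rule above can be enforced as a small guard in admin tooling. How you collect the set of storage ids still referenced by active workflows is deployment-specific; this sketch models only the check itself.

```python
def safe_to_remove(storage_id: str, active_workflow_storage_ids) -> bool:
    """Guard for the remove endpoint: refuse deletion while any
    active workflow still references the target. The caller supplies
    the in-use ids, gathered however the deployment tracks them."""
    return storage_id not in set(active_workflow_storage_ids)
```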