Storages

Use Storages to define where finished media should go after it leaves local disk. This is the module for external buckets and archive targets used by recordings, copied files, and exported assets.

In production, a storage object gives your team a reusable destination with a stable id. Downstream jobs can reference that object instead of repeating bucket URLs and credentials in every workflow.

What this module solves

  • Moves archive and delivery planning out of individual media jobs.
  • Provides a single, reusable destination for recordings and file copy workflows.
  • Supports separate storage policies for archive, exchange, and customer delivery.
  • Keeps secrets out of read APIs after initial configuration.

Key operations

  • create and update register a new storage target or change an existing one.
  • getAll and getCount support inventory and audits.
  • getById returns one configured destination.
  • remove detaches a target that is no longer needed.

Operational notes

  • Treat access_key and secret_key as write-only inputs. They are supplied on create or update and are not returned later.
  • Use one storage object per operational destination. Do not mix archive, handoff, and delivery policies into a single bucket unless that is intentional.
  • Validate a new or updated destination with a low-risk workflow before routing production media to it.
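The notes above can be folded into a client-side preflight check. A minimal sketch, assuming the field names used in the examples in this document; the backend's own validation (name pattern, length limits) remains authoritative:

```python
# Sketch: preflight-validate a storage payload before calling create.
# Field names follow this document's examples; the backend's real
# validation rules are stricter and authoritative.

REQUIRED_FIELDS = ("name", "type", "bucket_url", "access_key", "secret_key")

def validate_storage_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks usable."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not payload.get(f)]
    if payload.get("bucket_url") and not payload["bucket_url"].startswith("https://"):
        problems.append("bucket_url should be an https URL")
    return problems

payload = {
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
}
print(validate_storage_payload(payload))  # → []
```

Running the check in tooling, rather than waiting for the API to reject the request, keeps bad destinations out of a live production window.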

Example targets

The examples below cover the two most common object storage setups used in production.

AWS S3 archive

Typical choice for long-term retention and shared media libraries.

{
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "access_key": "AWS_ACCESS_KEY_ID",
  "secret_key": "AWS_SECRET_ACCESS_KEY",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {}
}

Backblaze B2 archive

Typical choice for cost-sensitive archive and off-instance retention.

{
  "name": "Backblaze media archive",
  "type": "STORAGE_TYPE_BACKBLAZE",
  "bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
  "access_key": "B2_KEY_ID",
  "secret_key": "B2_APPLICATION_KEY",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {}
}

Production workflows

Register archive storage before recording depends on it

Create the storage object before enabling recording or export workflows for an event. That way, operators are not entering bucket details during a live production window and every job can point to a known-good destination.

Separate archive, editorial handoff, and customer delivery

Different buckets usually exist for different reasons: long-term archive, short-term editorial exchange, or final customer delivery. Model those as separate storage objects so permissions, lifecycle rules, and operational ownership stay clear.

Rotate credentials without changing downstream references

When keys or endpoint details change, update the storage object and run a small validation transfer before the next show. Read APIs will confirm the destination exists, but they will not return stored secrets for review.

Operator guidance

  • Name storage objects by purpose, not just by vendor.
  • Keep metadata minimal and meaningful so inventory stays easy to audit.
  • Remove unused targets when projects end or retention policies change.
Create storage
POST
/api/storages/create

Use this method before any recording, archive, or file-transfer workflow is pointed at external storage.

It registers the target, validates bucket and metadata settings, and makes the destination available for production use. The preset examples are the fastest path for common S3-compatible providers such as AWS S3 or Backblaze.
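As a sketch of what the curl presets below do, the same request can be built with Python's standard library. The host, token placeholder, and payload mirror the examples in this document; nothing is sent until urlopen is called:

```python
import json
import urllib.request

# Payload mirrors the AWS S3 preset from this document.
payload = {
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {},
}

req = urllib.request.Request(
    "http://localhost/api/storages/create",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "x-access-token": "<your_api_token>",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the request.
```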

AWS S3 storage target

Use this preset when recordings or copied assets should land in an AWS S3 bucket.

AWS S3 storage target
curl --request POST \
  --url http://localhost/api/storages/create \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
Backblaze storage target

Use this preset when the storage backend is Backblaze and the bucket URL follows the S3-compatible Backblaze endpoint pattern.

Backblaze storage target
curl --request POST \
  --url http://localhost/api/storages/create \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "Backblaze media archive",
    "type": "STORAGE_TYPE_BACKBLAZE",
    "bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
    "access_key": "B2_KEY_ID",
    "secret_key": "B2_APPLICATION_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
Request body parameters
Identity
name
string

Dashboard label: Storage Name.

The unique storage name. The backend validates a constrained name pattern and length before creating the storage object.

Location
type
string

Dashboard label: Location.

Known values include STORAGE_TYPE_S3 and STORAGE_TYPE_BACKBLAZE; an internal-disk type also exists and serves as the model default.

bucket_url
string

Dashboard label: Bucket URL.

Full bucket URL for the storage target.

Credentials
access_key
string

Dashboard label: Access key (Key ID).

Write-only credential field used when the storage is created or updated.

secret_key
string

Dashboard label: Secret key (App key).

Write-only credential field used when the storage is created or updated.

Metadata backend
redis_meta_data_url
string

Dashboard label: Metadata URL.

Redis metadata backend used by the storage mounting layer.

meta_data
object

Optional metadata object stored with the storage target.

Response
Identity
_id / id / name
mixed

Storage object identifiers and the saved storage name.

Location
type / bucket_url / folder_name
mixed

Storage type, bucket URL, and the normalized folder name derived from the storage name.

Metadata backend
redis_meta_data_url / meta_data
mixed

Saved metadata backend fields for the storage object.

Runtime
created
string

Creation timestamp managed by the backend.

Security
credential material
not returned in later lookups

The create and update flows accept credentials, but list and lookup methods intentionally avoid returning the secret fields back to the client.

Response: Create storage
JSON
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
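The folder_name in the response ("aws-media-archive") is derived from the storage name. The backend's exact normalization rule is not specified here; a plausible sketch that reproduces the documented example is:

```python
import re

def folder_name_for(storage_name: str) -> str:
    """Lowercase the name and collapse non-alphanumeric runs into hyphens.
    This reproduces the documented example; the backend's real rule may differ."""
    return re.sub(r"[^a-z0-9]+", "-", storage_name.lower()).strip("-")

print(folder_name_for("AWS media archive"))  # → aws-media-archive
```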
Get storages count
POST
/api/storages/getCount

Use this with paginated storage tables or quick inventory checks in admin tooling.

It reports how many storage targets are visible to the authenticated user before you fetch the full list.

Request body parameters
This method has no required parameters; the example sends an empty filter object.
Get storages count
curl --request POST \
  --url http://localhost/api/storages/getCount \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "filter": {}
  }'
Response
count
integer

Total number of storage targets currently visible to the authenticated user.

Response: Get storages count
JSON
{
  "count": 1
}
Get all storages
POST
/api/storages/getAll

Use this to populate storage inventories in control panels and preflight checks.

It lists the storage targets visible to the current user with non-secret fields such as name, type, bucket URL, and metadata.

Request body parameters
limit
integer

Optional page size for the list query.

skip
integer

Optional offset for paginated listing.

sort
object

Optional sort descriptor. The dashboard store defaults to { created: 1 }.

filter
object

Optional filter object forwarded to the backend query.

Get all storages
curl --request POST \
  --url http://localhost/api/storages/getAll \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "limit": 10,
    "skip": 0,
    "sort": {
      "created": 1
    },
    "filter": {}
  }'
Response
array of storage objects
array

The backend returns a bare array of non-secret storage objects.

Response: Get all storages
JSON
[
  {
    "_id": "6a0011223344556677889900",
    "id": "6a0011223344556677889900",
    "name": "AWS media archive",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {},
    "folder_name": "aws-media-archive",
    "created": "2026-03-24T18:00:00.000Z",
    "success": true
  }
]
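limit and skip can be combined into a simple pagination loop for large inventories. A sketch with the transport injected — here post stands for any function that POSTs to the API and returns the parsed JSON body, and a stub takes its place for illustration:

```python
def list_all_storages(post, page_size=10):
    """Page through /api/storages/getAll using limit/skip until a short page."""
    storages, skip = [], 0
    while True:
        page = post("/api/storages/getAll",
                    {"limit": page_size, "skip": skip,
                     "sort": {"created": 1}, "filter": {}})
        storages.extend(page)
        if len(page) < page_size:
            return storages
        skip += page_size

# Stub transport serving 12 fake storage objects across pages.
FAKE = [{"id": str(i), "name": f"storage-{i}"} for i in range(12)]
def fake_post(path, body):
    return FAKE[body["skip"]:body["skip"] + body["limit"]]

print(len(list_all_storages(fake_post)))  # → 12
```

Stopping on a short page avoids an extra getCount round-trip, at the cost of one final partially filled request.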
Get storage by id
POST
/api/storages/getById

Use this when an operator opens a storage target for review or edit, or when a workflow needs to confirm the saved destination details before use.

It loads one configured target by id and exposes the non-secret fields needed by management tooling.

Request body parameters
id
string

Identifier of the storage target you want to load.

Get storage by id
curl --request POST \
  --url http://localhost/api/storages/getById \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "id": "STORAGE_ID"
  }'
Response
storage object
object

The backend returns the saved non-secret storage fields for the requested target.

Response: Get storage by id
JSON
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
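Because lookups never return credential fields, client tooling can assert that invariant defensively before logging or displaying a storage object. A small sketch:

```python
SECRET_FIELDS = ("access_key", "secret_key")

def assert_no_secrets(storage: dict) -> dict:
    """Fail loudly if a read response ever carries credential material."""
    leaked = [f for f in SECRET_FIELDS if f in storage]
    if leaked:
        raise ValueError(f"response unexpectedly contains secrets: {leaked}")
    return storage

resp = {"id": "STORAGE_ID", "name": "AWS media archive", "type": "STORAGE_TYPE_S3"}
assert_no_secrets(resp)  # passes: no credential fields present
```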
Update storage
POST
/api/storages/update

Use this to rotate credentials, change the bucket or metadata endpoint, or correct storage settings without replacing the destination in your control plane.

The update path follows the same contract as create, so existing workflows can keep the same storage target identity while you refresh its connection details.

Request body parameters
id
string

Identifier of the storage target being updated.

storage payload fields
mixed

The update contract mirrors create. In practice this is where you change the bucket URL, metadata URL, or credentials.
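Since the update contract mirrors create, a credential rotation can reuse the saved non-secret fields and swap in the new keys. A sketch — the saved object would come from getById, and the field names follow the examples in this document:

```python
def rotation_payload(saved: dict, new_access_key: str, new_secret_key: str) -> dict:
    """Build an update body that keeps the saved destination but rotates keys."""
    keep = ("id", "name", "type", "bucket_url", "redis_meta_data_url", "meta_data")
    body = {k: saved[k] for k in keep if k in saved}
    body["access_key"] = new_access_key
    body["secret_key"] = new_secret_key
    return body

saved = {"id": "STORAGE_ID", "name": "AWS media archive", "type": "STORAGE_TYPE_S3",
         "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
         "redis_meta_data_url": "redis://localhost:6379/2", "meta_data": {}}
body = rotation_payload(saved, "NEW_KEY_ID", "NEW_APP_KEY")
print(body["access_key"])  # → NEW_KEY_ID
```

Backend-managed fields such as created and folder_name are deliberately not copied into the update body.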

Update storage
curl --request POST \
  --url http://localhost/api/storages/update \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "id": "STORAGE_ID",
    "name": "AWS media archive updated",
    "type": "STORAGE_TYPE_S3",
    "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "access_key": "AWS_ACCESS_KEY_ID",
    "secret_key": "AWS_SECRET_ACCESS_KEY",
    "redis_meta_data_url": "redis://localhost:6379/2",
    "meta_data": {}
  }'
Response
updated storage object
object

The backend returns the updated storage object.

Response: Update storage
JSON
{
  "_id": "6a0011223344556677889900",
  "id": "6a0011223344556677889900",
  "name": "AWS media archive updated",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {},
  "folder_name": "aws-media-archive",
  "created": "2026-03-24T18:00:00.000Z",
  "success": true
}
Remove storage
DELETE
/api/storages/remove

Use this to retire a storage destination so new recordings and file transfers can no longer select it.

Run it only after confirming no active workflow still depends on that target; the engine detaches it from the control plane and performs cleanup.

Query parameters
id
string

Identifier of the storage target to delete.

Remove storage
curl --request DELETE \
  --url http://localhost/api/storages/remove \
  --header 'x-access-token: <your_api_token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "id": "STORAGE_ID"
  }'
Response
success
boolean

The remove endpoint responds with a success-shaped result after the cleanup path finishes.

Response: Remove storage
JSON
{
  "success": true
}