Storages

The Storages section covers the API endpoints used to register and manage external storage targets in Callaba Engine. This is the layer you use when recorded files, copied media, or archived assets should live in an object storage backend rather than only on the local server.

Where the file-oriented modules operate on individual assets, a storage object describes the destination those assets should end up in when local disk is not their final home. It tells the engine which bucket or storage target receives completed assets and gives operations one durable destination to plan around.

The control methods complete that lifecycle. Use create and update to register the storage target, getAll and getCount for listings, getById when you need one storage object, and remove when the storage should be detached and cleaned up.

This module is operationally sensitive: secrets are provided on create and update, but are intentionally not returned later through listing and lookup methods. The point is to manage the destination safely, not to turn the API into a place where credentials are read back out later.

Examples by preset

The most useful presets here are storage target families rather than transport families. In practice, the common paths are an AWS S3-compatible target, a Backblaze target, and the metadata strategy that goes with them.

AWS S3 storage target
{
  "name": "AWS media archive",
  "type": "STORAGE_TYPE_S3",
  "bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
  "access_key": "AWS_ACCESS_KEY_ID",
  "secret_key": "AWS_SECRET_ACCESS_KEY",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {}
}

Backblaze storage target
{
  "name": "Backblaze media archive",
  "type": "STORAGE_TYPE_BACKBLAZE",
  "bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
  "access_key": "B2_KEY_ID",
  "secret_key": "B2_APPLICATION_KEY",
  "redis_meta_data_url": "redis://localhost:6379/2",
  "meta_data": {}
}
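Both preset bodies share one request shape and differ only in the field values. A small client-side helper can make that shape reusable; this is an illustrative sketch built from the field names above, not part of the Callaba API itself.

```python
# Illustrative helper for assembling a storage payload in the preset
# shapes shown above. The field names mirror the documented request
# body; the function itself is not part of the Callaba API.

def build_storage_payload(name, storage_type, bucket_url,
                          access_key, secret_key,
                          redis_meta_data_url, meta_data=None):
    """Assemble a request body for /api/storages/create or update."""
    return {
        "name": name,
        "type": storage_type,
        "bucket_url": bucket_url,
        "access_key": access_key,
        "secret_key": secret_key,
        "redis_meta_data_url": redis_meta_data_url,
        "meta_data": meta_data or {},
    }

aws = build_storage_payload(
    "AWS media archive",
    "STORAGE_TYPE_S3",
    "https://my-media-bucket.s3.us-east-1.amazonaws.com",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "redis://localhost:6379/2",
)
```

The same helper produces the Backblaze body by swapping in STORAGE_TYPE_BACKBLAZE and the Backblaze endpoint URL.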

Workflow examples

These scenarios frame why a storage object exists in production, not only how the request body is shaped.

Register a durable archive target before recording or file transfer workflows depend on it

Use a storage object when recordings, copied files, or exported assets should land in a managed external bucket rather than stay only on the local machine.

  • Why it helps: one API object gives the team a reusable archive or delivery destination instead of forcing every recording or file-copy workflow to redefine it.
  • Typical fit: long-term archive, shared media repository, and off-instance file retention.

Use one storage object per operational destination rather than mixing archive policies

Different buckets often serve different business goals: long-term archive, short-term exchange, or per-project delivery. Defining them as separate storage objects keeps downstream recording and transfer workflows easier to reason about.

  • Typical fit: separate buckets for archive, editorial handoff, and customer delivery.

Treat secrets as write-only operational inputs

Access keys and secret keys are supplied when the storage is created or updated, but later lookup methods focus on the storage target itself rather than exposing those credentials again.
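One way to honor that write-only contract on the client side is to read credentials from the environment at request time rather than persisting them in config files. A minimal sketch; the environment variable names are illustrative, not mandated by the API.

```python
import os

# Pull write-only storage credentials from the environment at request
# time instead of persisting them in config files. The variable names
# here are illustrative, not mandated by the Callaba API.

def credentials_from_env(key_var="STORAGE_ACCESS_KEY",
                         secret_var="STORAGE_SECRET_KEY"):
    access_key = os.environ.get(key_var)
    secret_key = os.environ.get(secret_var)
    if not access_key or not secret_key:
        raise RuntimeError("storage credentials are not set in the environment")
    return {"access_key": access_key, "secret_key": secret_key}
```

Merge the returned dict into the create or update body only at the moment you send the request, so the secrets never land in version control.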

Create storage
POST
/api/storages/create

Create a new storage target in Callaba Engine.

This method registers the storage location, validates the bucket and metadata parameters, prepares the mount layer, and persists the storage object. In practice it is the method you use before recordings or file transfers should rely on a cloud archive target.

Examples by preset are especially useful here because most real setups are one of a few recognizable storage families, such as AWS S3 or Backblaze.

AWS S3 storage target

Use this preset when recordings or copied assets should land in an AWS S3 bucket.

AWS S3 storage target
curl --request POST \
--url http://localhost/api/storages/create \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"access_key": "AWS_ACCESS_KEY_ID",
"secret_key": "AWS_SECRET_ACCESS_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}'
Backblaze storage target

Use this preset when the storage backend is Backblaze and the bucket URL follows the S3-compatible Backblaze endpoint pattern.

Backblaze storage target
curl --request POST \
--url http://localhost/api/storages/create \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"name": "Backblaze media archive",
"type": "STORAGE_TYPE_BACKBLAZE",
"bucket_url": "https://s3.us-east-005.backblazeb2.com/my-media-bucket",
"access_key": "B2_KEY_ID",
"secret_key": "B2_APPLICATION_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}'
Request body parameters
Identity
name
string

Dashboard label: Storage Name.

The unique storage name. The backend validates a constrained name pattern and length before creating the storage object.

Location
type
string

Dashboard label: Location.

Supported values include STORAGE_TYPE_S3 and STORAGE_TYPE_BACKBLAZE; an internal-disk type also exists and serves as the model default.

bucket_url
string

Dashboard label: Bucket URL.

Full bucket URL for the storage target.

Credentials
access_key
string

Dashboard label: Access key (Key ID).

Write-only credential field used when the storage is created or updated.

secret_key
string

Dashboard label: Secret key (App key).

Write-only credential field used when the storage is created or updated.

Metadata backend
redis_meta_data_url
string

Dashboard label: Metadata URL.

Redis metadata backend used by the storage mounting layer.

meta_data
object

Optional metadata object stored with the storage target.
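Since the backend rejects names that fail its pattern and length checks, a client-side pre-check can fail fast before the request is sent. The exact server-side rule is not published here, so the pattern below (letters, digits, spaces, hyphens, underscores; 1 to 64 characters) is an assumption; treat the API's own validation error as authoritative.

```python
import re

# Client-side pre-check before calling /api/storages/create.
# ASSUMPTION: the server's exact name rule is not published, so this
# pattern (letters, digits, spaces, hyphens, underscores; 1-64 chars)
# is a guess -- the API's own validation error is authoritative.
NAME_RE = re.compile(r"^[A-Za-z0-9 _-]{1,64}$")

def looks_like_valid_storage_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))
```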

Create storage
curl --request POST \
--url http://localhost/api/storages/create \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"access_key": "AWS_ACCESS_KEY_ID",
"secret_key": "AWS_SECRET_ACCESS_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}'
Response
Identity
_id / id / name
mixed

Storage object identifiers and the saved storage name.

Location
type / bucket_url / folder_name
mixed

Storage type, bucket URL, and the normalized folder name derived from the storage name.

Metadata backend
redis_meta_data_url / meta_data
mixed

Saved metadata backend fields for the storage object.

Runtime
created
string

Creation timestamp managed by the backend.

Security
credential material
not returned in later lookups

The create and update flows accept credentials, but list and lookup methods intentionally avoid returning the secret fields back to the client.

Response: Create storage
JSON
{
"_id": "6a0011223344556677889900",
"id": "6a0011223344556677889900",
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {},
"folder_name": "aws-media-archive",
"created": "2026-03-24T18:00:00.000Z",
"success": true
}
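The response above shows folder_name being derived from the storage name ("AWS media archive" becomes "aws-media-archive"). The server's exact normalization rule is not documented, so the sketch below is an approximation of the observed pattern, useful only for predicting paths client-side.

```python
import re

# Approximates the folder-name normalization visible in the create
# response ("AWS media archive" -> "aws-media-archive"). The server's
# exact rule is undocumented; this is a best-effort reconstruction.

def expected_folder_name(storage_name: str) -> str:
    slug = storage_name.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")
```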
Get storages count
POST
/api/storages/getCount

Return the total number of storage targets visible to the authenticated user.

Use this with paginated listings when a control panel needs the current count before fetching storage objects.

Request body parameters
filter
object

Optional filter object. An empty filter, as in the example below, counts every storage target visible to the authenticated user.
Get storages count
curl --request POST \
--url http://localhost/api/storages/getCount \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"filter": {}
}'
Response
count
integer

Total number of storage targets currently visible to the authenticated user.

Response: Get storages count
JSON
{
"count": 1
}
Get all storages
POST
/api/storages/getAll

Return the list of storage targets for the current user.

This is the normal control-plane listing method for storage management. It returns non-secret storage fields such as the name, type, bucket URL, and metadata object.

Request body parameters
limit
integer

Optional page size for the list query.

skip
integer

Optional offset for paginated listing.

sort
object

Optional sort descriptor. The dashboard store defaults to { created: 1 }.

filter
object

Optional filter object forwarded to the backend query.

Get all storages
curl --request POST \
--url http://localhost/api/storages/getAll \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"limit": 10,
"skip": 0,
"sort": {
"created": 1
},
"filter": {}
}'
Response
array of storage objects
array

The backend returns a bare array of non-secret storage objects.

Response: Get all storages
JSON
[
{
"_id": "6a0011223344556677889900",
"id": "6a0011223344556677889900",
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {},
"folder_name": "aws-media-archive",
"created": "2026-03-24T18:00:00.000Z",
"success": true
}
]
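getCount and getAll are designed to be used together: fetch the count, then page through the list with limit and skip. A sketch of that loop; post stands in for whatever HTTP client you use, wired to the curl shape shown above (x-access-token header, JSON body).

```python
import math

# Sketch of paging through /api/storages/getAll using the total from
# /api/storages/getCount. `post(path, body)` is a placeholder for your
# HTTP client, wired to the curl shape documented above.

def fetch_all_storages(post, page_size=10):
    count = post("/api/storages/getCount", {"filter": {}})["count"]
    storages = []
    for page in range(math.ceil(count / page_size)):
        storages += post("/api/storages/getAll", {
            "limit": page_size,
            "skip": page * page_size,
            "sort": {"created": 1},   # dashboard default sort
            "filter": {},
        })
    return storages
```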
Get storage by id
POST
/api/storages/getById

Load one storage target by its id.

Use this when an operator opens the storage editor or when another management view needs the saved non-secret details of one configured target.

Request body parameters
id
string

Identifier of the storage target you want to load.

Get storage by id
curl --request POST \
--url http://localhost/api/storages/getById \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"id": "STORAGE_ID"
}'
Response
storage object
object

The backend returns the saved non-secret storage fields for the requested target.

Response: Get storage by id
JSON
{
"_id": "6a0011223344556677889900",
"id": "6a0011223344556677889900",
"name": "AWS media archive",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {},
"folder_name": "aws-media-archive",
"created": "2026-03-24T18:00:00.000Z",
"success": true
}
Update storage
POST
/api/storages/update

Update an existing storage target.

The update contract mirrors the create path closely. In practice this is where you adjust the bucket URL, metadata backend URL, or replace credentials for the storage target.

Request body parameters
id
string

Identifier of the storage target being updated.

storage payload fields
mixed

The same fields accepted by create (name, type, bucket_url, access_key, secret_key, redis_meta_data_url, meta_data), as shown in the example below.

Update storage
curl --request POST \
--url http://localhost/api/storages/update \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"id": "STORAGE_ID",
"name": "AWS media archive updated",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"access_key": "AWS_ACCESS_KEY_ID",
"secret_key": "AWS_SECRET_ACCESS_KEY",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {}
}'
Response
updated storage object
object

The backend returns the updated storage object.

Response: Update storage
JSON
{
"_id": "6a0011223344556677889900",
"id": "6a0011223344556677889900",
"name": "AWS media archive updated",
"type": "STORAGE_TYPE_S3",
"bucket_url": "https://my-media-bucket.s3.us-east-1.amazonaws.com",
"redis_meta_data_url": "redis://localhost:6379/2",
"meta_data": {},
"folder_name": "aws-media-archive",
"created": "2026-03-24T18:00:00.000Z",
"success": true
}
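Because lookups never return credentials, an update body is typically built from the non-secret object returned by getById plus the fields being changed, with keys re-supplied only when they are being rotated. The helper below sketches that merge; its names are illustrative, not part of the API.

```python
# Sketch: build an update body from the non-secret object returned by
# getById plus the fields you are changing. Credentials are included
# only when deliberately rotating them, since lookups never return
# them. The helper and constant names are illustrative, not API names.

UPDATE_FIELDS = ("name", "type", "bucket_url", "redis_meta_data_url", "meta_data")

def build_update_payload(current, changes, new_credentials=None):
    body = {"id": current["id"]}
    for field in UPDATE_FIELDS:
        if field in current:
            body[field] = current[field]
    body.update(changes)
    if new_credentials:
        body["access_key"] = new_credentials["access_key"]
        body["secret_key"] = new_credentials["secret_key"]
    return body
```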
Remove storage
DELETE
/api/storages/remove

Remove a storage target by id.

This detaches the configured storage target from the control plane and runs the cleanup path behind it. Use remove when the storage should no longer be available as a destination for recordings or file transfer workflows.

Request body parameters
id
string

Identifier of the storage target to delete, passed in the JSON request body as shown in the example.

Remove storage
curl --request DELETE \
--url http://localhost/api/storages/remove \
--header 'x-access-token: <your_api_token>' \
--header 'Content-Type: application/json' \
--data '{
"id": "STORAGE_ID"
}'
Response
success
boolean

The remove endpoint responds with a success-shaped result after the cleanup path finishes.

Response: Remove storage
JSON
{
"success": true
}