# StorageManager

Defined in: packages/synapse-sdk/src/storage/manager.ts:124

## Constructors

### new StorageManager()

`new StorageManager(options): StorageManager`

Defined in: packages/synapse-sdk/src/storage/manager.ts:135

Creates a new StorageManager

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options` | `StorageManagerOptions` | The options for the `StorageManager` |

#### Returns

`StorageManager`

## Methods

### createContext()

`createContext(options?): Promise<StorageContext>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:679

Creates a single storage context with the specified options.

#### Parameters

| Parameter | Type |
| --- | --- |
| `options?` | `StorageServiceOptions` |

#### Returns

`Promise<StorageContext>`


### createContexts()

`createContexts(options?): Promise<StorageContext[]>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:621

Creates storage contexts for multi-provider storage deals and other operations.

By storing data with multiple independent providers, you reduce dependency on any single provider and improve overall data availability. Use the returned contexts together as a group.

Contexts are selected by priority:

  1. Specified datasets (dataSetIds) - uses their existing providers
  2. Specified providers (providerIds) - finds or creates matching datasets
  3. Automatically selected from remaining approved providers

For automatic selection, existing datasets matching the metadata are reused. Providers are chosen at random to distribute data across the network.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options?` | `CreateContextsOptions` | Configuration options |

#### Returns

`Promise<StorageContext[]>`

Promise resolving to an array of storage contexts
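The selection priority above can be sketched as a small self-contained function. The types below are simplified stand-ins for the SDK's actual interfaces (`CreateContextsOptions` and the provider/dataset records), not the real implementation:

```typescript
// Simplified stand-ins for the SDK's types (illustration only).
interface Provider { id: number; approved: boolean }
interface DataSet { id: bigint; providerId: number }

// Sketch of the documented selection priority:
//   1. explicit dataSetIds -> providers of those existing datasets
//   2. explicit providerIds
//   3. remaining approved providers, chosen at random
function selectProviderIds(
  count: number,
  providers: Provider[],
  dataSets: DataSet[],
  opts: { dataSetIds?: bigint[]; providerIds?: number[] } = {}
): number[] {
  const chosen: number[] = []
  // Priority 1: providers of the explicitly named datasets.
  for (const id of opts.dataSetIds ?? []) {
    const ds = dataSets.find((d) => d.id === id)
    if (ds !== undefined && !chosen.includes(ds.providerId)) chosen.push(ds.providerId)
  }
  // Priority 2: explicitly named providers.
  for (const id of opts.providerIds ?? []) {
    if (!chosen.includes(id)) chosen.push(id)
  }
  // Priority 3: fill the remainder from approved providers at random.
  const remaining = providers.filter((p) => p.approved && !chosen.includes(p.id))
  while (chosen.length < count && remaining.length > 0) {
    const i = Math.floor(Math.random() * remaining.length)
    chosen.push(remaining.splice(i, 1)[0].id)
  }
  return chosen.slice(0, count)
}
```

Note that explicit choices always win over random selection, so passing `dataSetIds` pins the first slots deterministically.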


### download()

`download(options): Promise<Uint8Array<ArrayBufferLike>>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:504

Downloads data from storage. If a context is provided, the call routes to `context.download()`; otherwise it performs an SP-agnostic download.

#### Parameters

| Parameter | Type |
| --- | --- |
| `options` | `StorageManagerDownloadOptions` |

#### Returns

`Promise<Uint8Array<ArrayBufferLike>>`
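The routing behavior can be illustrated with a minimal dispatcher. The `pieceCid` and `context` field names and the fallback function below are assumptions for illustration; consult `StorageManagerDownloadOptions` for the actual fields:

```typescript
// Simplified shapes for illustration; the real StorageManagerDownloadOptions
// and StorageContext types differ. `pieceCid` is an assumed field name.
interface ContextLike {
  download: (pieceCid: string) => Promise<Uint8Array>
}
interface DownloadOptionsLike {
  pieceCid: string
  context?: ContextLike
}

// Route to context.download() when a context is given; otherwise fall
// back to a provider-agnostic fetch (passed in here as a stub).
async function routeDownload(
  options: DownloadOptionsLike,
  spAgnosticFetch: (pieceCid: string) => Promise<Uint8Array>
): Promise<Uint8Array> {
  if (options.context !== undefined) {
    return options.context.download(options.pieceCid)
  }
  return spAgnosticFetch(options.pieceCid)
}
```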


### findDataSets()

`findDataSets(options?): Promise<EnhancedDataSetInfo[]>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:752

Queries data sets for this client.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options` | ``{ address?: `0x${string}` }`` | The options for finding data sets |
| `options.address?` | `` `0x${string}` `` | The client address; defaults to the current signer |

#### Returns

`Promise<EnhancedDataSetInfo[]>`

Array of enhanced data set information, including management status


### getDefaultContext()

`getDefaultContext(): Promise<StorageContext>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:742

Gets or creates the default context.

#### Returns

`Promise<StorageContext>`


### getStorageInfo()

`getStorageInfo(): Promise<StorageInfo>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:773

Gets comprehensive information about the storage service, including approved providers, pricing, contract addresses, and current allowances.

#### Returns

`Promise<StorageInfo>`

Complete storage service information


### preflightUpload()

`preflightUpload(options): Promise<PreflightInfo>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:572

Runs preflight checks for an upload without creating a context.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options` | `{ metadata?: Record<string, string>; size: number; withCDN?: boolean }` | The options for the preflight upload |
| `options.metadata?` | `Record<string, string>` | Metadata for the preflight upload |
| `options.size` | `number` | The size of the data to upload, in bytes |
| `options.withCDN?` | `boolean` | Whether to enable CDN services |

#### Returns

`Promise<PreflightInfo>`

Preflight information, including costs and allowances
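A preflight check of this kind is typically used to decide whether an upload can proceed before any data moves. The result fields below (`estimatedCost`, `allowanceSufficient`) are illustrative assumptions, not the actual `PreflightInfo` shape:

```typescript
// Illustrative result shape; the real PreflightInfo differs. Field names
// here (estimatedCost, allowanceSufficient) are assumptions.
interface PreflightSketch {
  estimatedCost: bigint
  allowanceSufficient: boolean
}

// Estimate cost from a per-byte price and compare it against the current
// allowance, mirroring the kind of check a preflight performs.
function preflightSketch(
  sizeBytes: number,
  pricePerByte: bigint,
  currentAllowance: bigint
): PreflightSketch {
  const estimatedCost = BigInt(sizeBytes) * pricePerByte
  return {
    estimatedCost,
    allowanceSufficient: currentAllowance >= estimatedCost,
  }
}
```

Running the check first avoids starting an upload that would fail later for lack of allowance.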


### terminateDataSet()

``terminateDataSet(options): Promise<`0x${string}`>``

Defined in: packages/synapse-sdk/src/storage/manager.ts:764

Terminates the data set with the given ID that belongs to the Synapse signer. This also removes all pieces in the data set.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options` | `{ dataSetId: bigint }` | The options for terminating the data set |
| `options.dataSetId` | `bigint` | The ID of the data set to terminate |

#### Returns

``Promise<`0x${string}`>``

Transaction hash


### upload()

`upload(data, options?): Promise<UploadResult>`

Defined in: packages/synapse-sdk/src/storage/manager.ts:165

Upload data to Filecoin Onchain Cloud using a store->pull->commit flow across multiple providers.

By default, uploads to 2 providers (primary + secondary) for redundancy. Data is uploaded once to the primary, then secondaries pull from the primary via SP-to-SP transfer.

This method only throws if zero copies succeed. Individual copy failures are recorded in `result.failures`. Always check `result.copies.length` against your requested count.

For large files, prefer streaming to minimize memory usage.

For uploading multiple files, use the split operations API directly: `createContexts()` -> `store()` -> `presignForCommit()` -> `pull()` -> `commit()`

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `data` | `UploadPieceStreamingData` | Raw bytes (`Uint8Array`) or `ReadableStream` to upload |
| `options?` | `StorageManagerUploadOptions` | Upload options, including contexts, callbacks, and abort signal |

#### Returns

`Promise<UploadResult>`

Upload result with `pieceCid`, `size`, `copies` array, and `failures` array

#### Throws

`StoreError` if the primary store fails (before any data is committed)

`CommitError` if all commit attempts fail (data stored but not on-chain)
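Because `upload()` only throws when zero copies succeed, callers should verify the copy count themselves, as the description notes. A sketch of that check, with illustrative result shapes (the real `UploadResult` fields may carry more detail):

```typescript
// Illustrative result shapes following the description above.
interface CopyFailure { providerId: number; error: string }
interface UploadResultSketch {
  pieceCid: string
  size: number
  copies: Array<{ providerId: number }>
  failures: CopyFailure[]
}

// upload() only throws when zero copies succeed, so partial failures
// must be detected by the caller by comparing the copies array against
// the requested redundancy.
function hasFullRedundancy(result: UploadResultSketch, requested: number): boolean {
  if (result.copies.length < requested) {
    for (const f of result.failures) {
      console.warn(`copy to provider ${f.providerId} failed: ${f.error}`)
    }
    return false
  }
  return true
}
```

A caller that needs guaranteed redundancy can retry the missing copies (for example via the split operations API) when this check returns `false`.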