Storage Operations

This guide covers the primary storage API — synapse.storage.upload() — which stores your data with multiple providers for redundancy. For manual control over each upload phase, see Split Operations.

Data Set: A logical container of pieces stored with one provider. When a data set is created, a payment rail is established with that provider. All pieces in the data set share this single payment rail and are verified together via PDP proofs.

PieceCID: Content-addressed identifier for your data (format: bafkzcib...). Automatically calculated during upload and used to retrieve data from any provider.

Metadata: Optional key-value pairs for organization:

  • Data Set Metadata: Max 10 keys (e.g., project, environment)
  • Piece Metadata: Max 5 keys per piece (e.g., filename, contentType)
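The limits above can be checked client-side before an upload is attempted. A minimal sketch; these helpers are illustrative and not part of the SDK, which enforces its own validation:

```typescript
// Illustrative pre-flight check for the documented metadata limits
// (10 keys per data set, 5 keys per piece). Not an SDK API.
const MAX_DATA_SET_KEYS = 10
const MAX_PIECE_KEYS = 5

function assertMetadataLimit(
  metadata: Record<string, string>,
  maxKeys: number,
  label: string
): void {
  const count = Object.keys(metadata).length
  if (count > maxKeys) {
    throw new Error(`${label} metadata has ${count} keys; max is ${maxKeys}`)
  }
}

assertMetadataLimit({ project: "demo", environment: "prod" }, MAX_DATA_SET_KEYS, "Data set")
assertMetadataLimit({ filename: "hello.txt", contentType: "text/plain" }, MAX_PIECE_KEYS, "Piece")
```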

Copies and Durability: By default, upload() stores your data with 2 independent providers. Each provider maintains its own data set with separate PDP proofs and payment rails. If one provider goes down, your data is still available from the other.

Storage Manager: The main entry point for storage operations (synapse.storage). Handles provider selection, multi-copy orchestration, data set management, and provider-agnostic downloads.

Upload data with a single call — the SDK selects providers and handles multi-copy replication automatically:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
const data = new Uint8Array([1, 2, 3, 4, 5])
const { pieceCid, size, copies, failures } = await synapse.storage.upload(data)
console.log("PieceCID:", pieceCid.toString())
console.log("Size:", size, "bytes")
console.log("Stored on", copies.length, "providers")
for (const copy of copies) {
  console.log(`  Provider ${copy.providerId}: role=${copy.role}, dataSet=${copy.dataSetId}`)
}
if (failures.length > 0) {
  console.warn("Some copies failed:", failures)
}

The result contains:

  • pieceCid — content address of your data, used for downloads
  • size — size of the uploaded data in bytes
  • copies — array of successful copies, each with providerId, dataSetId, pieceId, role ('primary' or 'secondary'), retrievalUrl, and isNewDataSet
  • failures — array of failed copy attempts (partial failures are returned, not thrown), each with providerId, role, error, and explicit
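For reference, the result shape described above can be modeled like this. Field names follow this guide, but the exact types (e.g. number vs bigint IDs) are assumptions here; the SDK's exported types are authoritative:

```typescript
// Illustrative model of the upload result fields listed above.
// Field names come from this guide; concrete types are assumptions.
type CopyRole = "primary" | "secondary"

interface UploadCopy {
  providerId: number
  dataSetId: number
  pieceId: number
  role: CopyRole
  retrievalUrl: string
  isNewDataSet: boolean
}

interface UploadFailure {
  providerId: number
  role: CopyRole
  error: Error
  explicit: boolean // whether this provider was explicitly requested
}

// One-line summary of an upload outcome, for logging.
function summarizeUpload(copies: UploadCopy[], failures: UploadFailure[]): string {
  const primaryOk = copies.some((c) => c.role === "primary")
  return `${copies.length} stored, ${failures.length} failed, primary ${primaryOk ? "ok" : "missing"}`
}
```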

Attach metadata to organize uploads. The SDK reuses existing data sets when metadata matches, avoiding duplicate payment rails:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
const data = new TextEncoder().encode("Hello, Filecoin!")
const result = await synapse.storage.upload(data, {
  metadata: {
    Application: "My DApp",
    Version: "1.0.0",
    Category: "Documents",
  },
  pieceMetadata: {
    filename: "hello.txt",
    contentType: "text/plain",
  },
})
console.log("Uploaded:", result.pieceCid.toString())

Subsequent uploads with the same metadata reuse the same data sets and payment rails.

Track the lifecycle of a multi-copy upload with callbacks:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
const data = new Uint8Array(1024) // 1KB of data
const result = await synapse.storage.upload(data, {
  callbacks: {
    onStored: (providerId, pieceCid) => {
      console.log(`Data stored on provider ${providerId}`)
    },
    onCopyComplete: (providerId, pieceCid) => {
      console.log(`Secondary copy complete on provider ${providerId}`)
    },
    onCopyFailed: (providerId, pieceCid, error) => {
      console.warn(`Copy failed on provider ${providerId}:`, error.message)
    },
    onPullProgress: (providerId, pieceCid, status) => {
      console.log(`Pull to provider ${providerId}: ${status}`)
    },
    onPiecesAdded: (txHash, providerId, pieces) => {
      console.log(`On-chain commit submitted: ${txHash}`)
    },
    onPiecesConfirmed: (dataSetId, providerId, pieces) => {
      console.log(`Confirmed on-chain: dataSet=${dataSetId}, provider=${providerId}`)
    },
    onProgress: (bytesUploaded) => {
      console.log(`Uploaded ${bytesUploaded} bytes`)
    },
  },
})

Callback lifecycle:

  1. onProgress — fires during upload to primary provider
  2. onStored — primary upload complete, piece parked on SP
  3. onPullProgress — SP-to-SP transfer status for secondaries
  4. onCopyComplete / onCopyFailed — secondary pull result
  5. onPiecesAdded — commit transaction submitted
  6. onPiecesConfirmed — commit confirmed on-chain
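One way to consume this lifecycle is to record events in the order they fire, for example to drive a progress UI. A sketch using the callback names above; the recorder itself is an illustrative pattern, not an SDK feature:

```typescript
// Records lifecycle events in firing order, e.g. for progress reporting.
// Callback names match the lifecycle above; the recorder is illustrative.
function makeLifecycleRecorder() {
  const events: string[] = []
  const callbacks = {
    onProgress: (bytesUploaded: number) => events.push(`progress:${bytesUploaded}`),
    onStored: (providerId: number) => events.push(`stored:${providerId}`),
    onPullProgress: (providerId: number, _pieceCid: unknown, status: string) =>
      events.push(`pull:${providerId}:${status}`),
    onCopyComplete: (providerId: number) => events.push(`copy-ok:${providerId}`),
    onCopyFailed: (providerId: number, _pieceCid: unknown, error: Error) =>
      events.push(`copy-failed:${providerId}:${error.message}`),
    onPiecesAdded: (txHash: string) => events.push(`added:${txHash}`),
    onPiecesConfirmed: (dataSetId: number) => events.push(`confirmed:${dataSetId}`),
  }
  return { events, callbacks }
}

// Usage: pass recorder.callbacks as the `callbacks` option to upload(),
// then inspect recorder.events after the upload resolves.
const recorder = makeLifecycleRecorder()
```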

Adjust the number of copies for your durability requirements:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
const data = new Uint8Array(256)
// Store 3 copies for higher redundancy
const result3 = await synapse.storage.upload(data, { count: 3 })
console.log("3 copies:", result3.copies.length)
// Store a single copy when redundancy isn't needed
const result1 = await synapse.storage.upload(data, { count: 1 })
console.log("1 copy:", result1.copies.length)

The default is 2 copies. The first copy is stored on an endorsed provider (high trust, curated), and secondary copies are pulled via SP-to-SP transfer from approved providers.

upload() is designed around partial success over atomicity: it commits whatever succeeded rather than throwing away successful work. This means the return value is the primary interface for understanding what happened — not just whether it threw.

upload() only throws in these cases:

  • StoreError: the primary upload failed and no data was committed anywhere. Retry the upload.
  • CommitError: data is stored on providers, but all on-chain commits failed. Use split operations to retry commit() without re-uploading.
  • Selection error: no endorsed provider is available or reachable. Check provider health and network connectivity.

If upload() returns (no throw), at least one copy is committed on-chain. But the result may contain fewer copies than requested. Every copy in copies[] represents a committed on-chain data set that the user is now paying for.

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
const data = new Uint8Array(256)
const result = await synapse.storage.upload(data, { count: 2 })
// Check: did we get all requested copies?
if (result.copies.length < 2) {
  console.warn(`Only ${result.copies.length}/2 copies succeeded`)
  for (const failure of result.failures) {
    console.warn(`  Provider ${failure.providerId} (${failure.role}): ${failure.error}`)
  }
}
// Check: did the endorsed primary succeed?
const primaryFailed = result.failures.find(f => f.role === "primary")
if (primaryFailed) {
  console.warn(`Endorsed provider failed: ${primaryFailed.error}`)
  // Data is only on non-endorsed secondaries
}
// Every copy is committed and being paid for
for (const copy of result.copies) {
  console.log(`Provider ${copy.providerId}, dataset ${copy.dataSetId}, piece ${copy.pieceId}`)
}
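When upload() does throw, the error type tells you which recovery applies. A minimal dispatch sketch; matching on error.name is an assumption about how the SDK surfaces these errors, so prefer instanceof against the SDK's exported error classes where available:

```typescript
// Maps a thrown upload error to a recovery action, per the cases above.
// NOTE: matching on error.name is an assumption; check the SDK's actual
// error exports and use instanceof where possible.
function classifyUploadError(err: unknown): "retry-upload" | "retry-commit" | "investigate" {
  const name = err instanceof Error ? err.name : ""
  if (name === "StoreError") return "retry-upload" // nothing committed; safe to retry upload()
  if (name === "CommitError") return "retry-commit" // data parked; retry commit() via split operations
  return "investigate" // e.g. no endorsed provider reachable
}
```

A caller would wrap upload() in try/catch and branch on the returned action.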

For auto-selected providers (no explicit providerIds or dataSetIds), the SDK automatically retries failed secondaries with alternate providers up to 5 times. If you explicitly specify providers, the SDK respects your choice and does not retry.
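Because the SDK does not retry when you pin providers explicitly, callers in that mode must add their own retries if they want them. A generic sketch with an injected attempt function; nothing here is SDK-specific:

```typescript
// Generic retry wrapper for explicitly-pinned uploads, which the SDK will
// not retry on your behalf. `attempt` is any async operation.
async function withRetries<T>(attempt: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < maxAttempts; i++) {
    try {
      return await attempt()
    } catch (err) {
      lastError = err // remember the failure and try again
    }
  }
  throw lastError
}

// Hypothetical usage, assuming a providerIds option as described above:
// await withRetries(() => synapse.storage.upload(data, { providerIds: [7] }))
```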

Download from any provider that has the piece — the SDK resolves the provider automatically:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })
// Download using PieceCID from a previous upload
const pieceCid = "bafkzcib..." // from upload result
const bytes = await synapse.storage.download({ pieceCid })
const text = new TextDecoder().decode(bytes)
console.log("Downloaded:", text)

For CDN-accelerated downloads:

import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"
// Enable CDN globally
const synapse = Synapse.create({
  account: privateKeyToAccount("0x..."),
  withCDN: true,
})
const bytes = await synapse.storage.download({ pieceCid: "bafkzcib..." })
// Or per-download:
const bytes2 = await synapse.storage.download({
  pieceCid: "bafkzcib...",
  withCDN: true,
})

Retrieve all data sets owned by your account to inspect piece counts, CDN status, and metadata:

const dataSets = await synapse.storage.findDataSets();
for (const ds of dataSets) {
  console.log(`Dataset ${ds.pdpVerifierDataSetId}:`, {
    live: ds.isLive,
    cdn: ds.withCDN,
    pieces: ds.activePieceCount,
    metadata: ds.metadata,
  });
}

List all pieces stored in a specific data set by iterating through a context:

const context = await synapse.storage.createContext({ dataSetId });
const pieces = [];
for await (const piece of context.getPieces()) {
  pieces.push(piece);
}
console.log(`Found ${pieces.length} pieces`);

Access custom metadata attached to individual pieces:

const warmStorage = WarmStorageService.create({ account: privateKeyToAccount('0x...') });
const metadata = await warmStorage.getPieceMetadata({ dataSetId, pieceId: piece.pieceId });
console.log("Piece metadata:", metadata);

Calculate size of a specific piece by extracting the size from the PieceCID:

import { getSizeFromPieceCID } from "@filoz/synapse-sdk/piece";
const size = getSizeFromPieceCID(pieceCid);
console.log(`Piece size: ${size} bytes`);

Query service-wide pricing, available providers, and network parameters:

const info = await synapse.storage.getStorageInfo();
console.log("Price/TiB/month:", info.pricing.noCDN.perTiBPerMonth);
console.log("Providers:", info.providers.length);
const providerInfo = await synapse.getProviderInfo("0x...");
console.log("PDP URL:", providerInfo.pdp.serviceURL);
  • Split Operations — Manual control over store, pull, and commit phases for batch uploads, custom error handling, and direct core library usage.

  • Plan Storage Costs — Calculate your monthly costs and understand funding requirements.

  • Payment Management — Manage deposits, approvals, and payment rails.