Storage Operations
This guide covers the primary storage API — synapse.storage.upload() — which stores your data with multiple providers for redundancy. For manual control over each upload phase, see Split Operations.
Key Concepts
Data Set: A logical container of pieces stored with one provider. When a data set is created, a payment rail is established with that provider. All pieces in the data set share this single payment rail and are verified together via PDP proofs.
PieceCID: Content-addressed identifier for your data (format: bafkzcib...). Automatically calculated during upload and used to retrieve data from any provider.
Metadata: Optional key-value pairs for organization:
- Data Set Metadata: Max 10 keys (e.g., `project`, `environment`)
- Piece Metadata: Max 5 keys per piece (e.g., `filename`, `contentType`)
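As a sketch of these limits, here is a hypothetical client-side check. The SDK enforces the limits itself; `checkMetadataLimits` and its constants are illustrative helpers, not part of the SDK:

```typescript
// Hypothetical pre-flight check of the documented metadata limits:
// max 10 data set keys, max 5 piece keys. Not an SDK API.
const MAX_DATA_SET_KEYS = 10
const MAX_PIECE_KEYS = 5

function checkMetadataLimits(
  metadata: Record<string, string> = {},
  pieceMetadata: Record<string, string> = {}
): string[] {
  const problems: string[] = []
  const dsKeys = Object.keys(metadata).length
  const pieceKeys = Object.keys(pieceMetadata).length
  if (dsKeys > MAX_DATA_SET_KEYS) {
    problems.push(`data set metadata has ${dsKeys} keys (max ${MAX_DATA_SET_KEYS})`)
  }
  if (pieceKeys > MAX_PIECE_KEYS) {
    problems.push(`piece metadata has ${pieceKeys} keys (max ${MAX_PIECE_KEYS})`)
  }
  return problems
}

// Within limits: returns an empty problem list
console.log(checkMetadataLimits({ project: "demo", environment: "prod" }, { filename: "a.txt" }))
// → []
```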
Copies and Durability: By default, upload() stores your data with 2 independent providers. Each provider maintains its own data set with separate PDP proofs and payment rails. If one provider goes down, your data is still available from the other.
Storage Manager: The main entry point for storage operations (synapse.storage). Handles provider selection, multi-copy orchestration, data set management, and provider-agnostic downloads.
Quick Start
Upload data with a single call — the SDK selects providers and handles multi-copy replication automatically:
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const data = new Uint8Array([1, 2, 3, 4, 5])
const { pieceCid, size, copies, failures } = await synapse.storage.upload(data)

console.log("PieceCID:", pieceCid.toString())
console.log("Size:", size, "bytes")
console.log("Stored on", copies.length, "providers")

for (const copy of copies) {
  console.log(`  Provider ${copy.providerId}: role=${copy.role}, dataSet=${copy.dataSetId}`)
}

if (failures.length > 0) {
  console.warn("Some copies failed:", failures)
}
```

The result contains:

- `pieceCid` — content address of your data, used for downloads
- `size` — size of the uploaded data in bytes
- `copies` — array of successful copies, each with `providerId`, `dataSetId`, `pieceId`, `role` (`'primary'` or `'secondary'`), `retrievalUrl`, and `isNewDataSet`
- `failures` — array of failed copy attempts (partial failures are returned, not thrown), each with `providerId`, `role`, `error`, and `explicit`
Upload with Metadata
Attach metadata to organize uploads. The SDK reuses existing data sets when metadata matches, avoiding duplicate payment rails:
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const data = new TextEncoder().encode("Hello, Filecoin!")

const result = await synapse.storage.upload(data, {
  metadata: {
    Application: "My DApp",
    Version: "1.0.0",
    Category: "Documents",
  },
  pieceMetadata: {
    filename: "hello.txt",
    contentType: "text/plain",
  },
})

console.log("Uploaded:", result.pieceCid.toString())
```

Subsequent uploads with the same metadata reuse the same data sets and payment rails.
Upload with Callbacks
Track the lifecycle of a multi-copy upload with callbacks:
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const data = new Uint8Array(1024) // 1KB of data

const result = await synapse.storage.upload(data, {
  callbacks: {
    onStored: (providerId, pieceCid) => {
      console.log(`Data stored on provider ${providerId}`)
    },
    onCopyComplete: (providerId, pieceCid) => {
      console.log(`Secondary copy complete on provider ${providerId}`)
    },
    onCopyFailed: (providerId, pieceCid, error) => {
      console.warn(`Copy failed on provider ${providerId}:`, error.message)
    },
    onPullProgress: (providerId, pieceCid, status) => {
      console.log(`Pull to provider ${providerId}: ${status}`)
    },
    onPiecesAdded: (txHash, providerId, pieces) => {
      console.log(`On-chain commit submitted: ${txHash}`)
    },
    onPiecesConfirmed: (dataSetId, providerId, pieces) => {
      console.log(`Confirmed on-chain: dataSet=${dataSetId}, provider=${providerId}`)
    },
    onProgress: (bytesUploaded) => {
      console.log(`Uploaded ${bytesUploaded} bytes`)
    },
  },
})
```

Callback lifecycle:

- `onProgress` — fires during upload to the primary provider
- `onStored` — primary upload complete, piece parked on the SP
- `onPullProgress` — SP-to-SP transfer status for secondaries
- `onCopyComplete` / `onCopyFailed` — secondary pull result
- `onPiecesAdded` — commit transaction submitted
- `onPiecesConfirmed` — commit confirmed on-chain
Controlling Copy Count
Adjust the number of copies for your durability requirements:
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const data = new Uint8Array(256)

// Store 3 copies for higher redundancy
const result3 = await synapse.storage.upload(data, { count: 3 })
console.log("3 copies:", result3.copies.length)

// Store a single copy when redundancy isn't needed
const result1 = await synapse.storage.upload(data, { count: 1 })
console.log("1 copy:", result1.copies.length)
```

The default is 2 copies. The first copy is stored on an endorsed provider (high trust, curated), and secondary copies are pulled via SP-to-SP transfer from approved providers.
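Since every copy carries its own payment rail, storage cost scales roughly linearly with copy count. A rough sketch, assuming pricing is quoted per TiB per month (as in the Storage Information section below); the rate used here is purely illustrative:

```typescript
// Illustrative cost estimate: cost grows linearly with copy count.
// Real rates come from synapse.storage.getStorageInfo(); the numbers
// below are made up for the example.
const TIB = 1024n ** 4n

function monthlyCost(sizeBytes: bigint, copies: bigint, perTiBPerMonth: bigint): bigint {
  // Result in the smallest token unit, rounded down
  return (sizeBytes * copies * perTiBPerMonth) / TIB
}

// 10 GiB stored as 3 copies at an illustrative rate of 2,000,000 units/TiB/month
const cost = monthlyCost(10n * 1024n ** 3n, 3n, 2_000_000n)
console.log(`Estimated monthly cost: ${cost} units`)
```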
Understanding the Result
upload() favors partial success over atomicity: it commits whatever succeeded rather than throwing away successful work. The return value — not just whether the call threw — is therefore the primary interface for understanding what happened.
When upload() throws
upload() only throws in these cases:
| Error | What happened | What to do |
|---|---|---|
| StoreError | Primary upload failed — no data committed anywhere | Retry the upload |
| CommitError | Data is stored on providers but all on-chain commits failed | Use split operations to retry commit() without re-uploading |
| Selection error | No endorsed provider available or reachable | Check provider health / network |
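The table above can be turned into a recovery policy. A sketch, assuming the thrown errors can be distinguished by their `name` property; the exact error classes and names (including `SelectionError` here) should be verified against the SDK before relying on this:

```typescript
// Sketch of a recovery policy for the failure modes in the table above.
// Matching on error.name is an assumption about how the SDK exposes
// StoreError / CommitError; verify before using.
type Recovery = "retry-upload" | "retry-commit" | "check-providers" | "unknown"

function recoveryFor(error: Error): Recovery {
  switch (error.name) {
    case "StoreError":
      return "retry-upload" // nothing committed anywhere; safe to start over
    case "CommitError":
      return "retry-commit" // data is stored; retry commit() via split operations
    case "SelectionError":
      return "check-providers" // no endorsed provider available or reachable
    default:
      return "unknown"
  }
}
```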
When upload() returns
If upload() returns (no throw), at least one copy is committed on-chain. But the result may contain fewer copies than requested. Every copy in copies[] represents a committed on-chain data set that the user is now paying for.
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const data = new Uint8Array(256)
const result = await synapse.storage.upload(data, { count: 2 })

// Check: did we get all requested copies?
if (result.copies.length < 2) {
  console.warn(`Only ${result.copies.length}/2 copies succeeded`)
  for (const failure of result.failures) {
    console.warn(`  Provider ${failure.providerId} (${failure.role}): ${failure.error}`)
  }
}

// Check: did the endorsed primary succeed?
const primaryFailed = result.failures.find((f) => f.role === "primary")
if (primaryFailed) {
  console.warn(`Endorsed provider failed: ${primaryFailed.error}`)
  // Data is only on non-endorsed secondaries
}

// Every copy is committed and being paid for
for (const copy of result.copies) {
  console.log(`Provider ${copy.providerId}, dataset ${copy.dataSetId}, piece ${copy.pieceId}`)
}
```

Auto-retry behavior
For auto-selected providers (no explicit providerIds or dataSetIds), the SDK automatically retries failed secondaries with alternate providers up to 5 times. If you explicitly specify providers, the SDK respects your choice and does not retry.
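The retry behavior can be pictured as a simple loop over candidate providers. This is an illustrative sketch, not the SDK's actual implementation; `pullFromProvider` stands in for the internal SP-to-SP pull:

```typescript
// Illustrative sketch of the auto-retry described above: try the
// secondary pull against alternate providers, up to 5 attempts total.
const MAX_ATTEMPTS = 5

async function pullWithRetry(
  candidates: number[],
  pullFromProvider: (providerId: number) => Promise<void>
): Promise<number | null> {
  for (const providerId of candidates.slice(0, MAX_ATTEMPTS)) {
    try {
      await pullFromProvider(providerId)
      return providerId // first provider that succeeds wins
    } catch {
      // move on to the next candidate
    }
  }
  return null // all attempts failed; surfaced in result.failures
}
```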
Download
Download from any provider that has the piece — the SDK resolves the provider automatically:
```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

// Download using PieceCID from a previous upload
const pieceCid = "bafkzcib..." // from upload result
const bytes = await synapse.storage.download({ pieceCid })
const text = new TextDecoder().decode(bytes)
console.log("Downloaded:", text)
```

For CDN-accelerated downloads:

```typescript
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

// Enable CDN globally
const synapse = Synapse.create({
  account: privateKeyToAccount("0x..."),
  withCDN: true,
})

const bytes = await synapse.storage.download({ pieceCid: "bafkzcib..." })

// Or per-download:
const bytes2 = await synapse.storage.download({
  pieceCid: "bafkzcib...",
  withCDN: true,
})
```

Data Set Management
Section titled “Data Set Management”Getting all data sets
Retrieve all data sets owned by your account to inspect piece counts, CDN status, and metadata:
```typescript
const dataSets = await synapse.storage.findDataSets()

for (const ds of dataSets) {
  console.log(`Dataset ${ds.pdpVerifierDataSetId}:`, {
    live: ds.isLive,
    cdn: ds.withCDN,
    pieces: ds.activePieceCount,
    metadata: ds.metadata,
  })
}
```

Getting data set pieces
List all pieces stored in a specific data set by iterating through a context:
```typescript
const context = await synapse.storage.createContext({ dataSetId })

const pieces = []
for await (const piece of context.getPieces()) {
  pieces.push(piece)
}
console.log(`Found ${pieces.length} pieces`)
```

Getting piece metadata
Access custom metadata attached to individual pieces:
```typescript
const warmStorage = WarmStorageService.create({ account: privateKeyToAccount("0x...") })

const metadata = await warmStorage.getPieceMetadata({ dataSetId, pieceId: piece.pieceId })
console.log("Piece metadata:", metadata)
```

Getting the size of a specific piece
Calculate the size of a specific piece by extracting it from the PieceCID:
```typescript
import { getSizeFromPieceCID } from "@filoz/synapse-sdk/piece"

const size = getSizeFromPieceCID(pieceCid)
console.log(`Piece size: ${size} bytes`)
```

Storage Information
Query service-wide pricing, available providers, and network parameters:
```typescript
const info = await synapse.storage.getStorageInfo()
console.log("Price/TiB/month:", info.pricing.noCDN.perTiBPerMonth)
console.log("Providers:", info.providers.length)

const providerInfo = await synapse.getProviderInfo("0x...")
console.log("PDP URL:", providerInfo.pdp.serviceURL)
```

Next Steps
- Split Operations — Manual control over store, pull, and commit phases for batch uploads, custom error handling, and direct core library usage.
- Plan Storage Costs — Calculate your monthly costs and understand funding requirements.
- Payment Management — Manage deposits, approvals, and payment rails.