# Split Operations
## When You Need This

The high-level `upload()` handles single-piece multi-copy uploads end-to-end. Use split operations when you need:
- Batch uploading many files to specific providers without repeated context creation
- Custom error handling at each phase — retry store failures, skip failed secondaries, recover from commit failures
- Signing control to avoid multiple wallet signature prompts during multi-copy uploads
- Precise provider/data-set targeting when uploading to known providers
## The Upload Pipeline

Every upload goes through three phases:

```
store ──► pull ──► commit
  │         │         │
  │         │         └─ On-chain: create dataset, add piece, start payments
  │         └─ SP-to-SP: secondary provider fetches from primary
  └─ Upload: bytes sent to one provider (no on-chain state yet)
```

- **store**: Upload bytes to a single SP. Returns `{ pieceCid, size }`. The piece is "parked" on the SP but not yet on-chain, and is subject to garbage collection if not committed.
- **pull**: SP-to-SP transfer. The destination SP fetches the piece from a source SP. No client bandwidth is used.
- **commit**: Submit an on-chain transaction to add the piece to a data set. Creates the data set and payment rail if needed.
## Creating Contexts

A `StorageContext` represents a connection to a specific provider and data set. Create one for single-provider work, or multiple for multi-copy:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

// Single context — auto-selects provider
const ctx = await synapse.storage.createContext({
  metadata: { source: "my-app" },
})

// Multiple contexts for multi-copy
const contexts = await synapse.storage.createContexts({
  count: 2,
  metadata: { source: "my-app" },
})
const [primary, secondary] = contexts
```

Context creation options:

```ts
// Single context (createContext)
await synapse.storage.createContext({
  providerId: 1n,            // specific provider (optional)
  dataSetId: 42n,            // specific data set (optional)
  metadata: { ... },         // data set metadata for matching/creation
  withCDN: true,             // enable fast-retrieval (paid, optional)
  excludeProviderIds: [3n],  // skip specific providers (optional)
})

// Multiple contexts (createContexts)
await synapse.storage.createContexts({
  count: 3,                     // number of contexts (default: 2)
  providerIds: [1n, 2n, 3n],    // specific providers (mutually exclusive with dataSetIds)
  dataSetIds: [10n, 20n, 30n],  // specific data sets (mutually exclusive with providerIds)
  metadata: { ... },
})
```

## Data Set Selection and Matching
The SDK intelligently manages data sets to minimize on-chain transactions. The selection behavior depends on the parameters you provide:
**Selection Scenarios:**

1. **Explicit data set ID**: If you specify `dataSetId`, that exact data set is used (must exist and be accessible)
2. **Specific provider**: If you specify `providerId`, the SDK searches for matching data sets only within that provider's existing data sets
3. **Automatic selection**: Without specific parameters, the SDK searches across all your data sets with any approved provider
Exact Metadata Matching: In scenarios 2 and 3, the SDK will reuse an existing data set only if it has exactly the same metadata keys and values as requested. This ensures data sets remain organized according to your specific requirements.
Selection Priority: When multiple data sets match your criteria:
- Data sets with existing pieces are preferred over empty ones
- Within each group (with pieces vs. empty), the oldest data set (lowest ID) is selected
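The priority rules above can be sketched as a pure comparison. This is an illustrative sketch, not the SDK's internal code — the `DataSetInfo` shape here is hypothetical:

```typescript
// Hypothetical shape for illustration — not the SDK's internal type.
interface DataSetInfo {
  id: bigint
  pieceCount: number
}

// Pick the preferred data set: sets with pieces beat empty ones,
// and within each group the oldest (lowest ID) wins.
function pickDataSet(candidates: DataSetInfo[]): DataSetInfo | undefined {
  return [...candidates].sort((a, b) => {
    const aEmpty = a.pieceCount > 0 ? 0 : 1
    const bEmpty = b.pieceCount > 0 ? 0 : 1
    if (aEmpty !== bEmpty) return aEmpty - bEmpty
    return a.id < b.id ? -1 : a.id > b.id ? 1 : 0
  })[0]
}

const picked = pickDataSet([
  { id: 30n, pieceCount: 0 },
  { id: 20n, pieceCount: 5 },
  { id: 10n, pieceCount: 0 },
])
console.log(picked?.id) // 20n — has pieces, so it beats the older-but-empty 10n
```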
**Provider Selection** (when no matching data sets exist):

- If you specify a provider (via `providerId`), that provider is used
- Otherwise, the SDK selects from endorsed providers for the primary copy and any approved provider for secondaries
- Before finalizing selection, the SDK verifies the provider is reachable via a ping test
- If a provider fails the ping test, the SDK tries the next candidate
- A new data set will be created automatically during the first commit
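The ping-and-fallback behavior above can be sketched generically. The `ping` callback here is a stand-in for the SDK's actual reachability check, which is not shown in this document:

```typescript
// Try candidates in order; return the first one whose ping succeeds.
async function firstReachable<T>(
  candidates: T[],
  ping: (candidate: T) => Promise<boolean>
): Promise<T | undefined> {
  for (const candidate of candidates) {
    try {
      if (await ping(candidate)) return candidate
    } catch {
      // A thrown error counts the same as a failed ping: try the next candidate.
    }
  }
  return undefined // no reachable provider — caller should surface an error
}

// Example with stand-in provider URLs: the first is "down", the second responds.
const provider = await firstReachable(
  ["https://sp-a.example", "https://sp-b.example"],
  async (url) => url.includes("sp-b")
)
console.log(provider) // https://sp-b.example
```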
## Store Phase

Upload data to a provider without committing on-chain:

```ts
const { pieceCid, size } = await ctx.store(data, {
  pieceCid: preCalculatedCid,      // skip expensive PieceCID (hash digest) calculation (optional)
  signal: abortController.signal,  // cancellation (optional)
  onProgress: (bytes) => {         // progress callback (optional)
    console.log(`Uploaded ${bytes} bytes`)
  },
})

console.log(`Stored: ${pieceCid}, ${size} bytes`)
```

`store()` accepts `Uint8Array` or `ReadableStream<Uint8Array>`. Use streaming for large files to minimize memory.
After store completes, the piece is parked on the SP and can be:

- Retrieved via the context's `getPieceUrl(pieceCid)`
- Pulled to other providers via `pull()`
- Committed on-chain via `commit()`
## Pull Phase (SP-to-SP Transfer)

Request a secondary provider to fetch pieces from the primary:

```ts
// Pre-sign to avoid double wallet prompts during pull + commit
const extraData = await secondary.presignForCommit([{ pieceCid }])

const pullResult = await secondary.pull({
  pieces: [pieceCid],
  from: primary,                   // source context (or URL string)
  extraData,                       // pre-signed auth (optional, reused for commit)
  signal: abortController.signal,  // cancellation (optional)
  onProgress: (cid, status) => {   // status callback (optional)
    console.log(`${cid}: ${status}`)
  },
})

if (pullResult.status !== "complete") {
  for (const piece of pullResult.pieces) {
    if (piece.status === "failed") {
      console.error(`Failed to pull ${piece.pieceCid}`)
    }
  }
}
```

The `from` parameter accepts either a `StorageContext` (the provider URL is extracted automatically) or a URL string.

**Pre-signing**: `presignForCommit()` creates an EIP-712 signature that can be reused for both `pull()` and `commit()`. This avoids prompting the wallet twice. Pass the same `extraData` to both calls.
## Commit Phase

Add pieces to an on-chain data set. Creates the data set and payment rail if one doesn't exist:

```ts
const commitResult = await ctx.commit({
  pieces: [{ pieceCid, pieceMetadata: { filename: "doc.pdf" } }],
  extraData,  // pre-signed auth from presignForCommit() (optional)
  onSubmitted: (txHash) => {
    console.log(`Transaction submitted: ${txHash}`)
  },
})

console.log(`Committed: dataSet=${commitResult.dataSetId}, piece=${commitResult.pieceIds[0]}`)
console.log(`New data set: ${commitResult.isNewDataSet}`)
```

The result:

- `txHash` — transaction hash
- `pieceIds` — assigned piece IDs (one per input piece)
- `dataSetId` — data set ID (may be newly created)
- `isNewDataSet` — whether a new data set was created
## Multi-File Batch Example

Upload multiple files to 2 providers with full error handling:

```ts
import { Synapse } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({ account: privateKeyToAccount("0x...") })

const files = [
  new TextEncoder().encode("File 1 content..."),
  new TextEncoder().encode("File 2 content..."),
  new TextEncoder().encode("File 3 content..."),
]

// Create contexts for 2 providers
const [primary, secondary] = await synapse.storage.createContexts({
  count: 2,
  metadata: { source: "batch-upload" },
})

// Store all files on primary (note: these could run in parallel with Promise.all)
const stored = []
for (const file of files) {
  const result = await primary.store(file)
  stored.push(result)
  console.log(`Stored ${result.pieceCid}`)
}

// Pre-sign for all pieces on secondary
const pieceCids = stored.map((s) => s.pieceCid)
const extraData = await secondary.presignForCommit(
  pieceCids.map((cid) => ({ pieceCid: cid }))
)

// Pull all pieces to secondary
const pullResult = await secondary.pull({
  pieces: pieceCids,
  from: primary,
  extraData,
})

// Commit on both providers
const [primaryCommit, secondaryCommit] = await Promise.allSettled([
  primary.commit({ pieces: pieceCids.map((cid) => ({ pieceCid: cid })) }),
  pullResult.status === "complete"
    ? secondary.commit({ pieces: pieceCids.map((cid) => ({ pieceCid: cid })), extraData })
    : Promise.reject(new Error("Pull failed, skipping secondary commit")), // not advised!
])

if (primaryCommit.status === "fulfilled") {
  console.log(`Primary: dataSet=${primaryCommit.value.dataSetId}`)
}
if (secondaryCommit.status === "fulfilled") {
  console.log(`Secondary: dataSet=${secondaryCommit.value.dataSetId}`)
}
```

## Error Handling Patterns
Each phase's errors are independent. Failures don't cascade — you can retry at any level:
| Phase | Failure | Data state | Recovery |
|---|---|---|---|
| store | Upload/network error | No data on SP | Retry store() with same or different context |
| pull | SP-to-SP transfer failed | Data on primary only | Retry pull(), try different secondary, or skip |
| commit | On-chain transaction failed | Data on SP but not on-chain | Retry commit() (no re-upload needed) |
The key advantage of split operations: if commit fails, the data is already stored on the SP. You can retry `commit()` without re-uploading the data. With the high-level `upload()`, a `CommitError` would require re-uploading.
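Because the data stays parked on the SP, retrying commit is cheap. A minimal retry-with-backoff sketch — the `withRetry` helper is illustrative, not part of the SDK:

```typescript
// Retry an async operation with simple exponential backoff.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await op()
    } catch (err) {
      lastError = err
      // Wait baseDelayMs, 2×, 4×, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i))
    }
  }
  throw lastError
}

// Usage sketch: only the on-chain commit is retried — no re-upload needed.
// const commitResult = await withRetry(() => ctx.commit({ pieces: [{ pieceCid }] }))
```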
## Lifecycle Management

### Terminating a Data Set

**Irreversible Operation**: Data set termination cannot be undone. Once initiated:
- The termination transaction is irreversible
- After the termination period, the provider may delete all data
- Payment rails associated with the data set will be terminated
- You cannot cancel the termination
Only terminate data sets when you’re certain you no longer need the data.
To delete an entire data set and stop payments for the service, call `context.terminate()`.
This method submits an on-chain transaction that initiates the termination process. After a defined termination period, payments cease and the service provider may delete the data set.
You can also terminate a data set with `synapse.storage.terminateDataSet({ dataSetId })` when the data set ID is known and creating a context isn't necessary.
```ts
// Via context
const hash = await ctx.terminate()
await synapse.client.waitForTransactionReceipt({ hash })
console.log("Dataset terminated successfully")

// Or directly by data set ID
const hash2 = await synapse.storage.terminateDataSet({ dataSetId: 42n })
await synapse.client.waitForTransactionReceipt({ hash: hash2 })
```

### Deleting a Piece
To delete an individual piece from the data set, call `context.deletePiece()`.
This method submits an on-chain transaction to initiate the deletion process.
Important: Piece deletion is irreversible and cannot be canceled once initiated.
```ts
// List all pieces in the data set
const pieces = []
for await (const piece of ctx.getPieces()) {
  pieces.push(piece)
}

// Delete by piece ID
await ctx.deletePiece({ piece: pieces[0].pieceId })
console.log(
  `Piece ${pieces[0].pieceCid} (ID: ${pieces[0].pieceId}) deleted successfully`
)

// Delete by PieceCID
await ctx.deletePiece({ piece: "bafkzcib..." })
```

## Download Options
The SDK provides flexible download options with clear semantics:
### SP-Agnostic Download (from anywhere)

Download pieces from any available provider using the `StorageManager`:

```ts
// Download from any provider that has the piece
const data = await synapse.storage.download({ pieceCid })

// Download with CDN optimization (if available)
const dataWithCDN = await synapse.storage.download({ pieceCid, withCDN: true })
```

### Context-Specific Download (from this provider)

When using a `StorageContext`, downloads are automatically restricted to that specific provider:
```ts
// Downloads from the provider associated with this context
const data = await ctx.download({ pieceCid })
```

### CDN Option Inheritance
The `withCDN` option follows a clear inheritance hierarchy:
- Synapse level: Default setting for all operations
- StorageContext level: Can override Synapse’s default
- Method level: Can override instance settings
```ts
// Example of inheritance
const synapse = Synapse.create({ account, withCDN: true })          // Global default: CDN enabled
const ctx = await synapse.storage.createContext({ withCDN: false }) // Context override: CDN disabled

await synapse.storage.download({ pieceCid })                 // Uses Synapse's withCDN: true
await ctx.download({ pieceCid })                             // Uses context's withCDN: false
await synapse.storage.download({ pieceCid, withCDN: false }) // Method override: CDN disabled
```

Note: When `withCDN: true` is set, it adds `{ withCDN: '' }` to the data set's metadata, ensuring CDN-enabled and non-CDN data sets remain separate.
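The resolution order can be expressed as a tiny pure helper. This is illustrative only — the SDK resolves the flag internally, and this function is not part of its API:

```typescript
// Resolve the effective withCDN flag: method-level wins, then context, then the Synapse default.
// `??` only falls through on undefined/null, so an explicit `false` override is respected.
function resolveWithCDN(
  synapseDefault: boolean,
  contextOverride?: boolean,
  methodOverride?: boolean
): boolean {
  return methodOverride ?? contextOverride ?? synapseDefault
}

console.log(resolveWithCDN(true))              // true  — Synapse default applies
console.log(resolveWithCDN(true, false))       // false — context override wins
console.log(resolveWithCDN(true, false, true)) // true  — method override wins
```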
## Using synapse-core Directly

For maximum control, use the core library functions without the SDK wrapper classes. This is useful for building custom upload pipelines, integrating into existing frameworks, or for server-side applications that don't need the SDK's orchestration.
### Provider Selection

```ts
import { fetchProviderSelectionInput, selectProviders } from "@filoz/synapse-core/warm-storage"

// Fetch all chain data needed for selection
const input = await fetchProviderSelectionInput(client, {
  address: walletAddress,
  metadata: { source: "my-app" },
})

// Primary: pass endorsedIds to restrict the pool to endorsed providers only
const [primary] = selectProviders(
  { ...input, endorsedIds: input.endorsedIds },
  { count: 1 }
)

// Secondary: pass an empty set to allow any approved provider
const [secondary] = selectProviders(
  { ...input, endorsedIds: new Set() },
  { count: 1, excludeProviderIds: new Set([primary.provider.id]) }
)
```

`fetchProviderSelectionInput()` makes a single multicall to gather providers, endorsements, and existing data sets. `selectProviders()` is a pure function — no network calls — that applies a 2-tier preference within the eligible pool:
1. Existing data set with matching metadata
2. New data set (no matching data set found)
The `endorsedIds` parameter controls which providers are eligible. When non-empty, only endorsed providers can be selected — there is no fallback to non-endorsed. When empty, all approved providers are eligible. The SDK's `smartSelect()` uses this to enforce endorsed-for-primary (hard constraint) while allowing any approved provider for secondaries.
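The eligibility rule can be sketched as a filter. This is a simplified illustration under an assumed minimal `Provider` shape, not the library's implementation:

```typescript
// Minimal stand-in for the real provider type.
interface Provider {
  id: bigint
}

// Non-empty endorsedIds: only endorsed providers are eligible (no fallback).
// Empty endorsedIds: every approved provider is eligible.
function eligibleProviders(approved: Provider[], endorsedIds: Set<bigint>): Provider[] {
  if (endorsedIds.size === 0) return approved
  return approved.filter((p) => endorsedIds.has(p.id))
}

const approved = [{ id: 1n }, { id: 2n }, { id: 3n }]
console.log(eligibleProviders(approved, new Set([2n])).map((p) => p.id)) // [ 2n ]
console.log(eligibleProviders(approved, new Set()).length)               // 3
```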
### Upload and Commit

```ts
import * as SP from "@filoz/synapse-core/sp"
import { signAddPieces, signCreateDataSetAndAddPieces } from "@filoz/synapse-core/typed-data"

// Upload piece to SP
const { pieceCid, size } = await SP.uploadPieceStreaming({
  serviceURL: provider.pdp.serviceURL,
  data: myStream,
})

// Confirm piece is parked
await SP.findPiece({
  serviceURL: provider.pdp.serviceURL,
  pieceCid,
  retry: true,
})

// Sign and commit (new data set)
const result = await SP.createDataSetAndAddPieces(client, {
  cdn: false,
  payee: provider.serviceProvider,
  payer: client.account.address,
  recordKeeper: chain.contracts.fwss.address,
  pieces: [{ pieceCid }],
  metadata: { source: "my-app" },
  serviceURL: provider.pdp.serviceURL,
})

const confirmation = await SP.waitForCreateDataSetAddPieces(result)
console.log(`DataSet: ${confirmation.dataSetId}`)
```

### SP-to-SP Pull
Section titled “SP-to-SP Pull”import * as SP from "@filoz/synapse-core/sp"
const response = await SP.waitForPullStatus(client, { serviceURL: secondaryProvider.pdp.serviceURL, pieces: [{ pieceCid, sourceUrl: `${primaryProvider.pdp.serviceURL}/pdp/piece/${pieceCid}`, }], payee: secondaryProvider.serviceProvider, payer: client.account.address, cdn: false, metadata: { source: "my-app" },})This path requires manual EIP-712 signing. The signAddPieces and signCreateDataSetAndAddPieces functions from @filoz/synapse-core/typed-data handle the signature creation.
## Next Steps

- **Storage Operations** — The high-level multi-copy upload API for most use cases. Start here if you haven't used `synapse.storage.upload()` yet.
- **Calculate Storage Costs** — Plan your budget and fund your storage account. Use the quick calculator to estimate monthly costs.
- **Component Architecture** — Understand how `StorageContext` fits into the SDK design. Deep dive into the component architecture.
- **Payment Management** — Manage deposits, approvals, and payment rails. Required before your first upload.