# Blob Streaming

Stream large VARBINARY(MAX) data in chunks without loading the entire value into memory. Works on all platforms (Windows, Linux, macOS) using standard SQL queries under the hood.

::: tip When to Use
Use blob streaming for binary data too large to fit comfortably in a single query result (multi-MB files, images, documents). For smaller data (< 10 MB), standard parameterized queries are simpler — see Binary Data.

For Windows-only FILESTREAM I/O via native file handles, see FILESTREAM.
:::

## Transaction Required

All blob streams require an active transaction. This ensures consistent reads (no partial data from concurrent writes) and atomic writes (all chunks or none).

```ts
await using tx = await cn.beginTransaction();
// ... open blob stream, read/write, then commit ...
await tx.commit();
```

## Reading (Node.js Streams)

Use cn.blob.filestream.read to get a node:stream.Readable that reads the column in chunks via SUBSTRING:

```ts
import { pipeline } from "node:stream/promises";
import { createWriteStream } from "node:fs";

await using cn = await mssql.connect(connectionString);
await using tx = await cn.beginTransaction();

const readable = cn.blob.filestream.read(tx, {
  table: "Documents", // "[dbo].[Documents]", "[MyDb].[Schema].[TableName]", etc.
  column: "data",
  where: "id = @id",
  params: { id: docId },
  chunkSize: 512 * 1024, // 512 KB chunks (default: 1 MB)
});

await pipeline(readable, createWriteStream("output.bin"));
await tx.commit();
```
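Each chunk read corresponds to one `SUBSTRING(column, offset, length)` query against the row. As a rough illustration of the window arithmetic involved (the `chunkWindows` helper is invented here, not part of the API; note that T-SQL's `SUBSTRING` uses 1-based offsets):

```ts
// Illustration only: the 1-based (offset, length) windows a chunked reader
// would request for a blob of totalBytes, chunkSize bytes at a time.
function chunkWindows(totalBytes: number, chunkSize: number): Array<[number, number]> {
  const windows: Array<[number, number]> = [];
  for (let offset = 1; offset <= totalBytes; offset += chunkSize) {
    windows.push([offset, Math.min(chunkSize, totalBytes - offset + 1)]);
  }
  return windows;
}

// A 10-byte blob read in 4-byte chunks needs three queries:
console.log(chunkWindows(10, 4)); // → [ [ 1, 4 ], [ 5, 4 ], [ 9, 2 ] ]
```

Smaller chunks mean more round-trips; larger chunks mean more memory per read. The 1 MB default is a middle ground you can tune with `chunkSize`.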

## Writing (Node.js Streams)

Use cn.blob.filestream.write to get a node:stream.Writable that appends chunks using SQL Server's .WRITE syntax:

```ts
import { pipeline } from "node:stream/promises";
import { createReadStream } from "node:fs";

await using cn = await mssql.connect(connectionString);
await using tx = await cn.beginTransaction();

// Column must have an initial value (even empty)
await cn.execute(
  "UPDATE Documents SET data = 0x WHERE id = @id",
  { id: docId },
  { transaction: tx },
);

const writable = cn.blob.filestream.write(tx, {
  table: "Documents",
  column: "data",
  where: "id = @id",
  params: { id: docId },
});

await pipeline(createReadStream("input.bin"), writable);
await tx.commit();
```
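Each chunk write issues one `UPDATE ... .WRITE(...)` statement. A hedged sketch of roughly what that statement looks like (the `appendChunkSql` helper is hypothetical, for illustration only): in T-SQL, `.WRITE(@chunk, NULL, NULL)` appends `@chunk` to the end of the existing value, which is why the column must be initialized (even to `0x`) before streaming begins — `.WRITE` fails on a `NULL` column value.

```ts
// Hypothetical helper showing the shape of the per-chunk append statement;
// the actual library may parameterize this differently.
function appendChunkSql(table: string, column: string, where: string): string {
  return `UPDATE ${table} SET ${column}.WRITE(@chunk, NULL, NULL) WHERE ${where}`;
}

console.log(appendChunkSql("Documents", "data", "id = @id"));
// → UPDATE Documents SET data.WRITE(@chunk, NULL, NULL) WHERE id = @id
```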

## Reading (Web Streams)

Use cn.blob.webstream.read for a ReadableStream, ideal for Deno or any environment that prefers Web Standard streams:

```ts
await using cn = await mssql.connect(connectionString);
await using tx = await cn.beginTransaction();

await using stream = cn.blob.webstream.read(tx, {
  table: "Documents",
  column: "data",
  where: "id = @id",
  params: { id: docId },
});

// Deno: pipe to a file
using file = await Deno.open("output.bin", { write: true, create: true });
await stream.pipeTo(file.writable);
await tx.commit();
```

## Writing (Web Streams)

Use cn.blob.webstream.write for a WritableStream:

```ts
await using cn = await mssql.connect(connectionString);
await using tx = await cn.beginTransaction();

// Initialize column
await cn.execute(
  "UPDATE Documents SET data = 0x WHERE id = @id",
  { id: docId },
  { transaction: tx },
);

await using stream = cn.blob.webstream.write(tx, {
  table: "Documents",
  column: "data",
  where: "id = @id",
  params: { id: docId },
});

// Deno: pipe from a file
using file = await Deno.open("input.bin", { read: true });
await file.readable.pipeTo(stream);
await tx.commit();
```
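The Web-stream variants are not Deno-only: in Node.js you can bridge them to classic streams with `stream.Readable.toWeb` / `stream.Writable.toWeb`. A self-contained sketch, using an in-memory collector as a stand-in for the database-backed `WritableStream` (the real sink would come from `cn.blob.webstream.write`):

```ts
import { Readable, Writable } from "node:stream";

// Stand-in for cn.blob.webstream.write(...): a Web WritableStream that
// collects whatever is piped into it.
const received: Buffer[] = [];
const sink = Writable.toWeb(
  new Writable({
    write(chunk, _enc, cb) {
      received.push(Buffer.from(chunk));
      cb();
    },
  }),
);

// Bridge a Node Readable (e.g. createReadStream("input.bin")) to the Web sink:
await Readable.toWeb(
  Readable.from([Buffer.from("hello "), Buffer.from("world")]),
).pipeTo(sink);

console.log(Buffer.concat(received).toString()); // → hello world
```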

## BlobTarget Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `table` | `string` | (required) | Table name (use brackets for escaping) |
| `column` | `string` | (required) | `VARBINARY(MAX)` column name (use brackets for escaping) |
| `where` | `string` | (required) | `WHERE` clause identifying the row (e.g. `"id = @id"`) |
| `params` | `Record<string, unknown>` | `{}` | Parameters for the `WHERE` clause |
| `chunkSize` | `number` | `1048576` (1 MB) | Chunk size in bytes for streaming |

Table and column names are used as-is in the generated SQL. Use bracket escaping if your names contain special characters or are reserved words:

```ts
{ table: "Documents", ... }                    // Documents
{ table: "[dbo].[Documents]", ... }            // [dbo].[Documents]
{ table: "[MyDB].[dbo].[Documents]", ... }     // [MyDB].[dbo].[Documents]
```

## API Summary

| Method | Returns | Description |
| --- | --- | --- |
| `cn.blob.filestream.read(tx, target)` | `node:stream.Readable` | Read `VARBINARY(MAX)` in chunks |
| `cn.blob.filestream.write(tx, target)` | `node:stream.Writable` | Write `VARBINARY(MAX)` in chunks |
| `cn.blob.webstream.read(tx, target)` | `ReadableStream` | Read (Web Standard) |
| `cn.blob.webstream.write(tx, target)` | `WritableStream` | Write (Web Standard) |

## Naming Convention

The sub-object pattern mirrors the FILESTREAM API:

| Feature | Node.js Streams | Web Streams |
| --- | --- | --- |
| FILESTREAM (Windows) | `cn.fs.open(path, ...)` | `cn.fs.openWeb(path, ...)` |
| Blob streaming (all platforms) | `cn.blob.filestream.*` | `cn.blob.webstream.*` |

## Comparison with FILESTREAM

| Feature | Blob Streaming | FILESTREAM |
| --- | --- | --- |
| Platform | All (Windows, Linux, macOS) | Windows only |
| Mechanism | SQL queries (`SUBSTRING` / `.WRITE`) | Win32 file handle (`OpenSqlFilestream`) |
| Server setup | None | FILESTREAM filegroup required |
| Dependencies | ODBC Driver 18 (already required) | ODBC Driver 18 (already required) |
| Performance | Good (SQL round-trips per chunk) | Best (direct file I/O) |
| Max size | 2 GB (SQL Server `VARBINARY(MAX)` limit) | Unlimited (direct file I/O bypasses SQL engine) |
| Transaction | Required | Required |