API Reference
(v 0.5)
Tensors  /  Creation

Tensors are the core data structure of deeplearn.js. They are a generalization of vectors and matrices to potentially higher dimensions.

We provide utility functions for common cases such as scalars and 1D, 2D, 3D and 4D tensors, as well as a number of functions that initialize tensors in ways useful for machine learning.

ƒ dl.tensor(values, shape?, dtype?)

Creates a Tensor with the provided values, shape and dtype.

// Pass an array of values to create a vector.
dl.tensor([1, 2, 3, 4]).print();
// Pass a nested array of values to make a matrix or a higher
// dimensional tensor.
dl.tensor([[1, 2], [3, 4]]).print();
// Pass a flat array and specify a shape yourself.
dl.tensor([1, 2, 3, 4], [2, 2]).print();
Parameters:
  • values: TypedArray|Array The values of the tensor. Can be a nested array of numbers, a flat array, or a TypedArray.
  • shape: number[] The shape of the tensor. If not provided, it is inferred from values. Optional
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Tensor
Defined in ops/array_ops.ts#55
ƒ dl.scalar(value, dtype?)

Creates a rank-0 Tensor (scalar) with the provided value and dtype.

This method is mainly for self-documentation and TypeScript typings, as the same functionality can be achieved with dl.tensor(). In general, we recommend using this method as it makes code more readable.

dl.scalar(3.14).print();
Parameters:
  • value: number|boolean The value of the scalar.
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Scalar
Defined in ops/array_ops.ts#90
ƒ dl.tensor1d(values, dtype?)

Creates a rank-1 Tensor with the provided values and dtype.

This method is mainly for self-documentation and TypeScript typings, as the same functionality can be achieved with dl.tensor(). In general, we recommend using this method as it makes code more readable.

dl.tensor1d([1, 2, 3]).print();
Parameters:
  • values: TypedArray|Array The values of the tensor. Can be an array of numbers or a TypedArray.
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Tensor1D
ƒ dl.tensor2d(values, shape?, dtype?)

Creates a rank-2 Tensor with the provided values, shape and dtype.

This method is mainly for self-documentation and TypeScript typings, as the same functionality can be achieved with dl.tensor(). In general, we recommend using this method as it makes code more readable.

// Pass a nested array.
dl.tensor2d([[1, 2], [3, 4]]).print();
// Pass a flat array and specify a shape.
dl.tensor2d([1, 2, 3, 4], [2, 2]).print();
Parameters:
  • values: TypedArray|Array The values of the tensor. Can be a nested array of numbers, a flat array, or a TypedArray.
  • shape: [number, number] The shape of the tensor. If not provided, it is inferred from values. Optional
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Tensor2D
ƒ dl.tensor3d(values, shape?, dtype?)

Creates a rank-3 Tensor with the provided values, shape and dtype.

This method is mainly for self-documentation and TypeScript typings, as the same functionality can be achieved with dl.tensor(). In general, we recommend using this method as it makes code more readable.

// Pass a nested array.
dl.tensor3d([[[1], [2]], [[3], [4]]]).print();
// Pass a flat array and specify a shape.
dl.tensor3d([1, 2, 3, 4], [2, 2, 1]).print();
Parameters:
  • values: TypedArray|Array The values of the tensor. Can be a nested array of numbers, a flat array, or a TypedArray.
  • shape: [number, number, number] The shape of the tensor. If not provided, it is inferred from values. Optional
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Tensor3D
ƒ dl.tensor4d(values, shape?, dtype?)

Creates a rank-4 Tensor with the provided values, shape and dtype.

// Pass a nested array.
dl.tensor4d([[[[1], [2]], [[3], [4]]]]).print();
// Pass a flat array and specify a shape.
dl.tensor4d([1, 2, 3, 4], [1, 2, 2, 1]).print();
Parameters:
  • values: TypedArray|Array The values of the tensor. Can be a nested array of numbers, a flat array, or a TypedArray.
  • shape: [number, number, number, number] The shape of the tensor. If not provided, it is inferred from values. Optional
  • dtype: 'float32'|'int32'|'bool' The data type. Optional
Returns: Tensor4D
ƒ dl.buffer(shape, dtype?, values?)

Creates an empty TensorBuffer with the specified shape and dtype.

The values are stored on the CPU as a TypedArray. Fill the buffer using buffer.set(), or by modifying buffer.values directly. When done, call buffer.toTensor() to get an immutable Tensor with those values.

// Create a buffer and set values at particular indices.
const buffer = dl.buffer([2, 2]);
buffer.set(3, 0, 0);
buffer.set(5, 1, 0);

// Convert the buffer back to a tensor.
buffer.toTensor().print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • dtype: 'float32'|'int32'|'bool' The dtype of the buffer. Defaults to 'float32'. Optional
  • values: TypedArray The values of the buffer as TypedArray. Defaults to zeros. Optional
Returns: TensorBuffer
ƒ dl.clone(x)

Creates a new tensor with the same values and shape as the specified tensor.

const x = dl.tensor([1, 2]);
x.clone().print();
Parameters:
  • x: Tensor The tensor to clone.
Returns: Tensor
ƒ dl.fill(shape, value, dtype?)

Creates a Tensor filled with a scalar value.

dl.fill([2, 2], 4).print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • value: number The scalar value to fill the tensor with.
  • dtype: 'float32'|'int32'|'bool' The type of an element in the resulting tensor. Defaults to 'float32'. Optional
Returns: Tensor
ƒ dl.fromPixels(pixels, numChannels?)

Creates a Tensor from an image.

const image = new ImageData(1, 1);
image.data[0] = 100;
image.data[1] = 150;
image.data[2] = 200;
image.data[3] = 255;

dl.fromPixels(image).print();
Parameters:
  • pixels: ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement The input element to construct the tensor from.
  • numChannels: number The number of channels of the output tensor. Defaults to 3 (RGB, ignoring the alpha channel). Optional
Returns: Tensor3D
ƒ dl.linspace(start, stop, num)

Return an evenly spaced sequence of numbers over the given interval.

dl.linspace(0, 9, 10).print();
Parameters:
  • start: number The start value of the sequence
  • stop: number The end value of the sequence
  • num: number The number of values to generate
Returns: Tensor1D
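The spacing rule can be sketched in plain JavaScript (the linspace helper below is illustrative, not the library's implementation): with num points, the step is (stop - start) / (num - 1), so both endpoints are included.

```javascript
// Illustrative sketch of linspace semantics: num evenly spaced values
// from start to stop, inclusive of both endpoints.
function linspace(start, stop, num) {
  const step = (stop - start) / (num - 1);
  return Array.from({length: num}, (_, i) => start + i * step);
}

console.log(linspace(0, 9, 10));  // [0, 1, 2, ..., 9]
```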
ƒ dl.oneHot(indices, depth, onValue?, offValue?)

Creates a one-hot Tensor. The locations represented by indices take value onValue (defaults to 1), while all other locations take value offValue (defaults to 0).

dl.oneHot(dl.tensor1d([0, 1]), 3).print();
Parameters:
  • indices: Tensor1D A 1D tensor of indices.
  • depth: number The depth of the one-hot dimension.
  • onValue: number A number used to fill in output when the index matches the location. Optional
  • offValue: number A number used to fill in the output when the index does not match the location. Optional
Returns: Tensor2D
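These semantics can be sketched in plain JavaScript (the oneHot helper below is illustrative, not the library's code): each index becomes a row of length depth.

```javascript
// Illustrative sketch of one-hot semantics: for each index, emit a row of
// length `depth` holding onValue at that index and offValue elsewhere.
function oneHot(indices, depth, onValue = 1, offValue = 0) {
  return indices.map(idx =>
      Array.from({length: depth}, (_, j) => (j === idx ? onValue : offValue)));
}

console.log(oneHot([0, 1], 3));  // [[1, 0, 0], [0, 1, 0]]
```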
ƒ dl.ones(shape, dtype?)

Creates a Tensor with all elements set to 1.

dl.ones([2, 2]).print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • dtype: 'float32'|'int32'|'bool' The type of an element in the resulting tensor. Defaults to 'float32'. Optional
Returns: Tensor
ƒ dl.onesLike(x)

Creates a Tensor with all elements set to 1 with the same shape as the given tensor.

const x = dl.tensor([1, 2]);
dl.onesLike(x).print();
Parameters:
  • x: Tensor A tensor.
Returns: Tensor
ƒ dl.print(x, verbose?)

Prints information about the Tensor including its data.

const verbose = true;
dl.tensor2d([1, 2, 3, 4], [2, 2]).print(verbose);
Parameters:
  • x: Tensor
  • verbose: boolean Whether to print verbose information about the Tensor, including dtype and size. Optional
Returns: void
ƒ dl.randomNormal(shape, mean?, stdDev?, dtype?, seed?)

Creates a Tensor with values sampled from a normal distribution.

dl.randomNormal([2, 2]).print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • mean: number The mean of the normal distribution. Optional
  • stdDev: number The standard deviation of the normal distribution. Optional
  • dtype: 'float32'|'int32' The data type of the output. Optional
  • seed: number The seed for the random number generator. Optional
Returns: Tensor
ƒ dl.randomUniform(shape, minval?, maxval?, dtype?)

Creates a Tensor with values sampled from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

dl.randomUniform([2, 2]).print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • minval: number The lower bound on the range of random values to generate. Defaults to 0. Optional
  • maxval: number The upper bound on the range of random values to generate. Defaults to 1. Optional
  • dtype: 'float32'|'int32'|'bool' The data type of the output tensor. Defaults to 'float32'. Optional
Returns: Tensor
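The half-open range follows from scaling Math.random(), which itself samples [0, 1). The helper below is a sketch of the semantics, not the library's generator.

```javascript
// Illustrative sketch: minval + u * (maxval - minval) for u in [0, 1)
// stays in [minval, maxval) -- the lower bound is reachable, the upper
// bound is not.
function randomUniformValues(size, minval = 0, maxval = 1) {
  return Array.from({length: size},
                    () => minval + Math.random() * (maxval - minval));
}

const samples = randomUniformValues(4, 2, 5);  // four values in [2, 5)
```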
ƒ dl.range(start, stop, step?, dtype?)

Creates a new Tensor1D filled with the numbers in the range provided.

The tensor covers a half-open interval, meaning it includes start but excludes stop. Decrementing ranges and negative step values are also supported.

dl.range(0, 9, 2).print();
Parameters:
  • start: number An integer start value
  • stop: number An integer stop value
  • step: number An integer increment. Defaults to 1, or -1 if start > stop. Optional
  • dtype: 'float32'|'int32' Optional
Returns: Tensor1D
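The half-open interval and the decrementing case can be sketched in plain JavaScript (the range helper below is illustrative, not the library's implementation):

```javascript
// Illustrative sketch of range semantics: include start, exclude stop,
// stepping in either direction.
function range(start, stop, step) {
  if (step === undefined) step = start <= stop ? 1 : -1;
  const out = [];
  for (let v = start; step > 0 ? v < stop : v > stop; v += step) out.push(v);
  return out;
}

console.log(range(0, 9, 2));  // [0, 2, 4, 6, 8]
console.log(range(9, 0));     // [9, 8, 7, 6, 5, 4, 3, 2, 1]
```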
ƒ dl.truncatedNormal(shape, mean?, stdDev?, dtype?, seed?)

Creates a Tensor with values sampled from a truncated normal distribution.

dl.truncatedNormal([2, 2]).print();

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • mean: number The mean of the normal distribution. Optional
  • stdDev: number The standard deviation of the normal distribution. Optional
  • dtype: 'float32'|'int32' The data type of the output. Optional
  • seed: number The seed for the random number generator. Optional
Returns: Tensor
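The "dropped and re-picked" rule amounts to rejection sampling. The sketch below (using a Box-Muller transform; it is illustrative, not the library's sampler) shows the idea:

```javascript
// Illustrative sketch: draw a standard normal via Box-Muller and reject
// samples more than 2 standard deviations from the mean.
function truncatedNormalValue(mean = 0, stdDev = 1) {
  while (true) {
    const u1 = 1 - Math.random();  // in (0, 1], avoids log(0)
    const u2 = Math.random();
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    if (Math.abs(z) <= 2) return mean + z * stdDev;  // else re-pick
  }
}
```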
ƒ dl.variable(initialValue, trainable?, name?, dtype?)

Creates a new variable with the provided initial value.

const x = dl.variable(dl.tensor([1, 2, 3]));
x.assign(dl.tensor([4, 5, 6]));

x.print();
Parameters:
  • initialValue: Tensor The initial value of the variable.
  • trainable: boolean If true, optimizers are allowed to update it. Optional
  • name: string Name of the variable. Defaults to a unique id. Optional
  • dtype: 'float32'|'int32'|'bool' If set, initialValue will be converted to the given type. Optional
Returns: Variable
Defined in tensor.ts#976
ƒ dl.zeros(shape, dtype?)

Creates a Tensor with all elements set to 0.

dl.zeros([2, 2]).print();
Parameters:
  • shape: number[] An array of integers defining the output tensor shape.
  • dtype: 'float32'|'int32'|'bool' The type of an element in the resulting tensor. Defaults to 'float32'. Optional
Returns: Tensor
ƒ dl.zerosLike(x)

Creates a Tensor with all elements set to 0 with the same shape as the given tensor.

const x = dl.tensor([1, 2]);
dl.zerosLike(x).print();
Parameters:
  • x: Tensor A tensor.
Returns: Tensor
Tensors  /  Classes

This section shows the main Tensor related classes in deeplearn.js and the methods we expose on them.

Class dl.Tensor

A Tensor object represents an immutable, multidimensional array of numbers that has a shape and a data type.

See dl.tensor() for details on how to create a Tensor.

Methods:
flatten() Flattens a Tensor to a 1D array.
asScalar() Converts a size-1 Tensor to a Scalar.
as1D() Converts a Tensor to a Tensor1D.
as2D(rows, columns) Converts a Tensor to a Tensor2D.
Parameters:
  • rows: number
  • columns: number
as3D(rows, columns, depth) Converts a Tensor to a Tensor3D.
Parameters:
  • rows: number
  • columns: number
  • depth: number
as4D(rows, columns, depth, depth2) Converts a Tensor to a Tensor4D.
Parameters:
  • rows: number
  • columns: number
  • depth: number
  • depth2: number
asType(dtype) Casts a Tensor to a specified dtype.
Parameters:
  • dtype: 'float32'|'int32'|'bool'
buffer() Returns a TensorBuffer that holds the underlying data.
data() Asynchronously downloads the values from the Tensor. Returns a promise of TypedArray that resolves when the computation has finished.
dataSync() Synchronously downloads the values from the Tensor. This blocks the UI thread until the values are ready, which can cause performance issues.
dispose() Disposes the Tensor, freeing its memory.
toFloat() Casts the Tensor to type 'float32'.
toInt() Casts the Tensor to type 'int32'.
toBool() Casts the Tensor to type 'bool'.
print(verbose?) Prints the tensor. See dl.print() for details.
Parameters:
  • verbose: boolean Optional
reshape(newShape) Reshapes the tensor into the provided shape. See dl.reshape() for more details.
Parameters:
  • newShape: number[]
reshapeAs(x) Reshapes the tensor into the shape of the provided tensor.
Parameters:
  • x: Tensor The tensor whose shape to reshape to.
expandDims(axis?) Returns a Tensor that has expanded rank, by inserting a dimension into the tensor's shape. See dl.expandDims() for details.
Parameters:
  • axis: number Optional
squeeze(axis?) Returns a Tensor with dimensions of size 1 removed from the shape. See dl.squeeze() for more details.
Parameters:
  • axis: number[] Optional
clone() Returns a copy of the tensor. See dl.clone() for details.
Defined in tensor.ts#150
Class dl.Variable

A mutable Tensor, useful for persisting state, e.g. for training.

Methods:
assign(newValue) Assign a new Tensor to this variable. The new Tensor must have the same shape and dtype as the old Tensor.
Parameters:
  • newValue: Tensor The new value of the variable.
Defined in tensor.ts#939
Class dl.TensorBuffer

A mutable object, similar to Tensor, that allows users to set values at locations before converting to an immutable Tensor.

See dl.buffer() for creating a tensor buffer.

Methods:
set(value, locs) Sets a value in the buffer at a given location.
Parameters:
  • value: number The value to set.
  • locs: number[] The location indices.
get(locs) Returns the value in the buffer at the provided location.
Parameters:
  • locs: number[] The location indices.
toTensor() Creates an immutable Tensor object from the buffer.
Defined in tensor.ts#37
Tensors  /  Transformations

This section describes some common Tensor transformations for reshaping and type-casting.

ƒ dl.cast(x, dtype)

Casts a tensor to a new dtype.

const x = dl.tensor1d([1.5, 2.5, 3]);
dl.cast(x, 'int32').print();
Parameters:
  • x: Tensor A tensor.
  • dtype: 'float32'|'int32'|'bool' The dtype to cast the input tensor to.
Returns: Tensor
ƒ dl.expandDims(x, axis?)

Returns a Tensor that has expanded rank, by inserting a dimension into the tensor's shape.

const x = dl.tensor1d([1, 2, 3, 4]);
const axis = 1;
x.expandDims(axis).print();
Parameters:
  • x: Tensor
  • axis: number The dimension index at which to insert a dimension of size 1. Defaults to 0 (the first dimension). Optional
Returns: Tensor
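The shape rule can be sketched in plain JavaScript (the helper below is illustrative): a 1 is spliced into the shape at the given axis.

```javascript
// Illustrative sketch of the expandDims shape rule: insert a dimension
// of size 1 at position `axis`.
function expandDimsShape(shape, axis = 0) {
  const out = shape.slice();
  out.splice(axis, 0, 1);
  return out;
}

console.log(expandDimsShape([4], 1));   // [4, 1]
console.log(expandDimsShape([2, 3]));   // [1, 2, 3]
```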
ƒ dl.pad(x, paddings, constantValue?)

Pads a Tensor with a given value and the paddings you specify.

This operation currently implements only the CONSTANT mode of TensorFlow's pad operation.

const x = dl.tensor1d([1, 2, 3, 4]);
x.pad([[1, 2]]).print();
Parameters:
  • x: Tensor The tensor to pad.
  • paddings: Array An array of length R (the rank of the tensor), where each element is a length-2 tuple of ints [padBefore, padAfter], specifying how much to pad along each dimension of the tensor.
  • constantValue: number The pad value to use. Defaults to 0. Optional
Returns: Tensor
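For the 1-D case, CONSTANT-mode padding can be sketched in plain JavaScript (the pad1d helper is illustrative, not the library's code):

```javascript
// Illustrative sketch of 1-D CONSTANT padding: padBefore copies of the
// constant, then the values, then padAfter copies.
function pad1d(values, [padBefore, padAfter], constantValue = 0) {
  return [
    ...Array(padBefore).fill(constantValue),
    ...values,
    ...Array(padAfter).fill(constantValue),
  ];
}

console.log(pad1d([1, 2, 3, 4], [1, 2]));  // [0, 1, 2, 3, 4, 0, 0]
```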
ƒ dl.reshape(x, shape)

Reshapes a Tensor to a given shape.

Given an input tensor, returns a new tensor with the same values as the input, arranged into shape shape.

If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.

If shape is 1-D or higher, then the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.

const x = dl.tensor1d([1, 2, 3, 4]);
x.reshape([2, 2]).print();
Parameters:
  • x: Tensor A tensor.
  • shape: number[] An array of integers defining the output tensor shape.
Returns: Tensor
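The -1 rule can be sketched in plain JavaScript (the inferShape helper below is illustrative): the unknown dimension is the total element count divided by the product of the known dimensions.

```javascript
// Illustrative sketch of inferring a -1 component of a shape.
function inferShape(numElements, shape) {
  const known = shape.filter(d => d !== -1).reduce((a, b) => a * b, 1);
  return shape.map(d => (d === -1 ? numElements / known : d));
}

console.log(inferShape(4, [-1, 2]));  // [2, 2]
console.log(inferShape(4, [-1]));     // [4] -- flattens to 1-D
```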
ƒ dl.squeeze(x, axis?)

Removes dimensions of size 1 from the shape of a Tensor.

const x = dl.tensor([1, 2, 3, 4], [1, 1, 4]);
x.squeeze().print();
Parameters:
  • x: Tensor
  • axis: number[] An optional list of numbers. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1. Optional
Returns: Tensor
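The resulting shape can be sketched in plain JavaScript (the squeezeShape helper below is illustrative): size-1 dimensions are dropped, optionally only those listed in axis.

```javascript
// Illustrative sketch of the squeeze shape rule.
function squeezeShape(shape, axis) {
  return shape.filter(
      (d, i) => !(d === 1 && (axis === undefined || axis.includes(i))));
}

console.log(squeezeShape([1, 1, 4]));       // [4]
console.log(squeezeShape([1, 2, 1], [0]));  // [2, 1]
```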
Tensors  /  Slicing and Joining

deeplearn.js provides several operations to slice or extract parts of a tensor, or join multiple tensors together.

ƒ dl.concat(tensors, axis?)

Concatenates a list of Tensors along a given axis.

The tensors' ranks and dtypes must match, and their sizes must match in all dimensions except axis.

const a = dl.tensor1d([1, 2]);
const b = dl.tensor1d([3, 4]);
a.concat(b).print();  // or dl.concat([a, b])
const a = dl.tensor1d([1, 2]);
const b = dl.tensor1d([3, 4]);
const c = dl.tensor1d([5, 6]);
dl.concat([a, b, c]).print();
const a = dl.tensor2d([[1, 2], [10, 20]]);
const b = dl.tensor2d([[3, 4], [30, 40]]);
const axis = 1;
dl.concat([a, b], axis).print();
Parameters:
  • tensors: Tensor[] A list of tensors to concatenate.
  • axis: number The axis to concatenate along. Defaults to 0 (the first dimension). Optional
Returns: Tensor
Defined in ops/concat.ts#146
ƒ dl.gather(x, indices, axis?)

Gathers slices from the tensor x along the dimension axis, according to indices.

const x = dl.tensor1d([1, 2, 3, 4]);
const indices = dl.tensor1d([1, 3, 3]);

x.gather(indices).print();
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);
const indices = dl.tensor1d([1, 1, 0]);

x.gather(indices).print();
Parameters:
  • x: Tensor The input tensor.
  • indices: Tensor1D The indices of the values to extract.
  • axis: number The axis over which to select values. Defaults to 0. Optional
Returns: Tensor
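For the default axis of 0, the semantics can be sketched in plain JavaScript (the gatherAxis0 helper is illustrative): the output is the rows (or elements) of x picked out by each index, in order, repeats included.

```javascript
// Illustrative sketch of gathering along axis 0.
function gatherAxis0(x, indices) {
  return indices.map(i => x[i]);
}

console.log(gatherAxis0([1, 2, 3, 4], [1, 3, 3]));      // [2, 4, 4]
console.log(gatherAxis0([[1, 2], [3, 4]], [1, 1, 0]));  // [[3, 4], [3, 4], [1, 2]]
```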
ƒ dl.reverse(x, axis)

Reverses a Tensor along a specified axis.

const x = dl.tensor1d([1, 2, 3, 4]);

x.reverse().print();
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.reverse(axis).print();
Parameters:
  • x: Tensor The input tensor.
  • axis: number|number[] The set of dimensions to reverse. Must be in the range [-rank(x), rank(x)).
Returns: Tensor
Defined in ops/reverse.ts#93
ƒ dl.slice(x, begin, size)

Extracts a slice from a Tensor starting at coordinates begin, of size size.

Also available are stricter rank-specific methods with the same signature as this method that assert that x is of the given rank:

  • dl.slice1d
  • dl.slice2d
  • dl.slice3d
  • dl.slice4d
const x = dl.tensor1d([1, 2, 3, 4]);

x.slice([1], [2]).print();
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

x.slice([1, 0], [1, 2]).print();
Parameters:
  • x: Tensor The input Tensor to slice from.
  • begin: number[] The coordinates to start the slice from. The length of this array should match the rank of x.
  • size: number[] The size of the slice. The length of this array should match the rank of x.
Returns: Tensor
Defined in ops/slice.ts#120
ƒ dl.stack(tensors, axis?)

Stacks a list of rank-R Tensors into one rank-(R+1) Tensor.

const a = dl.tensor1d([1, 2]);
const b = dl.tensor1d([3, 4]);
const c = dl.tensor1d([5, 6]);
dl.stack([a, b, c]).print();
Parameters:
  • tensors: Tensor[] A list of tensor objects with the same shape and dtype.
  • axis: number The axis to stack along. Defaults to 0 (the first dim). Optional
Returns: Tensor
ƒ dl.tile(x, reps)

Constructs a tensor by repeating the input the number of times given by reps.

This operation creates a new tensor by replicating input reps times. The output tensor's i'th dimension has input.shape[i] * reps[i] elements, and the values of input are replicated reps[i] times along the i'th dimension. For example, tiling [a, b, c, d] by [2] produces [a, b, c, d, a, b, c, d].

const a = dl.tensor1d([1, 2]);

a.tile([2]).print();    // or dl.tile(a, [2])
const a = dl.tensor2d([1, 2, 3, 4], [2, 2]);

a.tile([1, 2]).print();  // or dl.tile(a, [1, 2])
Parameters:
  • x: Tensor The tensor to tile.
  • reps: number[] Determines the number of replications per dimension.
Returns: Tensor
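For the 1-D case, the replication rule can be sketched in plain JavaScript (the tile1d helper is illustrative): the values are concatenated reps times, so the output length is input.length * reps.

```javascript
// Illustrative sketch of tiling a 1-D tensor's values.
function tile1d(values, reps) {
  return Array.from({length: reps}, () => values).flat();
}

console.log(tile1d([1, 2], 2));  // [1, 2, 1, 2]
```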
Operations  /  Arithmetic

To perform mathematical computation on Tensors, we use operations. Tensors are immutable, so all operations always return new Tensors and never modify input Tensors.

ƒ dl.add(a, b)

Adds two Tensors element-wise, A + B. Supports broadcasting.

We also expose addStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([1, 2, 3, 4]);
const b = dl.tensor1d([10, 20, 30, 40]);

a.add(b).print();  // or dl.add(a, b)
// Broadcast add a with b.
const a = dl.scalar(5);
const b = dl.tensor1d([10, 20, 30, 40]);

a.add(b).print();  // or dl.add(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same dtype as a.
Returns: Tensor
ƒ dl.sub(a, b)

Subtracts two Tensors element-wise, A - B. Supports broadcasting.

We also expose subStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([10, 20, 30, 40]);
const b = dl.tensor1d([1, 2, 3, 4]);

a.sub(b).print();  // or dl.sub(a, b)
// Broadcast subtract a with b.
const a = dl.tensor1d([10, 20, 30, 40]);
const b = dl.scalar(5);

a.sub(b).print();  // or dl.sub(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same dtype as a.
Returns: Tensor
ƒ dl.mul(a, b)

Multiplies two Tensors element-wise, A * B. Supports broadcasting.

We also expose mulStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([1, 2, 3, 4]);
const b = dl.tensor1d([2, 3, 4, 5]);

a.mul(b).print();  // or dl.mul(a, b)
// Broadcast mul a with b.
const a = dl.tensor1d([1, 2, 3, 4]);
const b = dl.scalar(5);

a.mul(b).print();  // or dl.mul(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same dtype as a.
Returns: Tensor
ƒ dl.div(a, b)

Divides two Tensors element-wise, A / B. Supports broadcasting.

We also expose divStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([1, 4, 9, 16]);
const b = dl.tensor1d([1, 2, 3, 4]);

a.div(b).print();  // or dl.div(a, b)
// Broadcast div a with b.
const a = dl.tensor1d([2, 4, 6, 8]);
const b = dl.scalar(2);

a.div(b).print();  // or dl.div(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same dtype as a.
Returns: Tensor
ƒ dl.maximum(a, b)

Returns the max of a and b (a > b ? a : b) element-wise. Supports broadcasting.

We also expose maximumStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([1, 4, 3, 16]);
const b = dl.tensor1d([1, 2, 9, 4]);

a.maximum(b).print();  // or dl.maximum(a, b)
// Broadcast maximum a with b.
const a = dl.tensor1d([2, 4, 6, 8]);
const b = dl.scalar(5);

a.maximum(b).print();  // or dl.maximum(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same type as a.
Returns: Tensor
ƒ dl.minimum(a, b)

Returns the min of a and b (a < b ? a : b) element-wise. Supports broadcasting.

We also expose minimumStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

const a = dl.tensor1d([1, 4, 3, 16]);
const b = dl.tensor1d([1, 2, 9, 4]);

a.minimum(b).print();  // or dl.minimum(a, b)
// Broadcast minimum a with b.
const a = dl.tensor1d([2, 4, 6, 8]);
const b = dl.scalar(5);

a.minimum(b).print();  // or dl.minimum(a, b)
Parameters:
  • a: Tensor The first tensor.
  • b: Tensor The second tensor. Must have the same type as a.
Returns: Tensor
ƒ dl.pow(base, exp)

Computes the power of one Tensor to another. Supports broadcasting.

Given a Tensor x and a Tensor y, this operation computes x^y for corresponding elements in x and y.

const a = dl.tensor([[2, 3], [4, 5]]);
const b = dl.tensor([[1, 2], [3, 0]]).toInt();

a.pow(b).print();  // or dl.pow(a, b)
const a = dl.tensor([[1, 2], [3, 4]]);
const b = dl.tensor(2).toInt();

a.pow(b).print();  // or dl.pow(a, b)

We also expose powStrict which has the same signature as this op and asserts that base and exp are the same shape (does not broadcast).

Parameters:
  • base: Tensor The base tensor.
  • exp: Tensor The exponent tensor.
Returns: Tensor
Operations  /  Basic math
ƒ dl.abs(x)

Computes absolute value element-wise: abs(x)

const x = dl.tensor1d([-1, 2, -3, 4]);

x.abs().print();  // or dl.abs(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.acos(x)

Computes acos of the input Tensor element-wise: acos(x)

const x = dl.tensor1d([0, 1, -1, .7]);

x.acos().print();  // or dl.acos(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.asin(x)

Computes asin of the input Tensor element-wise: asin(x)

const x = dl.tensor1d([0, 1, -1, .7]);

x.asin().print();  // or dl.asin(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.atan(x)

Computes atan of the input Tensor element-wise: atan(x)

const x = dl.tensor1d([0, 1, -1, .7]);

x.atan().print();  // or dl.atan(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.ceil(x)

Computes ceiling of input Tensor element-wise: ceil(x)

const x = dl.tensor1d([.6, 1.1, -3.3]);

x.ceil().print();  // or dl.ceil(x)
Parameters:
  • x: Tensor The input Tensor.
Returns: Tensor
Defined in ops/unary_ops.ts#57
ƒ dl.clipByValue(x, clipValueMin, clipValueMax)

Clips values element-wise. max(min(x, clipValueMax), clipValueMin)

const x = dl.tensor1d([-1, 2, -3, 4]);

x.clipByValue(-2, 3).print();  // or dl.clipByValue(x, -2, 3)
Parameters:
  • x: Tensor The input tensor.
  • clipValueMin: number Lower-bound of range to be clipped to.
  • clipValueMax: number Upper-bound of range to be clipped to.
Returns: Tensor
ƒ dl.cos(x)

Computes cos of the input Tensor element-wise: cos(x)

const x = dl.tensor1d([0, Math.PI / 2, Math.PI * 3 / 4]);

x.cos().print();  // or dl.cos(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.cosh(x)

Computes hyperbolic cos of the input Tensor element-wise: cosh(x)

const x = dl.tensor1d([0, 1, -1, .7]);

x.cosh().print();  // or dl.cosh(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.elu(x)

Computes exponential linear element-wise: x < 0 ? e ^ x - 1 : x

const x = dl.tensor1d([-1, 1, -3, 2]);

x.elu().print();  // or dl.elu(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.exp(x)

Computes exponential of the input Tensor element-wise. e ^ x

const x = dl.tensor1d([1, 2, -3]);

x.exp().print();  // or dl.exp(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
Defined in ops/unary_ops.ts#98
ƒ dl.floor(x)

Computes floor of input Tensor element-wise: floor(x).

const x = dl.tensor1d([.6, 1.1, -3.3]);

x.floor().print();  // or dl.floor(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
Defined in ops/unary_ops.ts#77
ƒ dl.leakyRelu(x, alpha?)

Computes leaky rectified linear element-wise.

See http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf

const x = dl.tensor1d([-1, 2, -3, 4]);

x.leakyRelu(0.1).print();  // or dl.leakyRelu(x, 0.1)
Parameters:
  • x: Tensor The input tensor.
  • alpha: number The scaling factor for negative values, defaults to 0.2. Optional
Returns: Tensor
ƒ dl.log(x)

Computes natural logarithm of the input Tensor element-wise: ln(x)

const x = dl.tensor1d([1, 2, Math.E]);

x.log().print();  // or dl.log(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.neg(x)

Computes -1 * x element-wise.

const x = dl.tensor2d([1, 2, -2, 0], [2, 2]);

x.neg().print();  // or dl.neg(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
Defined in ops/unary_ops.ts#39
ƒ dl.prelu(x, alpha)

Computes leaky rectified linear element-wise with parametric alphas.

f(x) = x < 0 ? alpha * x : x

const x = dl.tensor1d([-1, 2, -3, 4]);
const alpha = dl.scalar(0.1);

x.prelu(alpha).print();  // or dl.prelu(x, alpha)
Parameters:
  • x: Tensor The input tensor.
  • alpha: Tensor Scaling factor for negative values.
Returns: Tensor
ƒ dl.relu(x)

Computes rectified linear element-wise: max(x, 0)

const x = dl.tensor1d([-1, 2, -3, 4]);

x.relu().print();  // or dl.relu(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.selu(x)

Computes scaled exponential linear element-wise.

x < 0 ? scale * alpha * (exp(x) - 1) : x

const x = dl.tensor1d([-1, 2, -3, 4]);

x.selu().print();  // or dl.selu(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.sigmoid(x)

Computes sigmoid element-wise, 1 / (1 + exp(-x))

const x = dl.tensor1d([0, -1, 2, -3]);

x.sigmoid().print();  // or dl.sigmoid(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.sin(x)

Computes sin of the input Tensor element-wise: sin(x)

const x = dl.tensor1d([0, Math.PI / 2, Math.PI * 3 / 4]);

x.sin().print();  // or dl.sin(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.sinh(x)

Computes hyperbolic sin of the input Tensor element-wise: sinh(x)

const x = dl.tensor1d([0, 1, -1, .7]);

x.sinh().print();  // or dl.sinh(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.sqrt(x)

Computes square root of the input Tensor element-wise: y = sqrt(x)

const x = dl.tensor1d([1, 2, 4, -1]);

x.sqrt().print();  // or dl.sqrt(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.square(x)

Computes square of x element-wise: x ^ 2

const x = dl.tensor1d([1, 2, Math.sqrt(2), -1]);

x.square().print();  // or dl.square(x)
Parameters:
  • x: Tensor The input Tensor.
Returns: Tensor
ƒ dl.step(x, alpha?)

Computes step of the input Tensor element-wise: x > 0 ? 1 : alpha * x

const x = dl.tensor1d([0, 2, -1, -3]);

x.step(.5).print();  // or dl.step(x, .5)
Parameters:
  • x: Tensor The input tensor.
  • alpha: number The gradient when input is negative. Optional
Returns: Tensor
ƒ dl.tan(x)

Computes tan of the input Tensor element-wise, tan(x)

const x = dl.tensor1d([0, Math.PI / 2, Math.PI * 3 / 4]);

x.tan().print();  // or dl.tan(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
ƒ dl.tanh(x)

Computes hyperbolic tangent of the input Tensor element-wise: tanh(x)

const x = dl.tensor1d([0, 1, -1, 70]);

x.tanh().print();  // or dl.tanh(x)
Parameters:
  • x: Tensor The input tensor.
Returns: Tensor
Operations  /  Matrices
ƒ dl.matMul(a, b, transposeA?, transposeB?)

Computes the dot product of two matrices, A * B. These must be matrices.

const a = dl.tensor2d([1, 2], [1, 2]);
const b = dl.tensor2d([1, 2, 3, 4], [2, 2]);

a.matMul(b).print();  // or dl.matMul(a, b)
Parameters:
  • a: Tensor2D First matrix in dot product operation.
  • b: Tensor2D Second matrix in dot product operation.
  • transposeA: boolean If true, a is transposed before multiplication. Optional
  • transposeB: boolean If true, b is transposed before multiplication. Optional
Returns: Tensor2D
Defined in ops/matmul.ts#40
ƒ dl.norm(x, ord?, axis?, keepDims?)

Computes the norm of scalar, vectors, and matrices. This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, and inf-norm).

const x = dl.tensor1d([1, 2, 3, 4]);

x.norm().print();  // or dl.norm(x)
Parameters:
  • x: Tensor The input array.
  • ord: number|'euclidean'|'fro' Order of the norm. Supported norm types:

        ord         | norm for matrices        | norm for vectors
        ------------|--------------------------|---------------------
        'euclidean' | Frobenius norm           | 2-norm
        'fro'       | Frobenius norm           | -
        Infinity    | max(sum(abs(x), axis=1)) | max(abs(x))
        -Infinity   | min(sum(abs(x), axis=1)) | min(abs(x))
        1           | max(sum(abs(x), axis=0)) | sum(abs(x))
        2           | -                        | sum(abs(x)^2)^(1/2)

    Optional
  • axis: number|number[] If axis is null (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the Tensor, i.e. norm(x, ord) is equivalent to norm(x.reshape([-1]), ord). If axis is an integer, the input is considered a batch of vectors, and axis determines the axis in x over which to compute vector norms. If axis is a 2-tuple of integers, the input is considered a batch of matrices, and axis determines the axes in x over which to compute a matrix norm. Optional
  • keepDims: boolean If true, the norm has the same dimensionality as the input. Optional
Returns: Tensor
Defined in ops/norm.ts#62
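As an illustration of the vector norms in the table above, here is a plain-JavaScript sketch (not part of deeplearn.js; the function name vectorNorm is only for this example) that computes the supported vector norms for a flat array of numbers:

```javascript
// Vector norms for a flat array, mirroring the 'norm for vectors' column.
function vectorNorm(values, ord = 'euclidean') {
  const abs = values.map(Math.abs);
  if (ord === 'euclidean' || ord === 2) {
    return Math.sqrt(abs.reduce((s, v) => s + v * v, 0));  // 2-norm
  }
  if (ord === 1) return abs.reduce((s, v) => s + v, 0);    // sum(abs(x))
  if (ord === Infinity) return Math.max(...abs);           // max(abs(x))
  if (ord === -Infinity) return Math.min(...abs);          // min(abs(x))
  throw new Error('Unsupported ord: ' + ord);
}

console.log(vectorNorm([1, 2, 3, 4]));  // sqrt(30) ≈ 5.477, same as x.norm()
```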
ƒ dl.outerProduct(v1, v2)

Computes the outer product of two vectors, v1 and v2.

const a = dl.tensor1d([1, 2, 3]);
const b = dl.tensor1d([3, 4, 5]);

dl.outerProduct(a, b).print();
Parameters:
  • v1: Tensor1D The first vector in the outer product operation.
  • v2: Tensor1D The second vector in the outer product operation.
Returns: Tensor2D
Defined in ops/matmul.ts#156
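For reference, the math here is simple enough to sketch in plain JavaScript (this helper is not part of deeplearn.js): the outer product of v1 (length m) and v2 (length n) is the m x n matrix whose entry [i][j] is v1[i] * v2[j].

```javascript
// result[i][j] = v1[i] * v2[j]
function outerProduct(v1, v2) {
  return v1.map(a => v2.map(b => a * b));
}

console.log(outerProduct([1, 2, 3], [3, 4, 5]));
// [[3, 4, 5], [6, 8, 10], [9, 12, 15]]
```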
ƒ dl.transpose(x, perm?)

Transposes the Tensor. Permutes the dimensions according to perm.

The returned Tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to [n-1...0], where n is the rank of the input Tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

const a = dl.tensor2d([1, 2, 3, 4, 5, 6], [2, 3]);

a.transpose().print();  // or dl.transpose(a)
Parameters:
  • x: Tensor The tensor to transpose.
  • perm: number[] The permutation of the dimensions of a. Optional
Returns: Tensor
Defined in ops/transpose.ts#44
Operations  /  Convolution
ƒ dl.avgPool(x, filterSize, strides, pad, dimRoundingMode?)

Computes the 2D average pooling of an image.

Parameters:
  • x: Tensor3D|Tensor4D The input tensor, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • filterSize: [number, number]|number The filter size, a tuple [filterHeight, filterWidth].
  • strides: [number, number]|number The strides of the pooling: [strideHeight, strideWidth].
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/pool.ts#210
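To make the windowing concrete, here is a plain-JavaScript sketch (not part of deeplearn.js) of 2D average pooling over a single-channel image with 'valid' padding: each output value is the mean of one filterSize x filterSize window.

```javascript
// Average pooling over a 2D array with 'valid' padding.
function avgPool2d(image, filterSize, stride) {
  const out = [];
  for (let r = 0; r + filterSize <= image.length; r += stride) {
    const row = [];
    for (let c = 0; c + filterSize <= image[0].length; c += stride) {
      let sum = 0;
      for (let i = 0; i < filterSize; i++) {
        for (let j = 0; j < filterSize; j++) sum += image[r + i][c + j];
      }
      row.push(sum / (filterSize * filterSize));
    }
    out.push(row);
  }
  return out;
}

// 2x2 windows, stride 2, over a 4x4 image:
console.log(avgPool2d(
    [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]], 2, 2));
// [[3.5, 5.5], [11.5, 13.5]]
```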
ƒ dl.conv1d(input, filter, stride, pad, dimRoundingMode?)

Computes a 1D convolution over the input x.

Parameters:
  • input: Tensor2D|Tensor3D The input tensor, of rank 3 or rank 2, of shape [batch, width, inChannels]. If rank 2, batch of 1 is assumed.
  • filter: Tensor3D The filter, rank 3, of shape [filterWidth, inDepth, outDepth].
  • stride: number The number of entries by which the filter is moved right at each step.
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor2D|Tensor3D
Defined in ops/conv.ts#47
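The sliding-window arithmetic can be sketched in plain JavaScript (not part of deeplearn.js), assuming the simplest case of one input channel, one output channel, and 'valid' padding: each output is the dot product of the filter with a window of x.

```javascript
// 1D 'valid' convolution, single in/out channel.
function conv1dValid(x, filter, stride = 1) {
  const out = [];
  for (let i = 0; i + filter.length <= x.length; i += stride) {
    let sum = 0;
    for (let j = 0; j < filter.length; j++) sum += x[i + j] * filter[j];
    out.push(sum);
  }
  return out;
}

console.log(conv1dValid([1, 2, 3, 4], [1, 0, -1]));  // [-2, -2]
```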
ƒ dl.conv2d(x, filter, strides, pad, dimRoundingMode?)

Computes a 2D convolution over the input x.

Parameters:
  • x: Tensor3D|Tensor4D The input tensor, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • filter: Tensor4D The filter, rank 4, of shape [filterHeight, filterWidth, inDepth, outDepth].
  • strides: [number, number]|number The strides of the convolution: [strideHeight, strideWidth].
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/conv.ts#113
ƒ dl.conv2dTranspose(x, filter, outputShape, strides, pad, dimRoundingMode?)

Computes the transposed 2D convolution of an image, also known as a deconvolution.

Parameters:
  • x: Tensor3D|Tensor4D The input image, of rank 4 or rank 3, of shape [batch, height, width, inDepth]. If rank 3, batch of 1 is assumed.
  • filter: Tensor4D The filter, rank 4, of shape [filterHeight, filterWidth, outDepth, inDepth]. inDepth must match inDepth in x.
  • outputShape: [number, number, number, number]|[number, number, number] Output shape, of rank 4 or rank 3: [batch, height, width, outDepth]. If rank 3, batch of 1 is assumed.
  • strides: [number, number]|number The strides of the original convolution: [strideHeight, strideWidth].
  • pad: 'valid'|'same'|number The type of padding algorithm used in the non-transpose version of the op.
  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/conv.ts#323
ƒ dl.depthwiseConv2d(input, filter, strides, pad, rates?, dimRoundingMode?)

Depthwise 2D convolution.

Given a 4D input array and a filter array of shape [filterHeight, filterWidth, inChannels, channelMultiplier] containing inChannels convolutional filters of depth 1, this op applies a different filter to each input channel (expanding from 1 channel to channelMultiplier channels for each), then concatenates the results together. The output has inChannels * channelMultiplier channels.

See https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d for more details.

Parameters:
  • input: Tensor3D|Tensor4D The input tensor, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • filter: Tensor4D The filter tensor, rank 4, of shape [filterHeight, filterWidth, inChannels, channelMultiplier].
  • strides: [number, number]|number The strides of the convolution: [strideHeight, strideWidth]. If strides is a single number, then strideHeight == strideWidth.
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • rates: [number, number]|number The dilation rates: [rateHeight, rateWidth] in which we sample input values across the height and width dimensions in atrous convolution. Defaults to [1, 1]. If rate is a single number, then rateHeight == rateWidth. If it is greater than 1, then all values of strides must be 1. Optional
  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/conv.ts#374
ƒ dl.maxPool(x, filterSize, strides, pad, dimRoundingMode?)

Computes the 2D max pooling of an image.

Parameters:
  • x: Tensor3D|Tensor4D The input tensor, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • filterSize: [number, number]|number The filter size, a tuple [filterHeight, filterWidth].
  • strides: [number, number]|number The strides of the pooling: [strideHeight, strideWidth].
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/pool.ts#45
ƒ dl.minPool(input, filterSize, strides, pad, dimRoundingMode?)

Computes the 2D min pooling of an image.

Parameters:
  • input: Tensor3D|Tensor4D The input tensor, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • filterSize: [number, number]|number The filter size, a tuple [filterHeight, filterWidth].
  • strides: [number, number]|number The strides of the pooling: [strideHeight, strideWidth].
  • pad: 'valid'|'same'|number The type of padding algorithm.

  • dimRoundingMode: 'floor'|'round'|'ceil' The rounding mode used when computing output dimensions if pad is a number. If none is provided, it will not round and error if the output is of fractional size. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/pool.ts#160
Operations  /  Reduction
ƒ dl.argMax(x, axis?)

Returns the indices of the maximum values along an axis.

The result has the same shape as input with the dimension along axis removed.

const x = dl.tensor1d([1, 2, 3]);

x.argMax().print();  // or dl.argMax(x)
const x = dl.tensor2d([1, 2, 4, 3], [2, 2]);

const axis = 1;
x.argMax(axis).print();  // or dl.argMax(x, axis)
Parameters:
  • x: Tensor The input tensor.
  • axis: number The dimension to reduce. By default it reduces across all axes and returns the flat index. Optional
Returns: Tensor
ƒ dl.argMin(x, axis?)

Returns the indices of the minimum values along an axis.

The result has the same shape as input with the dimension along axis removed.

const x = dl.tensor1d([1, 2, 3]);

x.argMin().print();  // or dl.argMin(x)
const x = dl.tensor2d([1, 2, 4, 3], [2, 2]);

const axis = 1;
x.argMin(axis).print();  // or dl.argMin(x, axis)
Parameters:
  • x: Tensor The input tensor.
  • axis: number The dimension to reduce. By default it reduces across all axes and returns the flat index. Optional
Returns: Tensor
ƒ dl.logSumExp(input, axis?, keepDims?)

Computes log(sum(exp(elements across the reduction dimensions))).

Reduces the input along the dimensions given in axis. Unless keepDims is true, the rank of the array is reduced by 1 for each entry in axis. If keepDims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and an array with a single element is returned.

const x = dl.tensor1d([1, 2, 3]);

x.logSumExp().print();  // or dl.logSumExp(x)
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.logSumExp(axis).print();  // or dl.logSumExp(x, axis)
Parameters:
  • input: Tensor The input tensor.
  • axis: number|number[] The dimension(s) to reduce. If null (the default), reduces all dimensions. Optional
  • keepDims: boolean If true, retains reduced dimensions with length of 1. Defaults to false. Optional
Returns: Tensor
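The quantity being computed can be sketched in plain JavaScript (this helper is not part of deeplearn.js), using the numerically stable rearrangement log(sum(exp(x))) = max(x) + log(sum(exp(x - max(x)))), which avoids overflow for large inputs:

```javascript
// Numerically stable log-sum-exp over a flat array.
function logSumExp(values) {
  const max = Math.max(...values);
  const sum = values.reduce((s, v) => s + Math.exp(v - max), 0);
  return max + Math.log(sum);
}

console.log(logSumExp([1, 2, 3]));  // ≈ 3.4076, same as x.logSumExp()
```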
ƒ dl.max(x, axis?, keepDims?)

Computes the maximum of elements across dimensions of a Tensor.

Reduces the input along the dimensions given in axes. Unless keepDims is true, the rank of the Tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced dimensions are retained with length 1. If axes has no entries, all dimensions are reduced, and a Tensor with a single element is returned.

const x = dl.tensor1d([1, 2, 3]);

x.max().print();  // or dl.max(x)
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.max(axis).print();  // or dl.max(x, axis)
Parameters:
  • x: Tensor The input tensor.
  • axis: number|number[] The dimension(s) to reduce. By default it reduces all dimensions. Optional
  • keepDims: boolean If true, retains reduced dimensions with size 1. Optional
Returns: Tensor
ƒ dl.mean(x, axis?, keepDims?)

Computes the mean of elements across dimensions of a Tensor.

Reduces x along the dimensions given in axis. Unless keepDims is true, the rank of the Tensor is reduced by 1 for each entry in axis. If keepDims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a Tensor with a single element is returned.

const x = dl.tensor1d([1, 2, 3]);

x.mean().print();  // or dl.mean(x)
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.mean(axis).print();  // or dl.mean(x, axis)
Parameters:
  • x: Tensor The input tensor.
  • axis: number|number[] The dimension(s) to reduce. By default it reduces all dimensions. Optional
  • keepDims: boolean If true, retains reduced dimensions with size 1. Optional
Returns: Tensor
ƒ dl.min(x, axis?, keepDims?)

Computes the minimum of elements across dimensions of a Tensor.

Reduces the input along the dimensions given in axes. Unless keepDims is true, the rank of the array is reduced by 1 for each entry in axes. If keepDims is true, the reduced dimensions are retained with length 1. If axes has no entries, all dimensions are reduced, and an array with a single element is returned.

const x = dl.tensor1d([1, 2, 3]);

x.min().print();  // or dl.min(x)
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.min(axis).print();  // or dl.min(x, axis)
Parameters:
  • x: Tensor The input Tensor.
  • axis: number|number[] The dimension(s) to reduce. By default it reduces all dimensions. Optional
  • keepDims: boolean If true, retains reduced dimensions with size 1. Optional
Returns: Tensor
ƒ dl.sum(x, axis?, keepDims?)

Computes the sum of elements across dimensions of a Tensor.

Reduces the input along the dimensions given in axes. Unless keepDims is true, the rank of the Tensor is reduced by 1 for each entry in axes. If keepDims is true, the reduced dimensions are retained with length 1. If axes has no entries, all dimensions are reduced, and a Tensor with a single element is returned.

const x = dl.tensor1d([1, 2, 3]);

x.sum().print();  // or dl.sum(x)
const x = dl.tensor2d([1, 2, 3, 4], [2, 2]);

const axis = 1;
x.sum(axis).print();  // or dl.sum(x, axis)
Parameters:
  • x: Tensor The input tensor to compute the sum over.
  • axis: number|number[] The dimension(s) to reduce. By default it reduces all dimensions. Optional
  • keepDims: boolean If true, retains reduced dimensions with size 1. Optional
Returns: Tensor
Operations  /  Normalization
ƒ dl.batchNormalization(x, mean, variance, varianceEpsilon?, scale?, offset?)

Batch normalization.

As described in http://arxiv.org/abs/1502.03167.

Mean, variance, scale, and offset can be of two shapes:

  • The same shape as the input.
  • In the common case, the depth dimension is the last dimension of x, so the values would be a Tensor1D of shape [depth].
Parameters:
Returns: Tensor
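The per-element formula from the paper can be sketched in plain JavaScript (not part of deeplearn.js), here for the case where x, mean, and variance are flat arrays of the same length and scale/offset are scalars:

```javascript
// scale * (x - mean) / sqrt(variance + epsilon) + offset, element-wise.
function batchNorm(x, mean, variance, epsilon = 0.001, scale = 1, offset = 0) {
  return x.map((v, i) =>
      scale * (v - mean[i]) / Math.sqrt(variance[i] + epsilon) + offset);
}

console.log(batchNorm([2, 4], [1, 2], [1, 4], 0));  // [1, 1]
```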
ƒ dl.localResponseNormalization(x, radius?, bias?, alpha?, beta?, normRegion?)

Normalizes the activation of a local neighborhood across or within channels.

Parameters:
  • x: Tensor3D|Tensor4D The input tensor. The 4-D input tensor is treated as a 3-D array of 1D vectors (along the last dimension), and each vector is normalized independently.
  • radius: number The number of adjacent channels or spatial locations of the 1D normalization window. In Tensorflow this param is called 'depth_radius' because only 'acrossChannels' mode is supported. Optional
  • bias: number A constant bias term for the normalization (usually positive, to avoid dividing by zero). Optional
  • alpha: number A scale factor, usually positive. Optional
  • beta: number An exponent. Optional
  • normRegion: 'acrossChannels'|'withinChannel' Default is 'acrossChannels'. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/lrn.ts#40
ƒ dl.moments(x, axis?, keepDims?)

Calculates the mean and variance of x. The mean and variance are calculated by aggregating the contents of x across axes. If x is 1-D and axes = [0] this is just the mean and variance of a vector.

Parameters:
  • x: Tensor The input tensor.
  • axis: number|number[] The dimension(s) along which to compute mean and variance. By default it reduces all dimensions. Optional
  • keepDims: boolean If true, the moments have the same dimensionality as the input. Optional
Returns: {mean: Tensor, variance: Tensor}
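For the 1-D case mentioned above (x is a vector and axes = [0]), the result is just the ordinary mean and population variance, sketched here in plain JavaScript (not part of deeplearn.js):

```javascript
// Mean and (population) variance of a flat array.
function moments(values) {
  const mean = values.reduce((s, v) => s + v, 0) / values.length;
  const variance =
      values.reduce((s, v) => s + (v - mean) * (v - mean), 0) / values.length;
  return {mean, variance};
}

console.log(moments([1, 2, 3, 4]));  // {mean: 2.5, variance: 1.25}
```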
ƒ dl.softmax(logits, dim?)

Computes the softmax normalized vector given the logits.

const a = dl.tensor1d([1, 2, 3]);

a.softmax().print();  // or dl.softmax(a)
const a = dl.tensor2d([2, 4, 6, 1, 2, 3], [2, 3]);

a.softmax().print();  // or dl.softmax(a)
Parameters:
  • logits: Tensor The logits array.
  • dim: number The dimension softmax would be performed on. Defaults to -1 which indicates the last dimension. Optional
Returns: Tensor
Defined in ops/softmax.ts#47
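What softmax computes for a single vector can be sketched in plain JavaScript (not part of deeplearn.js): exponentiate (shifting by the max for numerical stability, which leaves the result unchanged) and normalize so the outputs sum to 1.

```javascript
// softmax(x)[i] = exp(x[i]) / sum(exp(x)), computed stably.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(v => Math.exp(v - max));
  const sum = exps.reduce((s, v) => s + v, 0);
  return exps.map(v => v / sum);
}

console.log(softmax([1, 2, 3]));  // ≈ [0.0900, 0.2447, 0.6652], sums to 1
```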
Operations  /  Images
ƒ dl.image.resizeBilinear(images, size, alignCorners?)

Bilinearly resizes a batch of 3D images to a new shape.

Parameters:
  • images: Tensor3D|Tensor4D The images, of rank 4 or rank 3, of shape [batch, height, width, inChannels]. If rank 3, batch of 1 is assumed.
  • size: [number, number] The new shape [newHeight, newWidth] to resize the images to. Each channel is resized individually.
  • alignCorners: boolean Defaults to false. If true, rescale input by (new_height - 1) / (height - 1), which exactly aligns the 4 corners of images and resized images. If false, rescale by new_height / height. Treat similarly the width dimension. Optional
Returns: Tensor3D|Tensor4D
Defined in ops/image_ops.ts#37
Operations  /  RNN
ƒ dl.basicLSTMCell(forgetBias, lstmKernel, lstmBias, data, c, h)

Computes the next state and output of a BasicLSTMCell.

Returns [newC, newH].

Derived from tf.contrib.rnn.BasicLSTMCell.

Parameters:
  • forgetBias: Scalar Forget bias for the cell.
  • lstmKernel: Tensor2D The weights for the cell.
  • lstmBias: Tensor1D The bias for the cell.
  • data: Tensor2D The input to the cell.
  • c: Tensor2D Previous cell state.
  • h: Tensor2D Previous cell output.
Returns: [Tensor2D, Tensor2D]
Defined in ops/lstm.ts#80
ƒ dl.multiRNNCell(lstmCells, data, c, h)

Computes the next states and outputs of a stack of LSTMCells.

Each cell output is used as input to the next cell.

Returns [cellState, cellOutput].

Derived from tf.contrib.rnn.MultiRNNCell.

Parameters:
Returns: [Tensor2D[], Tensor2D[]]
Defined in ops/lstm.ts#44
Operations  /  Logical
ƒ dl.equal(a, b)

Returns the truth value of (a == b) element-wise. Supports broadcasting.

We also expose equalStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#97
ƒ dl.greater(a, b)

Returns the truth value of (a > b) element-wise. Supports broadcasting.

We also expose greaterStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#143
ƒ dl.greaterEqual(a, b)

Returns the truth value of (a >= b) element-wise. Supports broadcasting.

We also expose greaterEqualStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#166
ƒ dl.less(a, b)

Returns the truth value of (a < b) element-wise. Supports broadcasting.

We also expose lessStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#66
ƒ dl.lessEqual(a, b)

Returns the truth value of (a <= b) element-wise. Supports broadcasting.

We also expose lessEqualStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#120
ƒ dl.logicalAnd(a, b)

Returns the truth value of a AND b element-wise. Supports broadcasting.

Parameters:
  • a: Tensor The first input tensor. Must be of dtype bool.
  • b: Tensor The second input tensor. Must be of dtype bool.
Returns: Tensor
ƒ dl.logicalNot(x)

Returns the truth value of NOT x element-wise.

Parameters:
  • x: Tensor The input tensor. Must be of dtype 'bool'.
Returns: Tensor
ƒ dl.logicalOr(a, b)

Returns the truth value of a OR b element-wise. Supports broadcasting.

Parameters:
  • a: Tensor The first input tensor. Must be of dtype bool.
  • b: Tensor The second input tensor. Must be of dtype bool.
Returns: Tensor
ƒ dl.logicalXor(a, b)

Returns the truth value of a XOR b element-wise. Supports broadcasting.

Parameters:
  • a: Tensor The first input tensor. Must be of dtype bool.
  • b: Tensor The second input tensor. Must be of dtype bool.
Returns: Tensor
ƒ dl.notEqual(a, b)

Returns the truth value of (a != b) element-wise. Supports broadcasting.

We also expose notEqualStrict which has the same signature as this op and asserts that a and b are the same shape (does not broadcast).

Parameters:
  • a: Tensor The first input tensor.
  • b: Tensor The second input tensor. Must have the same dtype as a.
Returns: Tensor
Defined in ops/compare.ts#35
ƒ dl.where(condition, a, b)

Returns the elements, either a or b depending on the condition.

If the condition is true, select from a, otherwise select from b.

Parameters:
  • condition: Tensor The input condition. Must be of dtype bool.
  • a: Tensor If condition is rank 1, a may have a higher rank but its first dimension must match the size of condition.
  • b: Tensor A tensor with the same shape and type as a.
Returns: Tensor
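The element-wise case (condition, a, and b all the same shape) can be sketched in plain JavaScript (not part of deeplearn.js); the rank-1-condition case described above instead selects whole rows of a or b.

```javascript
// Pick a[i] where condition[i] is true, otherwise b[i].
function where(condition, a, b) {
  return condition.map((c, i) => (c ? a[i] : b[i]));
}

console.log(where([true, false, true], [1, 2, 3], [-1, -2, -3]));  // [1, -2, 3]
```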
Training  /  Gradients
ƒ dl.grad(f)

Provided f(x), returns another function g(x, dy?), which gives the gradient of f(x) with respect to x.

If dy is provided, the gradient of f(x).mul(dy).sum() with respect to x is computed instead. f(x) must take a single tensor x and return a single tensor y. If f() takes multiple inputs, use dl.grads() instead.

// f(x) = x ^ 2
const f = x => x.square();
// f'(x) = 2x
const g = dl.grad(f);

const x = dl.tensor1d([2, 3]);
g(x).print();
// f(x) = x ^ 3
const f = x => x.pow(dl.scalar(3, 'int32'));
// f'(x) = 3x ^ 2
const g = dl.grad(f);
// f''(x) = 6x
const gg = dl.grad(g);

const x = dl.tensor1d([2, 3]);
gg(x).print();
Parameters:
  • f: (x: Tensor) => Tensor The function f(x), to compute gradient for.
Returns: (x: Tensor, dy?: Tensor) => Tensor
Defined in gradients.ts#76
ƒ dl.grads(f)

Provided f(x1, x2,...), returns another function g([x1, x2,...], dy?), which gives an array of gradients of f() with respect to each input [x1,x2,...].

If dy is passed when calling g(), the gradient of f(x1,...).mul(dy).sum() with respect to each input is computed instead. The provided f must take one or more tensors and return a single tensor y. If f() takes a single input, we recommend using dl.grad() instead.

// f(a, b) = a * b
const f = (a, b) => a.mul(b);
// df / da = b, df / db = a
const g = dl.grads(f);

const a = dl.tensor1d([2, 3]);
const b = dl.tensor1d([-2, -3]);
const [da, db] = g([a, b]);
console.log('da');
da.print();
console.log('db');
db.print();
Parameters:
  • f: (...args: Tensor[]) => Tensor The function f(x1, x2,...) to compute gradients for.
Returns: (args: Tensor[], dy?: Tensor) => Tensor[]
Defined in gradients.ts#127
ƒ dl.customGrad(f)

Overrides the gradient computation of a function f.

Takes a function f(...inputs) => {value: Tensor, gradFunc: dy => Tensor[]} and returns another function g(...inputs) which takes the same inputs as f. When called, g returns f().value. In backward mode, custom gradients with respect to each input of f are computed using f().gradFunc.

const customOp = dl.customGrad(x => {
   // Override gradient of our custom x ^ 2 op to be dy * abs(x);
   return {value: x.square(), gradFunc: dy => [dy.mul(x.abs())]};
});

const x = dl.tensor1d([-1, -2, 3]);
const dx = dl.grad(x => customOp(x));

console.log(`f(x):`);
customOp(x).print();
console.log(`f'(x):`);
dx(x).print();
Parameters:
  • f: (a: Tensor, b: Tensor,...) => { value: Tensor, gradFunc: (dy: Tensor) => Tensor|Tensor[] } The function to evaluate in forward mode, which should return {value: Tensor, gradFunc: (dy) => Tensor[]}, where gradFunc returns the custom gradients of f with respect to its inputs.
Returns: (...args: Tensor[]) => Tensor
Defined in gradients.ts#345
ƒ dl.valueAndGrad(f)

Like dl.grad(), but also returns the value of f(). Useful when f() returns a metric you want to show.

The result is a rich object with the following properties:

  • grad: The gradient of f(x) w.r.t x (result of dl.grad()).
  • value: The value returned by f(x).
// f(x) = x ^ 2
const f = x => x.square();
// f'(x) = 2x
const g = dl.valueAndGrad(f);

const x = dl.tensor1d([2, 3]);
const {value, grad} = g(x);

console.log('value');
value.print();
console.log('grad');
grad.print();
Parameters:
Returns: (x: Tensor, dy?: Tensor) => { value: Tensor; grad: Tensor; }
Defined in gradients.ts#175
ƒ dl.valueAndGrads(f)

Like dl.grads(), but also returns the value of f(). Useful when f() returns a metric you want to show.

The result is a rich object with the following properties:

  • grads: The gradients of f() w.r.t each input (result of dl.grads()).
  • value: The value returned by f(x).
// f(a, b) = a * b
const f = (a, b) => a.mul(b);
// df/da = b, df/db = a
const g = dl.valueAndGrads(f);

const a = dl.tensor1d([2, 3]);
const b = dl.tensor1d([-2, -3]);
const {value, grads} = g([a, b]);

const [da, db] = grads;

console.log('value');
value.print();

console.log('da');
da.print();
console.log('db');
db.print();
Parameters:
Returns: (args: Tensor[], dy?: Tensor) => { grads: Tensor[]; value: Tensor; }
Defined in gradients.ts#226
ƒ dl.variableGrads(f, varList?)

Computes and returns the gradient of f(x) with respect to the list of trainable variables provided by varList. If no list is provided, it defaults to all trainable variables.

const a = dl.variable(dl.tensor1d([3, 4]));
const b = dl.variable(dl.tensor1d([5, 6]));
const x = dl.tensor1d([1, 2]);

// f(a, b) = a * x ^ 2 + b * x
const f = () => a.mul(x.square()).add(b.mul(x)).sum();
// df/da = x ^ 2, df/db = x
const {value, grads} = dl.variableGrads(f);

Object.keys(grads).forEach(varName => grads[varName].print());
Parameters:
  • f: () => Scalar The function to execute. f() should return a scalar.
  • varList: Variable[] Optional
Returns: {value: Scalar, grads: {[name: string]: Tensor}}
Defined in gradients.ts#274
Training  /  Optimizers
ƒ dl.train.sgd(learningRate)

Constructs an SGDOptimizer that uses stochastic gradient descent.

// Fit a quadratic function by learning the coefficients a, b, c.
const xs = dl.tensor1d([0, 1, 2, 3]);
const ys = dl.tensor1d([1.1, 5.9, 16.8, 33.9]);

const a = dl.variable(dl.scalar(Math.random()));
const b = dl.variable(dl.scalar(Math.random()));
const c = dl.variable(dl.scalar(Math.random()));

// y = a * x^2 + b * x + c.
const f = x => a.mul(x.square()).add(b.mul(x)).add(c);
const loss = (pred, label) => pred.sub(label).square().mean();

const learningRate = 0.01;
const optimizer = dl.train.sgd(learningRate);

// Train the model.
for (let i = 0; i < 10; i++) {
   optimizer.minimize(() => loss(f(xs), ys));
}

// Make predictions.
console.log(
     `a: ${a.dataSync()}, b: ${b.dataSync()}, c: ${c.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
   console.log(`x: ${i}, pred: ${pred}`);
});
Parameters:
  • learningRate: number The learning rate to use for the SGD algorithm.
Returns: SGDOptimizer
ƒ dl.train.momentum(learningRate, momentum)

Constructs a MomentumOptimizer that uses momentum gradient descent.

Parameters:
  • learningRate: number The learning rate to use for the momentum gradient descent algorithm.
  • momentum: number The momentum to use for the momentum gradient descent algorithm.
Returns: MomentumOptimizer
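The standard momentum update this optimizer applies to each variable can be sketched in plain JavaScript for a single scalar (this helper is not part of deeplearn.js): an accumulator blends past gradients, and the variable steps along the accumulator.

```javascript
// accumulation = momentum * accumulation + gradient
// variable    -= learningRate * accumulation
function momentumStep(variable, gradient, accumulation, learningRate, momentum) {
  const newAccumulation = momentum * accumulation + gradient;
  return {
    variable: variable - learningRate * newAccumulation,
    accumulation: newAccumulation,
  };
}

console.log(momentumStep(1.0, 0.5, 0.2, 0.1, 0.9));
// accumulation = 0.9 * 0.2 + 0.5 = 0.68; variable ≈ 1.0 - 0.1 * 0.68 = 0.932
```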
ƒ dl.train.adagrad(learningRate, initialAccumulatorValue?)

Constructs an AdagradOptimizer that uses the Adagrad algorithm. See http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf or http://ruder.io/optimizing-gradient-descent/index.html#adagrad

Parameters:
  • learningRate: number
  • initialAccumulatorValue: number Optional
Returns: AdagradOptimizer
ƒ dl.train.adadelta(learningRate?, rho?, epsilon?)

Constructs an AdadeltaOptimizer that uses the Adadelta algorithm. See https://arxiv.org/abs/1212.5701

Parameters:
  • learningRate: number Optional
  • rho: number Optional
  • epsilon: number Optional
Returns: AdadeltaOptimizer
ƒ dl.train.adam(learningRate?, beta1?, beta2?, epsilon?)

Constructs an AdamOptimizer that uses the Adam algorithm. See https://arxiv.org/abs/1412.6980

Parameters:
  • learningRate: number Optional
  • beta1: number Optional
  • beta2: number Optional
  • epsilon: number Optional
Returns: AdamOptimizer
ƒ dl.train.adamax(learningRate?, beta1?, beta2?, epsilon?, decay?)

Constructs an AdamaxOptimizer that uses the Adamax algorithm. See https://arxiv.org/abs/1412.6980

Parameters:
  • learningRate: number Optional
  • beta1: number Optional
  • beta2: number Optional
  • epsilon: number Optional
  • decay: number Optional
Returns: AdamaxOptimizer
ƒ dl.train.rmsprop(learningRate, decay?, momentum?, epsilon?)

Constructs an RMSPropOptimizer that uses RMSProp gradient descent. This implementation uses plain momentum and is not the centered version of RMSProp.

See http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

Parameters:
  • learningRate: number The learning rate to use for the RMSProp gradient descent algorithm.
  • decay: number The discounting factor for the history/coming gradient. Optional
  • momentum: number The momentum to use for the RMSProp gradient descent algorithm. Optional
  • epsilon: number Small value to avoid zero denominator. Optional
Returns: RMSPropOptimizer
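The core (non-centered, momentum-free) RMSProp update can be sketched in plain JavaScript for a single scalar variable (not part of deeplearn.js; the momentum term the optimizer also supports is omitted here for clarity):

```javascript
// meanSquare = decay * meanSquare + (1 - decay) * gradient^2
// variable  -= learningRate * gradient / sqrt(meanSquare + epsilon)
function rmspropStep(variable, gradient, meanSquare,
                     learningRate, decay = 0.9, epsilon = 1e-8) {
  const newMeanSquare = decay * meanSquare + (1 - decay) * gradient * gradient;
  return {
    variable:
        variable - learningRate * gradient / Math.sqrt(newMeanSquare + epsilon),
    meanSquare: newMeanSquare,
  };
}

console.log(rmspropStep(1, 1, 0, 0.1));
```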
Training  /  Losses
ƒ dl.losses.softmaxCrossEntropy(labels, logits, dim?)

Computes softmax cross entropy between logits and labels.

Measures the probability error in discrete classification tasks in which the classes are mutually exclusive (each entry is in exactly one class). For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck, but not both.

NOTE: While the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

logits and labels must have the same shape, e.g. [batch_size, num_classes] and the same dtype.

Parameters:
  • labels: Tensor The labels array.
  • logits: Tensor The logits array.
  • dim: number The dimension softmax would be performed on. Defaults to -1 which indicates the last dimension. Optional
Returns: Tensor
Defined in ops/softmax.ts#103
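The quantity this loss computes for one example can be sketched in plain JavaScript (not part of deeplearn.js): -sum(labels * log(softmax(logits))), evaluated via the numerically stable identity log(softmax(logits)[i]) = logits[i] - logSumExp(logits). This is also why the op wants unscaled logits rather than softmax outputs.

```javascript
// Cross entropy from raw logits for a single example.
function softmaxCrossEntropy(labels, logits) {
  const max = Math.max(...logits);
  const logSumExp =
      max + Math.log(logits.reduce((s, v) => s + Math.exp(v - max), 0));
  // log(softmax(logits)[i]) = logits[i] - logSumExp
  return -labels.reduce((s, l, i) => s + l * (logits[i] - logSumExp), 0);
}

console.log(softmaxCrossEntropy([0, 0, 1], [1, 2, 3]));  // ≈ 0.4076
```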
Training  /  Classes
Class dl.train.Optimizer
Methods:
minimize(f, returnCost?, varList?) Executes f() and minimizes the scalar output of f() by computing gradients of y with respect to the list of trainable variables provided by varList. If no list is provided, it defaults to all trainable variables.
Parameters:
  • f: () => Scalar The function to execute and whose output to minimize.
  • returnCost: boolean Whether to return the scalar cost value produced by executing f(). Optional
  • varList: Variable[] An optional list of variables to update. If specified, only the trainable variables in varList will be updated by minimize. Defaults to all trainable variables. Optional
Performance  /  Memory
ƒ dl.tidy(nameOrFn, fn?, gradMode?)

Executes the provided function and after it is executed, cleans up all intermediate tensors allocated by the function except those returned by the function.

Using this method helps avoid memory leaks. In general, wrap calls to operations in dl.tidy() for automatic memory cleanup.

When in safe mode, you must enclose all Tensor creation and ops inside a dl.tidy() to prevent memory leaks.

// y = 2 ^ 2 + 1
const y = dl.tidy(() => {
   // a, b, and one will be cleaned up when the tidy ends.
   const one = dl.scalar(1);
   const a = dl.scalar(2);
   const b = a.square();

   console.log('numTensors (in tidy): ' + dl.memory().numTensors);

   // The value returned inside the tidy function is returned
   // through the tidy, in this case to the variable y.
   return b.add(one);
});

console.log('numTensors (outside tidy): ' + dl.memory().numTensors);
y.print();
Parameters:
  • nameOrFn: string|Function The name of the closure, or the function to execute. If a name is provided, the second argument must be the function to execute; in that case, when debug mode is on, the timing and memory usage of the function are tracked and displayed on the console under the provided name.
  • fn: Function The function to execute. Optional
  • gradMode: boolean If true, starts a tape and doesn't dispose tensors. Optional
Returns: Tensor|Tensor[]|{[key: string]: Tensor}|void
Defined in tracking.ts#64
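The scoping behavior described above can be sketched with a minimal tracker in plain JavaScript. This is an assumption-laden illustration of the pattern, not deeplearn.js's implementation:

```javascript
// Illustrative only: a minimal tidy-style scope that disposes every resource
// allocated inside it except the one it returns.
const scopes = [];

function track(resource) {
  if (scopes.length > 0) scopes[scopes.length - 1].push(resource);
  return resource;
}

function tidy(fn) {
  scopes.push([]);
  const result = fn();
  const tracked = scopes.pop();
  for (const r of tracked) {
    if (r !== result) r.dispose();   // keep only the returned resource
  }
  return result;
}

// Toy "tensors" that just flag whether they were disposed:
const makeTensor = () => track({disposed: false, dispose() { this.disposed = true; }});

let a, b;
const out = tidy(() => {
  a = makeTensor();        // intermediate: disposed when the tidy ends
  b = makeTensor();        // returned: survives the tidy
  return b;
});
console.log(a.disposed, b.disposed);
```

A real implementation must also handle nested scopes, values kept via dl.keep(), and tensors nested in arrays or objects, but the core bookkeeping is the same.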
ƒ dl.keep(result)

Keeps a Tensor generated inside a dl.tidy() from being disposed automatically.

let b;
const y = dl.tidy(() => {
   const one = dl.scalar(1);
   const a = dl.scalar(2);

   // b will not be cleaned up by the tidy. a and one will be cleaned up
   // when the tidy ends.
   b = dl.keep(a.square());

   console.log('numTensors (in tidy): ' + dl.memory().numTensors);

   // The value returned inside the tidy function is returned
   // through the tidy, in this case to the variable y.
   return b.add(one);
});

console.log('numTensors (outside tidy): ' + dl.memory().numTensors);
console.log('y:');
y.print();
console.log('b:');
b.print();
Parameters:
  • result: Tensor The tensor to keep from being disposed.
Returns: Tensor
Defined in tracking.ts#130
ƒ dl.memory()

Returns memory info at the current time in the program. The result is an object with the following properties:

  • numBytes: number of bytes allocated (undisposed) at this time.
  • numTensors: number of unique tensors allocated.
  • numDataBuffers: number of unique data buffers allocated (undisposed) at this time, which is ≤ the number of tensors (e.g. a.reshape(newShape) makes a new Tensor that shares the same data buffer with a).
  • unreliable: optional boolean:
    • On WebGL, not present (always reliable).
    • On CPU, true. Because of automatic garbage collection, these numbers count undisposed tensors, i.e. those neither wrapped in dl.tidy() nor released with a call to tensor.dispose().
Returns: MemoryInfo
Defined in environment.ts#258
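The numBytes figure follows directly from each tensor's shape and dtype. A small sketch, assuming the usual element sizes (float32 and int32 at 4 bytes per element, bool stored as 1 byte):

```javascript
// Illustrative only: deriving a tensor's byte count from its shape and dtype.
const BYTES_PER_ELEMENT = {float32: 4, int32: 4, bool: 1};

function numBytes(shape, dtype) {
  const numElements = shape.reduce((a, b) => a * b, 1);
  return numElements * BYTES_PER_ELEMENT[dtype];
}

console.log(numBytes([20, 20], 'float32')); // 400 elements * 4 bytes = 1600
console.log(numBytes([8], 'bool'));         // 8 elements * 1 byte  = 8
```

Note that numDataBuffers can be smaller than what a per-tensor sum suggests, since reshaped views share a single underlying buffer.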
Performance  /  Timing
ƒ dl.time(f)

Executes f() and returns a promise that resolves with timing information.

The result is an object with the following properties:

  • wallMs: wall execution time.
  • kernelMs: kernel execution time, ignoring data transfer.
  • On WebGL the following additional properties exist:
    • uploadWaitMs: CPU blocking time on texture uploads.
    • downloadWaitMs: CPU blocking time on texture downloads (readPixels).
const x = dl.randomNormal([20, 20]);
const time = await dl.time(() => x.matMul(x));

console.log(`kernelMs: ${time.kernelMs}, wallTimeMs: ${time.wallMs}`);
Parameters:
  • f: () => void The function to execute and time.
Returns: Promise
Defined in tracking.ts#155
ƒ dl.nextFrame()

Returns a promise that resolves when a requestAnimationFrame callback has completed.

This is simply a convenience method so that users can write: await dl.nextFrame();

Returns: Promise
Defined in browser_util.ts#26
Environment  / 
ƒ dl.setBackend(backendType, safeMode?)

Sets the backend (cpu, webgl, etc) responsible for creating tensors and executing operations on those tensors.

Parameters:
  • backendType: 'webgl'|'cpu' The backend type. Currently supports 'webgl'|'cpu'.
  • safeMode: boolean Defaults to false. In safe mode, you are forced to construct tensors and call math operations inside a dl.tidy() which will automatically clean up intermediate tensors. Optional
Returns: void
Defined in environment.ts#224
ƒ dl.getBackend()

Returns the current backend (cpu, webgl, etc). The backend is responsible for creating tensors and executing operations on those tensors.

Returns: 'webgl'|'cpu'
Defined in environment.ts#236
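The setBackend()/getBackend() pair is a registry of named backends with one marked current. A minimal sketch of that pattern in plain JavaScript (the registry contents here are placeholders, not the library's backend objects):

```javascript
// Illustrative only: a toy backend registry mirroring setBackend()/getBackend().
const backends = {
  cpu:   {name: 'cpu'},     // stand-in for the CPU backend
  webgl: {name: 'webgl'},   // stand-in for the WebGL backend
};
let currentBackend = 'cpu'; // a default backend is selected up front

function setBackend(backendType) {
  if (!(backendType in backends)) {
    throw new Error(`Unknown backend: ${backendType}`);
  }
  currentBackend = backendType;
}

function getBackend() {
  return currentBackend;
}

setBackend('webgl');
console.log(getBackend()); // 'webgl'
```

Because every tensor is created and executed by the current backend, setBackend() is best called once at startup, before any tensors exist.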