Arrays with different sizes cannot be added, subtracted, or generally used in arithmetic.

A way to overcome this is to duplicate the smaller array so that it has the same dimensionality and size as the larger array. This is called array broadcasting and is available in NumPy when performing array arithmetic, which can greatly reduce and simplify your code.

In this tutorial, you will discover the concept of array broadcasting and how to implement it in NumPy.

After completing this tutorial, you will know:

• The problem of arithmetic with arrays with different sizes.
• The solution of broadcasting and common examples in one and two dimensions.
• The rule of array broadcasting and when broadcasting fails.

Let’s get started.

Introduction to Broadcasting with NumPy Arrays
Photo by pbkwee, some rights reserved.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

1. Limitation with Array Arithmetic
2. Array Broadcasting
3. Broadcasting in NumPy
4. Limitations of Broadcasting


Limitation with Array Arithmetic

You can perform arithmetic directly on NumPy arrays, such as addition and subtraction.

For example, two arrays can be added together to create a new array where the values at each index are added together.

For example, an array a can be defined as [1, 2, 3] and an array b can be defined as [1, 2, 3], and adding them together will result in a new array with the values [2, 4, 6].
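This element-wise addition can be sketched in NumPy as follows, using the array values from the text above:

```python
from numpy import array

# define two one-dimensional arrays with the same size
a = array([1, 2, 3])
b = array([1, 2, 3])
print(a)
print(b)
# element-wise addition: each pair of values at the same index is added
c = a + b
print(c)
```

Running this prints the two arrays followed by their sum, [2 4 6].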

Strictly, arithmetic may only be performed on arrays that have the same number of dimensions and dimensions with the same size.

This means that a one-dimensional array of a given length can only perform arithmetic with another one-dimensional array of the same length.

This is quite a restrictive limitation indeed. Thankfully, NumPy provides a built-in workaround to allow arithmetic between arrays with differing sizes.

Array Broadcasting

Broadcasting is the name given to the method that NumPy uses to allow array arithmetic between arrays with a different shape or size.

Although the technique was developed for NumPy, it has also been adopted more broadly in other numerical computational libraries, such as Theano, TensorFlow, and Octave.

Broadcasting solves the problem of arithmetic between arrays of differing shapes by in effect replicating the smaller array along the last mismatched dimension.

The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes.

NumPy does not actually duplicate the smaller array; instead, it makes memory and computationally efficient use of existing structures in memory that in effect achieve the same result.

The concept has also permeated linear algebra notation to simplify the explanation of simple operations.

In the context of deep learning, we also use some less conventional notation. We allow the addition of matrix and a vector, yielding another matrix: C = A + b, where Ci,j = Ai,j + bj. In other words, the vector b is added to each row of the matrix. This shorthand eliminates the need to define a matrix with b copied into each row before doing the addition. This implicit copying of b to many locations is called broadcasting.

— Page 34, Deep Learning, 2016.

Broadcasting in NumPy

We can make broadcasting concrete by looking at three examples in NumPy.

The examples in this section are not exhaustive, but instead are common to the types of broadcasting you may see or implement.

Scalar and One-Dimensional Array

A single value or scalar can be used in arithmetic with a one-dimensional array.

For example, we can imagine a one-dimensional array “a” with three values [a1, a2, a3] added to a scalar “b”.

The scalar will need to be broadcast across the one-dimensional array by duplicating its value 2 more times.

The two one-dimensional arrays can then be added directly.

The example below demonstrates this in NumPy.
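The original code listing is not reproduced here; a minimal sketch of such an example, with array values chosen for illustration, might look like:

```python
from numpy import array

# define a one-dimensional array
a = array([1, 2, 3])
print(a)
# define a scalar
b = 2
print(b)
# the scalar is broadcast across the array and added to each value
c = a + b
print(c)
```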

Running the example first prints the defined one-dimensional array, then the scalar, followed by the result where the scalar is added to each value in the array.

Scalar and Two-Dimensional Array

A scalar value can be used in arithmetic with a two-dimensional array.

For example, we can imagine a two-dimensional array “A” with 2 rows and 3 columns added to the scalar “b”.

The scalar will need to be broadcast across each row of the two-dimensional array by duplicating it 5 more times.

The two two-dimensional arrays can then be added directly.

The example below demonstrates this in NumPy.
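A minimal sketch of such an example is below, using a 2 x 3 array and the scalar value 2 mentioned in the text:

```python
from numpy import array

# define a two-dimensional array with 2 rows and 3 columns
A = array([[1, 2, 3],
           [1, 2, 3]])
print(A)
# define a scalar
b = 2
print(b)
# the scalar is broadcast across each value of the array and added
C = A + b
print(C)
```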

Running the example first prints the defined two-dimensional array, then the scalar, then the result of the addition with the value “2” added to each value in the array.

One-Dimensional and Two-Dimensional Arrays

A one-dimensional array can be used in arithmetic with a two-dimensional array.

For example, we can imagine a two-dimensional array “A” with 2 rows and 3 columns added to a one-dimensional array “b” with 3 values.

The one-dimensional array is broadcast across each row of the two-dimensional array by creating a second copy to result in a new two-dimensional array “B”.

The two two-dimensional arrays can then be added directly.

Below is a worked example in NumPy.
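A sketch of such an example is below; choosing b equal to each row of A makes the doubling effect described next easy to see:

```python
from numpy import array

# define a two-dimensional array with 2 rows and 3 columns
A = array([[1, 2, 3],
           [1, 2, 3]])
print(A)
# define a one-dimensional array with 3 values
b = array([1, 2, 3])
print(b)
# b is broadcast across each row of A and added
C = A + b
print(C)
```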

Running the example first prints the defined two-dimensional array, then the defined one-dimensional array, followed by the result C where in effect each value in the two-dimensional array is doubled.

Limitations of Broadcasting

Broadcasting is a handy shortcut that proves very useful in practice when working with NumPy arrays.

That being said, it does not work for all cases, and in fact imposes a strict rule that must be satisfied for broadcasting to be performed.

Arithmetic, including broadcasting, can only be performed when the shape of each dimension in the arrays is equal or one has the dimension size of 1. The dimensions are considered in reverse order, starting with the trailing dimension; for example, looking at columns before rows in a two-dimensional case.

This makes more sense when we consider that NumPy will in effect pad missing dimensions with a size of “1” when comparing arrays.

Therefore, the comparison between a two-dimensional array “A” with 2 rows and 3 columns and a vector “b” with 3 elements:

A.shape = (2 x 3)
b.shape = (3)

In effect, this becomes a comparison between:

A.shape = (2 x 3)
b.shape = (1 x 3)

The same notion applies to the comparison between a two-dimensional array and a scalar, which is treated as an array with the required number of dimensions:

A.shape = (2 x 3)
b.shape = (1)

This becomes a comparison between:

A.shape = (2 x 3)
b.shape = (1 x 1)
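These shape comparisons can also be checked programmatically. As a side note, recent versions of NumPy (1.20 and later) provide the numpy.broadcast_shapes function, which applies exactly this rule:

```python
import numpy as np

# a 2 x 3 array and a 3-element vector broadcast to shape (2, 3)
print(np.broadcast_shapes((2, 3), (3,)))

# a scalar (a zero-dimensional array) broadcasts against any shape
print(np.broadcast_shapes((2, 3), ()))
```

If the shapes are incompatible, the function raises a ValueError instead of returning a shape.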

When the comparison fails, the broadcast cannot be performed, and an error is raised.

The example below attempts to broadcast a two-element array to a 2 x 3 array. This comparison is in effect:

A.shape = (2 x 3)
b.shape = (2)

We can see that the last dimensions (columns) do not match and we would expect the broadcast to fail.

The example below demonstrates this in NumPy.
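A sketch of such a failing case is below; the try/except is added here only so the script prints the error message rather than halting:

```python
from numpy import array

# define a 2 x 3 two-dimensional array
A = array([[1, 2, 3],
           [1, 2, 3]])
print(A.shape)
# define a two-element one-dimensional array
b = array([1, 2])
print(b.shape)
# the trailing dimensions (3 vs 2) do not match, so broadcasting fails
try:
    C = A + b
except ValueError as e:
    print(e)
```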

Running the example first prints the shapes of the arrays then raises an error when attempting to broadcast, as we expected.

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

• Create three new and different examples of broadcasting with NumPy arrays.
• Implement your own broadcasting function for manually broadcasting in one and two-dimensional cases.
• Benchmark NumPy broadcasting and your own custom broadcasting functions with one and two dimensional cases with very large arrays.

If you explore any of these extensions, I’d love to know.


Summary

In this tutorial, you discovered the concept of array broadcasting and how to implement it in NumPy.

Specifically, you learned:

• The problem of arithmetic with arrays with different sizes.
• The solution of broadcasting and common examples in one and two dimensions.
• The rule of array broadcasting and when broadcasting fails.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
