A Gentle Introduction to Broadcasting with NumPy Arrays
Arrays with different sizes cannot be added, subtracted, or generally used in arithmetic.
A way to overcome this is to duplicate the smaller array so that it has the same dimensionality and size as the larger array. This is called array broadcasting and is available in NumPy when performing array arithmetic, where it can greatly reduce and simplify your code.
In this tutorial, you will discover the concept of array broadcasting and how to implement it in NumPy.
After completing this tutorial, you will know:
- The problem of arithmetic with arrays with different sizes.
- The solution of broadcasting and common examples in one and two dimensions.
- The rule of array broadcasting and when broadcasting fails.
Let’s get started.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
- Limitation with Array Arithmetic
- Array Broadcasting
- Broadcasting in NumPy
- Limitations of Broadcasting
Need help with Linear Algebra for Machine Learning?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Limitation with Array Arithmetic
You can perform arithmetic directly on NumPy arrays, such as addition and subtraction.
For example, two arrays can be added together to create a new array where the values at each index are added together.
For example, an array a can be defined as [1, 2, 3] and an array b defined as [1, 2, 3]; adding them together results in a new array with the values [2, 4, 6].
a = [1, 2, 3]
b = [1, 2, 3]
c = a + b
c = [1 + 1, 2 + 2, 3 + 3]
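Note that the listing above is pseudocode; adding two plain Python lists with + would concatenate them instead. A minimal NumPy version of the same element-wise addition:

```python
from numpy import array

# element-wise addition of two equal-length arrays
a = array([1, 2, 3])
b = array([1, 2, 3])
c = a + b
print(c)  # [2 4 6]
```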
Strictly, arithmetic may only be performed on arrays that have the same number of dimensions and the same size in each dimension.
This means that a one-dimensional array with the length of 10 can only perform arithmetic with another one-dimensional array with the length 10.
This limitation on array arithmetic is quite limiting indeed. Thankfully, NumPy provides a built-in workaround to allow arithmetic between arrays with differing sizes.
Array Broadcasting
Broadcasting is the name given to the method that NumPy uses to allow array arithmetic between arrays with a different shape or size.
Although the technique was developed for NumPy, it has also been adopted more broadly in other numerical computational libraries, such as Theano, TensorFlow, and Octave.
Broadcasting solves the problem of arithmetic between arrays of differing shapes by in effect replicating the smaller array along the last mismatched dimension.
The term broadcasting describes how numpy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes.
NumPy does not actually duplicate the smaller array; instead, it makes memory and computationally efficient use of existing structures in memory that in effect achieve the same result.
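One way to see this memory efficiency is with NumPy's broadcast_to() function, which returns a read-only view that repeats the smaller array without copying it. The sketch below inspects the strides of such a view; a stride of 0 on the first axis means every "row" reads the same underlying memory:

```python
import numpy as np

b = np.array([1, 2, 3])
# create a virtual 2 x 3 view of b; no data is copied
B = np.broadcast_to(b, (2, 3))
print(B)
# stride 0 on the first axis: each row reuses the same memory
print(B.strides[0])
# the view shares memory with the original array
print(np.shares_memory(B, b))
```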
The concept has also permeated linear algebra notation to simplify the explanation of simple operations.
In the context of deep learning, we also use some less conventional notation. We allow the addition of matrix and a vector, yielding another matrix: C = A + b, where Ci,j = Ai,j + bj. In other words, the vector b is added to each row of the matrix. This shorthand eliminates the need to define a matrix with b copied into each row before doing the addition. This implicit copying of b to many locations is called broadcasting.
Broadcasting in NumPy
We can make broadcasting concrete by looking at three examples in NumPy.
The examples in this section are not exhaustive, but instead are common to the types of broadcasting you may see or implement.
Scalar and One-Dimensional Array
A single value or scalar can be used in arithmetic with a one-dimensional array.
For example, we can imagine a one-dimensional array “a” with three values [a1, a2, a3] added to a scalar “b”.
a = [a1, a2, a3]
b
The scalar will need to be broadcast across the one-dimensional array by duplicating its value 2 more times.
b = [b1, b2, b3]
The two one-dimensional arrays can then be added directly.
c = a + b
c = [a1 + b1, a2 + b2, a3 + b3]
The example below demonstrates this in NumPy.
# scalar and one-dimensional
from numpy import array
a = array([1, 2, 3])
print(a)
b = 2
print(b)
c = a + b
print(c)
Running the example first prints the defined one-dimensional array, then the scalar, followed by the result where the scalar is added to each value in the array.
[1 2 3]
2
[3 4 5]
Scalar and Two-Dimensional Array
A scalar value can be used in arithmetic with a two-dimensional array.
For example, we can imagine a two-dimensional array “A” with 2 rows and 3 columns added to the scalar “b”.
A = (a11, a12, a13
     a21, a22, a23)
b
The scalar will need to be broadcast across each row of the two-dimensional array by duplicating it 5 more times.
B = (b11, b12, b13
     b21, b22, b23)
The two two-dimensional arrays can then be added directly.
C = A + B
C = (a11 + b11, a12 + b12, a13 + b13
     a21 + b21, a22 + b22, a23 + b23)
The example below demonstrates this in NumPy.
# scalar and two-dimensional
from numpy import array
A = array([[1, 2, 3], [1, 2, 3]])
print(A)
b = 2
print(b)
C = A + b
print(C)
Running the example first prints the defined two-dimensional array, then the scalar, then the result of the addition with the value “2” added to each value in the array.
[[1 2 3]
 [1 2 3]]
2
[[3 4 5]
 [3 4 5]]
One-Dimensional and Two-Dimensional Arrays
A one-dimensional array can be used in arithmetic with a two-dimensional array.
For example, we can imagine a two-dimensional array “A” with 2 rows and 3 columns added to a one-dimensional array “b” with 3 values.
A = (a11, a12, a13
     a21, a22, a23)
b = (b1, b2, b3)
The one-dimensional array is broadcast across each row of the two-dimensional array by creating a second copy to result in a new two-dimensional array “B”.
B = (b11, b12, b13
     b21, b22, b23)
The two two-dimensional arrays can then be added directly.
C = A + B
C = (a11 + b11, a12 + b12, a13 + b13
     a21 + b21, a22 + b22, a23 + b23)
Below is a worked example in NumPy.
# one-dimensional and two-dimensional
from numpy import array
A = array([[1, 2, 3], [1, 2, 3]])
print(A)
b = array([1, 2, 3])
print(b)
C = A + b
print(C)
Running the example first prints the defined two-dimensional array, then the defined one-dimensional array, followed by the result C where in effect each value in the two-dimensional array is doubled.
[[1 2 3]
 [1 2 3]]
[1 2 3]
[[2 4 6]
 [2 4 6]]
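The implicit broadcast above is equivalent to explicitly duplicating b with the tile() function; a small sketch to confirm the two approaches match:

```python
from numpy import array, tile

A = array([[1, 2, 3], [1, 2, 3]])
b = array([1, 2, 3])
# explicitly stack 2 copies of b into a 2 x 3 array
B = tile(b, (2, 1))
# broadcasting A + b gives the same result as the explicit A + B
print((A + b == A + B).all())  # True
```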
Limitations of Broadcasting
Broadcasting is a handy shortcut that proves very useful in practice when working with NumPy arrays.
That being said, it does not work for all cases, and in fact imposes a strict rule that must be satisfied for broadcasting to be performed.
Arithmetic, including broadcasting, can only be performed when the size of each dimension in the arrays is equal or one of them has a dimension size of 1. The dimensions are considered in reverse order, starting with the trailing dimension; for example, looking at columns before rows in a two-dimensional case.
This makes more sense when we consider that NumPy will in effect pad missing dimensions with a size of “1” when comparing arrays.
Therefore, the comparison between a two-dimensional array “A” with 2 rows and 3 columns and a vector “b” with 3 elements:
A.shape = (2 x 3)
b.shape = (3)
In effect, this becomes a comparison between:
A.shape = (2 x 3)
b.shape = (1 x 3)
The same notion applies to a scalar, which is treated as an array padded out to the required number of dimensions:
A.shape = (2 x 3)
b.shape = (1)
This becomes a comparison between:
A.shape = (2 x 3)
b.shape = (1 x 1)
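The pairwise rule can be sketched as a small helper function (a hypothetical broadcastable() checker for illustration, not part of NumPy) that compares two shapes from the trailing dimension backward:

```python
def broadcastable(shape_a, shape_b):
    # compare dimensions from the trailing end; any dimension missing
    # from the shorter shape is implicitly padded with a size of 1
    for m, n in zip(reversed(shape_a), reversed(shape_b)):
        if m != n and m != 1 and n != 1:
            return False
    return True

print(broadcastable((2, 3), (3,)))  # True
print(broadcastable((2, 3), (1,)))  # True
print(broadcastable((2, 3), (2,)))  # False
```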
When the comparison fails, the broadcast cannot be performed, and an error is raised.
The example below attempts to broadcast a two-element array to a 2 x 3 array. This comparison is in effect:
A.shape = (2 x 3)
b.shape = (1 x 2)
We can see that the last dimensions (columns) do not match and we would expect the broadcast to fail.
The example below demonstrates this in NumPy.
# broadcasting error
from numpy import array
A = array([[1, 2, 3], [1, 2, 3]])
print(A.shape)
b = array([1, 2])
print(b.shape)
C = A + b
print(C)
Running the example first prints the shapes of the arrays then raises an error when attempting to broadcast, as we expected.
(2, 3)
(2,)
ValueError: operands could not be broadcast together with shapes (2,3) (2,)
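One common fix, assuming the intent is to add one value per row, is to reshape b so it gains a trailing dimension of size 1, making the shapes (2, 3) and (2, 1) compatible under the rule above:

```python
from numpy import array

A = array([[1, 2, 3], [1, 2, 3]])
# reshape b from (2,) to (2, 1) so it broadcasts across the columns
b = array([1, 2]).reshape((2, 1))
C = A + b
print(C)
# [[2 3 4]
#  [3 4 5]]
```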
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Create three new and different examples of broadcasting with NumPy arrays.
- Implement your own broadcasting function for manually broadcasting in one- and two-dimensional cases.
- Benchmark NumPy broadcasting and your own custom broadcasting function with one- and two-dimensional cases of very large arrays.
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Books
Articles
Summary
In this tutorial, you discovered the concept of array broadcasting and how to implement it in NumPy.
Specifically, you learned:
- The problem of arithmetic with arrays with different sizes.
- The solution of broadcasting and common examples in one and two dimensions.
- The rule of array broadcasting and when broadcasting fails.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Get a Handle on Linear Algebra for Machine Learning!
Develop a working understanding of linear algebra
…by writing lines of code in Python
It provides self-study tutorials on topics like:
Vector Norms, Matrix Multiplication, Tensors, Eigendecomposition, SVD, PCA and much more…
Finally Understand the Mathematics of Data
Skip the Academics. Just Results.