Neural Networks and Deep Learning: Python Basics with Numpy (optional) Homework

Python Basics with Numpy (optional assignment)

Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you’ve used Python before, this will help familiarize you with functions we’ll need.

Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the "# GRADED FUNCTION: [function name]" comment in some cells. Your work will not be graded if you change it. Each cell containing that comment should contain only one function.
- After coding your function, run the cell right below it to check if your result is correct.

After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of “broadcasting”
- Be able to vectorize code

Let’s get started!

About iPython Notebooks

iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing “SHIFT”+”ENTER” or by clicking on “Run Cell” (denoted by a play symbol) in the upper bar of the notebook.

We will often specify “(≈ X lines of code)” in the comments to tell you about how much code you need to write. It is just a rough estimate, so don’t feel bad if your code is longer or shorter.

Exercise: Set test to "Hello World" in the cell below to print “Hello World” and run the two cells below.

### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print("test: " + test)
test: Hello World

Expected output:
test: Hello World


What you need to remember:
- Run your cells using SHIFT+ENTER (or “Run cell”)
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas

1 - Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 - sigmoid function, np.exp()

Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.

Reminder:
$\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.

To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().

# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + math.exp(-x))
    ### END CODE HERE ###
    ### END CODE HERE ###

    return s
basic_sigmoid(3)
0.9525741268224334

Expected Output:

**basic_sigmoid(3)**: 0.9525741268224334

Actually, we rarely use the “math” library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.

### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-26-2e11097d6860> in <module>()
      1 ### One reason why we use "numpy" instead of "math" in Deep Learning ###
      2 x = [1, 2, 3]
----> 3 basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.


<ipython-input-24-f2ee07699056> in basic_sigmoid(x)
     16     ### START CODE HERE ### (≈ 1 line of code)
     17     s = None
---> 18     s=1/(1+math.exp(-x))
     19     ### END CODE HERE ###
     20 


TypeError: bad operand type for unary -: 'list'

In fact, if $x = (x_1, x_2, ..., x_n)$ is a row vector then np.exp(x) will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$

import numpy as np

# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
[  2.71828183   7.3890561   20.08553692]

Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.

# example of vector operation
x = np.array([1, 2, 3])
print(x + 3)
[4 5 6]

Any time you need more info on a numpy function, we encourage you to look at the official documentation.

You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.

Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices…) are called numpy arrays. You don’t need to know more for now.

For $x \in \mathbb{R}^n$:

$$\mathrm{sigmoid}(x) = \mathrm{sigmoid}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ \vdots \\ \frac{1}{1+e^{-x_n}} \end{pmatrix} \tag{1}$$

# GRADED FUNCTION: sigmoid

import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-x))
    ### END CODE HERE ###
    ### END CODE HERE ###

    return s
x = np.array([1, 2, 3])
sigmoid(x)
array([ 0.73105858,  0.88079708,  0.95257413])

Expected Output:

**sigmoid([1,2,3])**: array([ 0.73105858, 0.88079708, 0.95257413])
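
Because np.exp works elementwise, the same sigmoid function applies unchanged to a 2D array. A quick illustration (this matrix example is our addition, not part of the original notebook; the input values are arbitrary):

# sigmoid applies elementwise to a matrix: each entry goes through 1/(1+e^(-x))
X = np.array([[0.0, 2.0],
              [-1.0, 3.0]])
print(sigmoid(X))
[[ 0.5         0.88079708]
 [ 0.26894142  0.95257413]]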

1.2 - Sigmoid gradient

As you’ve seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let’s code your first gradient function.

Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

$$\mathrm{sigmoid\_derivative}(x) = \sigma'(x) = \sigma(x)\,(1 - \sigma(x)) \tag{2}$$

You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1 - s)$
# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    s = sigmoid(x)
    ds = s * (1 - s)
    ### END CODE HERE ###
    ### END CODE HERE ###

    return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
sigmoid_derivative(x) = [ 0.19661193  0.10499359  0.04517666]

Expected Output:

**sigmoid_derivative([1,2,3])**: [ 0.19661193 0.10499359 0.04517666]
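
As a quick sanity check (our addition, not part of the graded notebook), you can compare the analytic gradient s(1-s) against a centered finite-difference approximation of the sigmoid's slope:

# (sigmoid(x+eps) - sigmoid(x-eps)) / (2*eps) approximates the true derivative,
# so it should closely match sigmoid_derivative(x) if the formula is right
x = np.array([1.0, 2.0, 3.0])
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(numeric, sigmoid_derivative(x)))  # expected: True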

1.3 - Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(…) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape (length, height, depth = 3). However, when you read an image as the input of an algorithm you convert it to a vector of shape (length*height*3, 1). In other words, you "unroll", or reshape, the 3D array into a 1D vector.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:

v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    ### END CODE HERE ###
    ### END CODE HERE ###

    return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139,  0.29380381],
        [ 0.90714982,  0.52835647],
        [ 0.4215251 ,  0.45017551]],

       [[ 0.92814219,  0.96677647],
        [ 0.85304703,  0.52351845],
        [ 0.19981397,  0.27417313]],

       [[ 0.60659855,  0.00533165],
        [ 0.10820313,  0.49978937],
        [ 0.34144279,  0.94630077]]])

print ("image2vector(image) = " + str(image2vector(image)))
image2vector(image) = [[ 0.67826139]
 [ 0.29380381]
 [ 0.90714982]
 [ 0.52835647]
 [ 0.4215251 ]
 [ 0.45017551]
 [ 0.92814219]
 [ 0.96677647]
 [ 0.85304703]
 [ 0.52351845]
 [ 0.19981397]
 [ 0.27417313]
 [ 0.60659855]
 [ 0.00533165]
 [ 0.10820313]
 [ 0.49978937]
 [ 0.34144279]
 [ 0.94630077]]

Expected Output:

**image2vector(image)**: [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]
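
As an aside (our addition, not part of the original notebook), numpy can infer one axis for you: reshape(-1, 1) produces the same unrolled column vector without multiplying the shape entries by hand.

# -1 tells reshape to infer that axis from the total number of elements,
# so this matches image2vector for any (length, height, depth) input
v_alt = image.reshape(-1, 1)
print(v_alt.shape)                                 # (18, 1) for the 3x3x2 example above
print(np.array_equal(v_alt, image2vector(image)))  # True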

1.4 - Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $\frac{x}{\|x\|}$ (dividing each row vector of x by its norm).
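
A minimal sketch of what this row normalization looks like in numpy (our illustration based on the description above, not the assignment's graded template): np.linalg.norm with axis=1 and keepdims=True computes one L2 norm per row, and broadcasting then divides each row by its own norm.

# divide each row of x by its L2 norm so that every row has unit length;
# keepdims=True makes x_norm a column of shape (n, 1), which broadcasts
# against x row by row
def normalize_rows(x):
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
    return x / x_norm

x = np.array([[0.0, 3.0, 4.0],
              [1.0, 6.0, 4.0]])
print(normalize_rows(x))
[[ 0.          0.6         0.8       ]
 [ 0.13736056  0.82416338  0.54944226]]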
