The Complete, Interesting, and Convoluted History of Neural Networks

Everything About DL

We will be looking at the history of neural networks. While going through the various sources, the history of neural networks piqued my interest, and I became engrossed; researching the topic turned out to be genuinely rewarding. Below is the table of contents. Feel free to skip to the topic that fascinates you the most.

Table of Contents:

  1. Introduction
  2. The initiation of thoughts and ideas
  3. The golden era
  4. The advancements
  5. The failure and disbandment of neural networks
  6. The re-emergence and complete domination of neural networks
  7. Conclusion

Introduction:

Neural networks and artificial intelligence have been popular topics since the past century. The prevalence of world-conquering AI robots in pop-culture movies has undeniably intrigued a lot of curious minds. Neural networks draw their inspiration from biological neurons: they are a biologically inspired programming paradigm that enables deep learning models to learn and train effectively on complex observational datasets. Over the past century, neural networks have been through several phases. They went from being a strong prospect for solving complex computational problems, to being ridiculed as a merely theoretical idea, to finally becoming prominent as the path to a more desirable future. Let us revisit each stage in the history of neural networks in chronological order.

Note: This is the first part of the series titled “Everything About Deep Learning.” In this series, we will try to cover every relevant fact, algorithm, and activation function, as well as what the future holds for artificial neural networks and deep learning. Today, we start with the complete history of neural networks. In the next part, we will cover the basics that enable the functioning of neurons, and in subsequent parts, we will cover all the concepts related to deep learning.

Photo by Pixabay on Pexels

The Initiation of Thoughts and Ideas:

Biologists, neurologists, and other researchers have been studying the functionality of neurons for well over a century. In 1890, the American philosopher William James proposed an insightful theory that foreshadowed the subsequent work of many researchers. The hypothesis, in simple terms, states that the activity at any given point in the brain cortex is the sum of the tendencies of all the activity discharged into it. Put briefly, this means that the excitement of one neuron excites other neurons in turn until the signal successfully reaches its target.

The credit for developing the first mathematical model of a single neuron goes to McCulloch and Pitts in 1943. The neuron model they constructed was comprehensive and far-reaching; it has since been modified and remains widely used even in the modern era. This moment brought about a colossal shift in the minds of researchers and practitioners of neural networks: that a mathematical model could mimic the functioning of a neuron in the human brain was flabbergasting to most biologists. Both the bandwagon of support for AI and the concerns over AI taking over the world started from this moment onwards.
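The McCulloch-Pitts unit can be sketched in a few lines: binary inputs are summed and compared against a fixed threshold, and different thresholds realize different logic gates. The function names and threshold values below are illustrative assumptions, not taken from the 1943 paper:

```python
# A minimal sketch of a McCulloch-Pitts threshold unit, assuming binary
# inputs and a fixed integer threshold (values here are illustrative).

def mcculloch_pitts(inputs, threshold):
    """Fire (return 1) when the number of active inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 yields logical AND,
# and a threshold of 1 yields logical OR.
def and_gate(a, b):
    return mcculloch_pitts([a, b], threshold=2)

def or_gate(a, b):
    return mcculloch_pitts([a, b], threshold=1)
```

Despite its simplicity, composing such units is enough to build arbitrary Boolean circuits, which is what made the model so far-reaching.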

We will look at each of these concepts in further detail over the next tutorials in the series, which will also build up every concept of neural networks from scratch, including the working of a single neuron.

The Golden Era:

Over the next two decades, from 1949 to 1969, a wide array of experiments were performed, and existing methodologies saw massive development and expansion. It would not be wrong to call this period the golden era of neural networks. The era started with a bang thanks to the Hebbian theory introduced by Donald Hebb in his book “The Organization of Behavior.” In simple terms, Hebbian theory states that the conductance across a particular synapse increases with the repeated activation of one neuron by another.
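Hebb's idea is often summarized as "cells that fire together, wire together": a weight grows in proportion to the joint activity of the neurons it connects. A minimal sketch of one common formulation follows; the learning rate and activity values are illustrative choices, not from Hebb's book:

```python
# A minimal sketch of a plain Hebbian weight update: the change in the
# weight is proportional to the product of pre- and post-synaptic
# activity. The learning rate of 0.1 is an illustrative choice.

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: w <- w + lr * pre * post."""
    return w + lr * pre * post

w = 0.0
for _ in range(5):  # repeated co-activation strengthens the connection
    w = hebbian_update(w, pre=1.0, post=1.0)
# after five co-activations, w has grown from 0.0 to roughly 0.5
```

Note that if either neuron is silent (activity 0), the weight does not change, which captures the "repeated activation" condition in Hebb's statement.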

During this phase, there were several evolutions in salient topics such as learning filters, gradient descent, neurodynamics, and the triggering and propagation of large-scale brain activity. There was extensive research into the synchronous activation of multiple neurons to represent individual bits of information, and information theory, built on the principles of Shannon's entropy, became an important research area. The most significant invention, however, was Rosenblatt's perceptron model in 1958.

The perceptron model is one of the most substantial discoveries in neural networks, and the error back-propagating correction procedures that Rosenblatt introduced were useful for training multi-layered networks. Thanks to the extensive research and continuous developments, this era was candidly the golden era for neural networks. Taylor constructed a winner-take-all circuit with inhibition among the output units, and other progressions of the perceptron model were also accomplished.
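Rosenblatt's perceptron learns by nudging its weights toward each misclassified example. The sketch below trains a single perceptron on logical OR, a linearly separable problem; the learning rate, epoch count, and function names are illustrative assumptions:

```python
# A minimal sketch of the perceptron learning rule on logical OR.
# Weights start at zero and are corrected whenever a prediction is wrong.

def predict(w, b, x):
    """Linear threshold unit: fire when the weighted sum plus bias is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, epochs=10, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(w, b, x)  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(or_data)
# the trained unit classifies all four OR cases correctly
```

For linearly separable data like this, the perceptron convergence theorem guarantees the loop reaches a correct setting in finitely many updates.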

The Advancements:

Many topics were researched and investigated during the 1970s to 1990s, but unfortunately the developments were of little avail. There was research into combining many neurons into networks that would be more powerful than a single neuron and could perform complex computations. Since gradient descent alone did not succeed in obtaining the desired solutions to complex tasks, the development of other probabilistic and stochastic mathematical methods became necessary. Further theoretical results and analyses were also established during this time frame.

Boltzmann machines and hybrid systems for complex computational problems were also developed successfully during this period, and Boltzmann machines addressed some of these mathematical difficulties. Solutions to various other drawbacks could not be achieved due to hardware and software limitations. Nonetheless, a significant amount of successful research was conducted during this period, and updates and improvements to existing studies were established.

However, despite these advancements, nothing proved crucial or fruitful enough for the development of neural networks, and the burgeoning demand for artificial neural networks no longer existed. One of the significant reasons for this was the demonstration of the limitations of the simple perceptron. Minsky and Papert conducted this demonstration in 1969, showcasing the flaws of the simple perceptron and proving theoretically that the model was not computationally universal. The moment became infamous as a black day for neural networks: funding support for research in the field dropped drastically, and this kickstarted the fall of neural networks.
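The canonical illustration of this limitation is XOR: no single linear threshold unit can compute it, since no straight line separates the inputs mapping to 1 from those mapping to 0. Minsky and Papert's argument is analytical; the brute-force search below is only an illustrative stand-in that checks a coarse grid of weights and finds no solution:

```python
# Illustrative check that no linear threshold unit on a coarse weight grid
# computes XOR. (Analytically: the constraints w1+b>0 and w2+b>0 sum to
# w1+w2+2b>0, while b<=0 and w1+w2+b<=0 sum to w1+w2+2b<=0, a contradiction.)

def fits_xor(w1, w2, b):
    """Return True if a threshold unit with these parameters computes XOR."""
    step = lambda z: 1 if z > 0 else 0
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in cases)

grid = [i / 2 for i in range(-8, 9)]  # weights and bias in [-4, 4]
found = any(fits_xor(w1, w2, b) for w1 in grid for w2 in grid for b in grid)
# found stays False: no setting on the grid realizes XOR
```

Adding a hidden layer removes the restriction, which is why multi-layer networks eventually resolved this objection.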

The Failure and Disbandment of Neural Networks:

The hype for artificial neural networks was at an all-time peak during this time, but in due course all of it vanished. Artificial intelligence being the next big thing was no longer the talking point for intellectuals, and artificial neural networks and deep learning were ridiculed as merely a theoretical concept. The main reasons for this were the lack of data and of advanced technologies.

At the time, there were not enough resources for the computation of complex tasks like image segmentation, image classification, face recognition, natural-language-processing-based chatbots, and so on. The data available was quite limited, and there was not enough of it for a complex neural network architecture to provide the desired results. Even with the required data, computing that amount of data with the resources available at the time would have been an incredibly challenging task.

There were signs of optimism, such as successes in reinforcement learning and other smaller positives. Unfortunately, these were not good enough to rebuild the massive hype the field once had. Thanks to researchers and scientists with extraordinary vision, developments in the field of artificial neural networks continued. However, it would take another twenty years for artificial neural networks to regain their lost prestige and hype.

The Re-emergence and Complete Domination of Neural Networks:

The next two decades were dry for the state and popularity of deep learning. During this era, support vector machines (SVMs) and other similar machine learning algorithms were more dominant and were the methods of choice for solving complex tasks. These algorithms performed well on most datasets, but on bigger datasets their performance did not significantly improve: beyond a certain threshold it stagnated. Models that could keep learning and improving as the data grew therefore became important.

In 2012, a team led by George E. Dahl won the “Merck Molecular Activity Challenge” using multi-task deep neural networks to predict the biomolecular target of one drug. In 2014, Hochreiter’s group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products, and drugs and won the “Tox21 Data Challenge” of NIH, FDA, and NCATS. (Reference: Wiki)

A revolution started at this precise moment, and deep neural networks came to be considered game-changers. Deep learning and neural networks are now the salient techniques contemplated for any high-level competition, and convolutional neural networks, long short-term memory networks (LSTMs), and generative adversarial networks are exceedingly popular.

The reach of deep learning is rapidly increasing each day, especially with its vast ongoing improvements. It is exciting to see what the future holds for deep neural networks and artificial intelligence.

Conclusion:

Photo by Alexandre Debiève on Unsplash

The journey of neural networks is one to remember for the ages. Neural networks and deep learning went from being a fantastic prospect to becoming one of the best methods for solving almost any complex problem whatsoever. I am thrilled to see the advancements that will take place in the deep learning field, and I am delighted to be part of the generation that can contribute to this change.

Most of the viewers reading this article are probably fascinated as well. I will try to cover every topic, from the history of neural networks to the working and understanding of every deep learning algorithm and architecture, in the series titled “Everything About DL.” Let's stick together on this journey and conquer deep learning.

I hope all of you enjoyed reading this article. Have a wonderful day!

Translated from: https://towardsdatascience.com/the-complete-interesting-and-convoluted-history-of-neural-networks-2764a54e9e76
