A complete analysis of artificial intelligence (AI): 3 waves + 3 key technologies + 3 applications

Artificial intelligence (AI) refers to technology that artificially reproduces human intelligence. However, technology that reaches the same level as human intelligence does not yet exist; the vast majority of AI systems in the world today can only solve one specific problem. This article outlines the structure of the field based on several books I have read about AI. I hope this 3-3-3 framework (three waves, three key technologies, three applications) helps readers who are new to AI quickly understand what it is.

First, the three waves of AI


First AI wave

The first wave of AI began in the 1950s and 1960s and ended in the 1980s. Because it appeared before the Internet era, it is also called "classical artificial intelligence." The "symbolism" and "connectionism" that emerged during this period are the prototypes of today's "expert systems" and "deep learning." However, although the results of that era could solve puzzles or simple games, they were almost useless for practical problems.

Second AI wave

The second AI boom followed the spread of computers in the 1980s. Research during this period centered on the "expert system," which encodes expert knowledge as rules to help solve specific problems. However, even though there were commercial applications at the time, the scope of application was limited and the enthusiasm gradually subsided.

Third AI wave

The third wave of AI appeared in the 2010s. With the spread of high-performance computers, the Internet, big data, and sensors, and the decline in computing costs, "machine learning" took off. Machine learning refers to letting a computer learn from large amounts of data so that it can recognize sounds and images like a human, or make appropriate judgments about problems.

Second, the three major technologies of AI

After this quick look at the history of AI, let's examine three representative technologies of contemporary artificial intelligence: genetic algorithms, expert systems, and neural networks.

1. Genetic algorithm

The genetic algorithm (GA), also known as an evolutionary algorithm, is an artificial intelligence technique inspired by Darwin's theory of evolution. Following the rule of "survival of the fittest," it treats a fit individual as a good candidate answer and searches for the best solution through simulated evolution.
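To make the idea concrete, here is a minimal sketch of a genetic algorithm that evolves a bit string toward an all-ones "best answer." The fitness function, population size, and mutation rate are arbitrary choices for illustration, not something taken from this article.

```python
import random

# Minimal genetic algorithm sketch: evolve a bit string toward an all-ones target.
TARGET_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 100, 0.02

def fitness(individual):
    # "Survival of the fittest": more ones means a better "answer".
    return sum(individual)

def crossover(a, b):
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Keep the fitter half as parents, then refill the population with offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```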

2. Expert system

An expert system handles pre-defined problems by preparing a large number of corresponding rules and countermeasures in advance. It is used in many fields, most notably disease diagnosis. However, an expert system can only respond to situations its experts anticipated; it cannot learn on its own, so it has clear limitations.
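As a toy illustration of the rule-based idea (the symptoms and conclusions below are invented, not a real diagnostic knowledge base):

```python
# Toy expert system: hand-written IF-THEN rules mapping symptoms to a suggestion.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"rash", "itching"}, "possible allergy"),
]

def diagnose(observed_symptoms):
    observed = set(observed_symptoms)
    matches = [conclusion for condition, conclusion in RULES if condition <= observed]
    # The system can only answer what its rules anticipated; otherwise it gives up.
    return matches or ["no rule matches - outside the expert's prepared knowledge"]

print(diagnose(["fever", "cough", "headache"]))  # -> ['possible flu']
print(diagnose(["dizziness"]))                   # -> the fallback answer
```

The second call shows the limitation described above: anything outside the prepared rules simply cannot be answered.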

3. Neural networks

The machine learning of the third AI wave includes many learning methods, the most notable of which is deep learning. Deep learning is a method that imitates the "neural network" of the human brain to learn from large amounts of data.


The origin of neural networks

If you look inside the brain, you will find a large number of nerve cells called "neurons" connected to each other. When a neuron receives more than a certain amount of electrical signal from the neurons connected to it, it becomes excited (fires a nerve impulse); below that threshold, it does not fire.

An excited neuron sends an electrical signal to the next neuron it connects to, which in turn may or may not become excited. Simply put, interconnected neurons form a chain of signal transmission. By modeling this connected structure mathematically, we obtain a neural network.
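A minimal sketch of this threshold behavior, with arbitrary weights and an arbitrary threshold chosen purely for illustration:

```python
# Minimal model of a single neuron: fire only if the weighted input exceeds a threshold.
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # 1 = excited (fires), 0 = not excited

# Signals from three upstream neurons, arbitrary weights, threshold of 1.0.
print(neuron([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # fires: 0.6 + 0.5 >= 1.0
print(neuron([0, 1, 0], [0.6, 0.9, 0.5], threshold=1.0))  # does not fire: 0.9 < 1.0
```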


Artificial neural networks: deep learning

The modeled neural network is composed of three layers: an "input layer," a "hidden layer," and an "output layer." In addition, the training data consists of input data and the corresponding correct answers.

Taking image recognition as an example, to train a neural network model, each training image must first be broken down into pixel data, and each pixel value is then fed into the input layer.

The input layer receives the data; each pixel value is multiplied by a "weight" and passed to the neurons of the hidden layer behind it. Each hidden-layer neuron sums the values received from the previous layer, multiplies the result by its own "weights," and passes it on to the neurons in the next layer. Finally, the prediction result for the image is obtained from the output of the output-layer neurons.
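The forward pass described above can be sketched with a tiny fully connected network. The layer sizes, random weights, sigmoid activation, and "pixel" values below are placeholders rather than a real image model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fully connected network: 4 "pixel" inputs -> 3 hidden neurons -> 2 outputs.
W1 = rng.normal(size=(4, 3))   # weights between input layer and hidden layer
W2 = rng.normal(size=(3, 2))   # weights between hidden layer and output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(pixels):
    hidden = sigmoid(pixels @ W1)   # each hidden neuron sums its weighted inputs
    output = sigmoid(hidden @ W2)   # output layer produces the prediction
    return hidden, output

pixels = np.array([0.1, 0.8, 0.3, 0.5])   # made-up pixel values
_, prediction = forward(pixels)
print("prediction:", prediction)
```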

To make the values of the output layer match the correct answer corresponding to each input, an appropriate "weight" must be calculated for every neuron's input.

These weights are generally calculated with "error backpropagation," which takes the error between the output and the correct answer and works backward from the output layer. By adjusting each weight so that the error between the output values and the correct answer shrinks, the model completes its learning.
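Continuing with the same kind of toy network, one possible sketch of a backpropagation training loop looks like the following; the squared-error objective, sigmoid activation, learning rate, and target values are illustrative choices, not the article's own recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
LEARNING_RATE = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.1, 0.8, 0.3, 0.5])   # input pixels (made up)
target = np.array([1.0, 0.0])        # the "correct answer" for this input

for step in range(1000):
    # Forward pass.
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: push the output error back through the layers.
    output_error = (output - target) * output * (1 - output)
    hidden_error = (output_error @ W2.T) * hidden * (1 - hidden)

    # Adjust each weight to shrink the error between output and correct answer.
    W2 -= LEARNING_RATE * np.outer(hidden, output_error)
    W1 -= LEARNING_RATE * np.outer(x, hidden_error)

print("output after training:", output)   # moves toward the target [1.0, 0.0]
```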

Because the weights of multilayer neural networks were historically difficult to optimize, most researchers held a negative attitude toward neural network research. It was not until 2006, when Geoffrey Hinton developed the autoencoder approach, that this bottleneck was broken.

An autoencoder is a method that uses the same data at both the input layer and the output layer of a neural network, with a hidden layer between them, and adjusts the weights so the network reconstructs its input. After a multilayer network is initialized with the weights obtained by the autoencoder, the error backpropagation algorithm can then be applied to improve its learning accuracy.
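A rough sketch of the reconstruct-your-own-input idea, assuming the Keras API that ships with TensorFlow; the layer sizes and random data are placeholders, and the subsequent layer-wise initialization step is not shown:

```python
import numpy as np
import tensorflow as tf

# Autoencoder sketch: the network is trained to reproduce its own input,
# so the input data also serves as the "correct answer" at the output layer.
x = np.random.rand(1000, 32).astype("float32")   # made-up 32-dimensional samples

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(8, activation="relu"),      # hidden bottleneck layer
    tf.keras.layers.Dense(32, activation="sigmoid"),  # reconstruct the 32 inputs
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, verbose=0)   # note: input == target

reconstruction = autoencoder.predict(x[:1])
print("reconstruction error:", float(np.mean((reconstruction - x[:1]) ** 2)))
```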

Through neural networks, deep learning becomes a form of artificial intelligence in which, as long as data is fed into the network, it can extract the features by itself; this is also called "feature learning."

What deep learning does best is recognize data that is hard to express symbolically, such as images or waveforms. Since the 2010s, well-known American IT companies such as Google, Microsoft, and Facebook have been studying deep learning. For example, Apple's "Siri" uses speech recognition, Microsoft's search engine "Bing" offers image search, and Google has more than 1,500 projects that use deep learning.

As for the explosive growth of deep learning, it owes much to improvements in hardware. NVIDIA, the graphics processing unit (GPU) maker, leverages its graphics hardware to accelerate deep learning, provides libraries and frameworks, and actively runs seminars. In addition, Google has published the framework "TensorFlow," which applies deep learning to data analysis.

Third, the three major applications of AI

AI applications can be divided into three areas: speech recognition, image recognition, and natural language processing.

1. Speech recognition

In speech recognition, years of research driven by the CHiME competition (an international speech recognition challenge that evaluates recognition in real-life environments) have brought accuracy on par with human recognition. In addition, Apple, Google, and Amazon have launched services for daily life, so the technology has reached a practical level of maturity.

2. Image recognition

In image recognition, although the recognition of still images has reached a rate comparable to humans, the accuracy of recognizing moving images is still not on par with humans, and various algorithms are still being tested. The hottest application field for image recognition at present is none other than autonomous driving.

The entire automotive and information and communication industries are working toward self-driving. For example, Google continues its research on autonomous driving, and TOYOTA has established the Toyota Research Institute in the United States. Development at this stage is very close to practical use, so we can judge that the maturity of image recognition currently sits between the research and practical levels.

3. Natural language processing

Natural language processing (NLP) tries to let artificial intelligence understand the words and sentences spoken by humans. NLP first breaks text into its smallest meaningful units and tags parts of speech ("morphological analysis"), then analyzes the sentence structure ("syntactic analysis"), and finally grasps the meaning through "semantic analysis."
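A deliberately naive sketch of these three stages; the tiny dictionary and "grammar" are invented for this example and are nowhere near a real analyzer:

```python
# Naive illustration of the three NLP stages described above.
POS_DICT = {"the": "DET", "dog": "NOUN", "cat": "NOUN", "chased": "VERB", "a": "DET"}

def morphological_analysis(sentence):
    # Split into the smallest units and tag each with a part of speech.
    return [(word, POS_DICT.get(word, "UNKNOWN")) for word in sentence.lower().split()]

def syntactic_analysis(tagged):
    # Extremely simplified structure: find a subject, a verb, and an object.
    nouns = [w for w, pos in tagged if pos == "NOUN"]
    verbs = [w for w, pos in tagged if pos == "VERB"]
    return {"subject": nouns[0] if nouns else None,
            "verb": verbs[0] if verbs else None,
            "object": nouns[1] if len(nouns) > 1 else None}

def semantic_analysis(structure):
    # Turn the parsed structure into a crude "meaning" record.
    return f'{structure["subject"]} performs "{structure["verb"]}" on {structure["object"]}'

tagged = morphological_analysis("The dog chased a cat")
print(semantic_analysis(syntactic_analysis(tagged)))
```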

On the output side, natural language processing is also closely related to generative grammar. Generative grammar theory holds that as long as the rules are followed, sentences can be generated; this also means that by combining rules it is possible to generate an entire article.
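A toy rule-expansion sketch of that idea, with a grammar and vocabulary made up for illustration:

```python
import random

# Toy generative grammar: sentences are produced purely by expanding rules.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["engineer"], ["dataset"]],
    "V":  [["analyzes"], ["builds"], ["describes"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:            # terminal word: output it directly
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "the robot analyzes the dataset"
```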

In natural language processing, the most representative application is the chatbot, a program that can converse with people through text messages. In 2016, Facebook launched the "Facebook Messenger Platform" and LINE launched its "Messaging API," which pushed NLP-powered chatbots into the spotlight.

In addition, IBM's Watson is also an artificial intelligence that uses NLP. Watson can extract knowledge from Wikipedia and other corpora to learn the correlations between words. Even SoftBank's robot Pepper is now equipped with the Watson system.

However, because everyday conversation often omits words and phrases and does not necessarily spell out the context of time and place, current chatbots are still unable to hold open-ended dialogue with humans. Most chatbot vendors therefore still restrict the environment and application domain of the conversation.
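A minimal sketch of such a domain-restricted chatbot, using simple keyword matching with intents invented for this example:

```python
# Minimal domain-restricted chatbot: it only "understands" a few fixed intents,
# which mirrors how vendors limit the conversation's environment and topics.
INTENTS = {
    "opening hours": "We are open from 9:00 to 18:00 on weekdays.",
    "price": "The basic plan costs $10 per month.",
    "refund": "Refunds are possible within 30 days of purchase.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    # Outside the prepared domain, the bot can only fall back to a canned response.
    return "Sorry, I can only answer questions about opening hours, prices, and refunds."

print(reply("What are your opening hours?"))
print(reply("Tell me a joke"))
```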
