Robotics and Artificial Intelligence Technology

About Robotics and Artificial Intelligence

In this series we've discussed the differences between deep learning and machine learning, how and when the field of deep learning was officially born, and its rise to mainstream popularity. The focus of this topic will be on artificial neural networks, more specifically their structure. Consider an eagle and a fighter jet. While these two distinct entities both perform the same task, flight, the way they achieve it is quite different. The fighter jet is a highly specialized and engineered machine designed for a very specific task, and it executes that task extremely well. The eagle, a biological system, is arguably much more complex in certain ways and capable of a variety of more generalized tasks. This analogy draws many parallels to the difference between our brains and deep learning systems. While both are capable of pattern recognition, the brain is an extremely complex general system that can perform a huge variety of tasks, while deep learning systems are designed to excel at very specific tasks.

To better understand deep learning, and keeping in line with this analogy of flight, let's go back to the basics. Once the basic principles of any system are understood, it is much easier to understand the higher-level applications and capabilities of that system. As we've discussed in videos past, deep learning is derived from the field of connectionism, a tribe of machine learning in which the goal is to digitally reconstruct the brain. To digitally reconstruct the brain, we must first digitally reconstruct its simplest components: neurons. This is an artistic representation of a neuron, a multipolar neuron to be exact.

There are three primary components to a neuron:

1) The soma: this is the 'brain', in other words the information processing center of the neuron, comprising the cell body and nucleus.
2) The axon: this is the long tail of the neuron that transmits information away from the cell body.
3) The dendrites: these are branching arms of the neuron that connect to other neurons.

The brain has over 100 billion neurons with over 100 trillion synapses, with synapses being the connections between neurons.

If we think from an extremely reductionist perspective, we can consider the brain to be one gigantic neural network, one that is capable of so much, including much that we don't even know about yet! Hence, it makes sense why the connectionists are so adamant about trying to reconstruct the brain, to see what emergent properties come about. Now, taking a step back and going to individual neurons, this is one of our very first pictures of neurons, drawn in the late 19th century by the Spanish anatomist Santiago Ramon y Cajal. He used a stain that could be introduced to tissue and then used a microscope to draw what he saw. What you see here is what we've just discussed: cell bodies, long tails and dendrites connecting to one another.
Now let's flip this drawing upside down and abstractly map the components of the neuron to the right side. First, we have the soma, which we will represent with a circle; then the axon, represented by a long line coming out of the neuron; and finally the dendrites, represented by multiple lines leading into the neuron. As you can see, we are witnessing how the basic structure of a deep learning neural net came to be! To begin the discussion of how neurons work, you can consider the dendrites to be the inputs to our neuron. In the body, dendrites look for electrical activity at their ends, whether that be coming from other neurons, sensory input or other activity, and send those signals to the cell body.
The soma then takes these signals and begins to accumulate them, and once a certain signal threshold is crossed, the axon, the output of the system, is activated. Essentially, in a very simplistic way, the information processing in a neuron amounts to just adding things up. Based on that, one can correlate dendrite activity with the level of axonal activity: the more dendrites that are activated, and the more frequently they are, the more often the axon is activated.

So, now that we have an abstract understanding of the function of a neuron, let's add more to our system and begin forming a neural network. As stated earlier, the connection between neurons is referred to as a synapse; this is where the dendrites, the inputs of one neuron, are attached to the axon, the output of another. Going back to Ramon y Cajal's first drawing of a neuron, you can see he saw and drew little nubs on the dendrites. This is where the axons of other neurons connect to the dendrites of our current neuron. In terms of our abstracted drawing, we will represent this connection with a circular node. Now, axons can connect to dendrites strongly, weakly or anything in between. For now, we will use the size of the connection node to signify the connection strength, with connection strength being how strongly the activity of the input neuron is passed on to the output neuron's dendrite. We will also assign this connection strength a value between 0 and 1, with 1 being very strong and values approaching 0 being weak.

As you can see, things get interesting as we begin adding more neurons, since many different input neurons can connect to the dendrites of a single output neuron, each with a different connection strength. Let's now remove any unconnected dendrites, remove the nodes we used to represent the connection strength, and simply let the thickness of the line represent the weight of that connection. Flipping this diagram horizontally, we can see the beginnings of modern deep learning neural network architecture.

Since the start of this video, we went from our immensely complex brains, with trillions of connections and subtleties in operation and interconnectedness, to this simple-to-understand neural network model. Keep in mind our system here is just that, a model, and a very abstract one at that. Going from the brain to neural networks is a very reductionist process, and the true relationship between biological systems and neural networks is mostly metaphorical and inspirational. Our brains, with the limited understanding we have of them, are immensely complex, with trillions of connections and many different types of neurons and other tissues operating in parallel, and not just connected in adjacent layers like neural networks. Coming back on topic, no matter the terminology we use to describe these networks, it remains true that they are extremely useful in deriving representations from large amounts of data. And now that we have seen how the structure of these networks was developed, we can see how this representation is built layer by layer.
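To make this abstraction concrete, here is a minimal Python sketch of such a threshold neuron, in the spirit of the description above. The weights, inputs and threshold value are illustrative assumptions, not numbers from the video:

# A minimal sketch of the abstract neuron described above: dendrites become
# weighted inputs, the soma becomes a summation, and the axon "fires" once
# the accumulated signal crosses a threshold. All numbers are illustrative.

def neuron_output(inputs, weights, threshold=1.0):
    """Return 1 if the weighted sum of the inputs crosses the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))  # the soma accumulates the incoming signals
    return 1 if total >= threshold else 0                # the axon activates, or stays silent

# Three "dendrites" with connection strengths between 0 and 1:
weights = [0.9, 0.2, 0.7]                 # strong, weak and fairly strong connections
print(neuron_output([1, 1, 1], weights))  # 1 -> 0.9 + 0.2 + 0.7 = 1.8 crosses the threshold
print(neuron_output([0, 1, 0], weights))  # 0 -> only the weak connection is active (0.2)

The more inputs that are active, and the stronger their connections, the more likely the weighted sum is to cross the threshold, mirroring the relationship between dendrite activity and axonal activity described above.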

A way to think about output nodes is that they are the sum of the nodes that strongly activate them, that is, of the connections with the strongest weights. For example, let's say we have five input nodes representing the characters A, B, C, D and E, and that the output node is most strongly connected to A, C and E. In this case, the output node will be defined by the combination A-C-E (a small code sketch of this idea follows at the end of this section). Here you are witnessing the move from a low-level representation, individual letters, to higher levels of representation encompassing words and, if we kept going, sentences and so on. This simplistic example hints at the basis of natural language processing. Beyond letters, this methodology translates to any type of input: from the pixel values of an image for image recognition, to the audio frequencies of speech for speech recognition, to more complex, abstract inputs such as nutritional information and medical history to predict the likelihood of cancer, for instance.

Now, before we get ahead of ourselves and escalate to the higher-level predictive abilities of the more complex, abstract applications of deep learning systems, in the next set of videos in this series we'll go through a comprehensive example, which will introduce many new terms and concepts in an intuitive way to help you understand how neural networks work. However, this doesn't mean you have to wait to learn more! If you want to learn more about deep learning, and I mean really learn about the field, from how these artificial learning algorithms were inspired by the brain, to their foundational building block, the perceptron, to scaling up to multi-layer networks, to different types of networks such as convolutional networks, recurrent networks and much more, then Brilliant.org is the place for you to go! In a world where automation through algorithms will increasingly replace more jobs, it is up to us as individuals to keep our brains sharp and think of creative solutions to multidisciplinary problems, and Brilliant is a platform that allows you to do so. For instance, every day there is a daily challenge that can cover a wide variety of courses in the STEM domain. These challenges are crafted in such a way that they draw you in and then allow you to learn a new concept through their intuitive explanations. To support Singularity Prosperity and learn more, go to Techlinics.com
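As promised above, here is a minimal Python sketch of the letter example: five input nodes (A through E) feed an output node whose strongest connections are to A, C and E. The weights and threshold are made-up illustrative values; a real network would learn them from data.

# Connection strengths from each input letter to the output node (0 = no
# connection, 1 = very strong). A, C and E dominate this output node.
weights = {"A": 0.9, "B": 0.1, "C": 0.8, "D": 0.05, "E": 0.85}

def output_node_fires(active_letters, threshold=2.0):
    """Sum the weights of the active inputs and report whether the node activates."""
    total = sum(weights[letter] for letter in active_letters)
    return total >= threshold

print(output_node_fires({"A", "C", "E"}))  # True  -> 0.9 + 0.8 + 0.85 = 2.55
print(output_node_fires({"B", "D"}))       # False -> 0.1 + 0.05 = 0.15

Stacking such nodes in layers is how higher-level representations, letters into words and words into sentences, are built up, as described above.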
