This post was originally published by Rahul Dit at Medium [AI]
Sometimes I feel like human progress is measured by how much lazier we can get. We have all heard our parents say how they used to travel miles to get to school, crossing rivers, jumping over ravines full of hungry lions and mountains full of snow. But nowadays getting to school is as simple as hopping onto a bus or a car and sitting there while someone drives you, or driving yourself, and now self-driving cars can take you there without your even having to push a button. So you see what I mean: the lazier humans get, the more we progress. From crossing miles on foot to not even touching the steering wheel, human technology has evolved on the basis of the least amount of effort needed to complete a task.
But obviously, technology and computers have their own limitations. How do you expect a car to decide when to stop and when to keep driving, or how to make a decision on the road, when humans making those same decisions still find themselves in accidents? How do you expect a computer to be a perfect decision-maker when its makers are themselves imperfect?
This is where the idea of neural networks comes in. We may not be perfect ourselves, but we have an idea of what perfection is. Neural networks were first introduced in 1943, when Warren McCulloch and Walter Pitts created the first computational model for neural networks. Rosenblatt created the perceptron in 1958. The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965. The basics of continuous backpropagation were derived in the context of control theory by Kelley in 1960 and by Bryson in 1961, using principles of dynamic programming.
So, what are neural networks? Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives a signal, processes it, and can signal the neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. These connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing intermediate layers multiple times.
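To make that description concrete, here is a minimal sketch of a single artificial neuron and a tiny two-layer network built from it. All weights, biases, and names here are made up purely for illustration; real networks have far more neurons and learn their weights from data rather than having them hand-picked.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the incoming signals, plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Fire (output 1) only if the aggregate signal crosses the threshold.
    return 1 if total > threshold else 0

def tiny_network(inputs):
    # Two hidden neurons feed one output neuron: signals travel
    # from the input layer, through the hidden layer, to the output.
    h1 = neuron(inputs, [0.5, 0.5], -0.2)
    h2 = neuron(inputs, [-0.5, 0.5], 0.1)
    return neuron([h1, h2], [1.0, 1.0], -0.5)
```

The step function used here is the simplest possible non-linearity; practical networks usually use smooth functions like the sigmoid or ReLU so that the weights can be adjusted gradually during learning.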
As I mentioned earlier, humans may not be perfect ourselves, but we have an idea of what perfection looks like. We know from our own experience what kinds of decisions lead to a positive outcome in certain scenarios, so with the data we already have, we can train neural networks to make more and more accurate decisions: tell them what certain conditions mean and how to react in those scenarios. This process is known as supervised learning.
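Supervised learning can be sketched with the classic perceptron rule: we hand the network labeled examples (the "right answers" we already know) and nudge the weights toward fewer mistakes. The toy task below, learning a logical AND, is purely illustrative.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, adjusted as learning proceeds
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # how wrong were we on this example?
            # Nudge weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled data: the "right decision" for each scenario (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the learned weights reproduce the labels they were shown; that is the whole idea of supervised learning, just at a toy scale.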
Now enough about what Neural Networks are, let’s get to the actual point. How does Neural Networks make us even lazier? Imagine, you want to find a friend of yours in a huge crowd. Now you could simply scan the crowd for the familiar face of your friend, or you could shout like a maniac till your friend actually hears your and spots you, or let us try the lazy approach why not train a neural net to identify your friend from a bunch of other people. As I mentioned earlier, you will feed your model a bunch of pictures that signify how your friend looks like in different conditions. The neural network will understand the different features and learn them and as you feed it data it will identify the pictures in which your friend is and let you know. Now obviously taking pictures of random crowds is definitely a very tedious process and some might even tag you as a creep, but how about you use security cameras and use their live feed? Neural Networks can do that too. They process the real-time data, break it down and then in a similar way identify and send you an alert every time your friend goes by a camera.
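The recognition step can be sketched in miniature. Real systems extract face embeddings with a deep network; in this toy version, hand-made feature vectors stand in for those learned embeddings, and "training" is just averaging the known photos of the friend. All numbers and the threshold are illustrative assumptions, not a real face-recognition method.

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "Training": average the feature vectors of known photos of the friend
# to get a profile of what they look like in different conditions.
friend_photos = [(0.9, 0.1, 0.8), (0.8, 0.2, 0.9), (0.85, 0.15, 0.85)]
centroid = tuple(sum(v) / len(friend_photos) for v in zip(*friend_photos))

def is_friend(features, threshold=0.3):
    # Flag any face whose features are close enough to the learned profile.
    return distance(features, centroid) < threshold
```

A camera feed would then run every detected face through `is_friend` and raise an alert on a match; the hard part in practice is the embedding network itself, not this comparison.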
The usefulness of ANNs falls into one of two basic categories: as tools for solving problems that are inherently difficult for both people and digital computers, and as experimental and conceptual models of something — classically, brains. Let’s talk about each one separately.
First, the real reason for interest (and, more importantly, investment) in ANNs is that they can solve problems. Google uses an ANN to learn how to better target “watch next” suggestions after YouTube videos. The scientists at the Large Hadron Collider turned to ANNs to sift the results of their collisions and pull the signature of just one particle out of the larger storm. Shipping companies use them to minimize route lengths over a complex scattering of destinations. Credit card companies use them to identify fraudulent transactions. Things are just getting started. Google’s been training its photo-analysis algorithms with more and more pictures of animals, and they’re getting pretty good at telling dogs from cats in regular photographs. Both translation and voice synthesis are progressing to the point that we could soon have a babel fish-like device offering natural, real-time conversations between people speaking different languages. And, of course, there are the Big Three ostentatious examples that really wear the machine learning on their sleeve: Siri, Now, and Cortana.
The second aspect lies in how neural networks can simulate many different conditions and give you results as if the experiment had happened in real life. Neural networks can be used in conjunction with simulation modelling for system design to eliminate the trial-and-error process. This approach achieves the opposite of what a simulation model achieves: given a set of desired performance measures, the neural network outputs a suitable design to meet management goals. In one real-world application, a major semiconductor manufacturing plant used this methodology to determine how its test operation should be run to achieve its production goals.
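The "inverse" direction can be sketched as follows. For simplicity, this sketch replaces the neural network with a brute-force search over a toy forward model; the point is only to show the reversal of inputs and outputs, from "what performance does this design give?" to "what design gives this performance?". The throughput formula and all numbers are invented for illustration.

```python
def simulate(machines):
    # Toy forward model: throughput grows with machine count but saturates.
    # A real setting would use a simulation model, or a neural network
    # trained to mimic one.
    return 100 * machines / (machines + 5)

def design_for(target_throughput, max_machines=50):
    # Invert the model: find the smallest design meeting the goal.
    for m in range(1, max_machines + 1):
        if simulate(m) >= target_throughput:
            return m
    return None  # goal unreachable within the design space
```

A trained network would learn this inverse mapping directly from (design, performance) pairs instead of searching, which is what makes the approach attractive when each simulation run is expensive.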
Okay, now I have hyped neural networks way too much, but never get yourself into anything without actually knowing both sides of the coin. Every invention of mankind has drawbacks as well as benefits: atomic energy no doubt gave us one of the cleanest forms of energy, but at the same time it gave rise to atomic bombs and atomic waste.
So, what is the problem with neural networks? To begin with, there isn't a lot wrong with them, but just because we try to make them the most perfect versions of ourselves doesn't mean they actually are. The biggest problem is that it's almost impossible to tell how your neural network came to a given output. Imagine you host a video streaming service and your AI blocks a certain user or bans a certain content creator. As the host, you need to have an explanation for that decision, but how do you give one if you don't even know how your AI arrived at it? So in situations where an explanation is required, you can't explain the choices of your AI.
Now we have seen how effective and versatile neural networks are. So yes, neural networks are making us lazier as they slowly do more and more of our work. That's not necessarily a bad thing, because whatever you might say about neural networks, they are damn cool. Imagine having a smart home, and one you designed yourself; it might not be the most perfect one out there, but the simple act of switching on a light with your voice is a very hard flex.