Catching the thoughts: Neural Networks.

==Research that throws light on the way neural networks work might change our understanding of how the human thought process works. Here is how.==

I have been following the [Google research blog](https://research.googleblog.com) for quite some time, tuning in to the latest developments from the tech giant.

And then I came across something interesting today. The folks at Google call it “Inceptionism” and it left me baffled.

Before I write about what it is, let me give a primer on what neural networks are.

We all know what computer programs do - provide step-by-step instructions on how a particular task is to be completed. Depending upon the complexity of the task at hand, a computer program can be very simple or can get really, really complex.

But one thing stays the same - the programmer instructs the computer what to do and it complies. Almost all the technological brilliance we see around us is built this way - by telling the computers what to do and making them do exactly that.

The so-called ‘software’ companies have invested billions of dollars into creating systems and processes that ultimately churn out the code that makes systems tick.

Whatever it does, the workflow is the same - the code knows what to do. This code is in turn converted to simple arithmetic operations that a computer understands, and the computer systematically executes these simple instructions (at insane speeds) to do whatever we want it to do.

So an onlooker might be tempted to ask,

“So if you wanna make a computer do difficult tasks, you might umm… well write a complex program to make it do that. Nothing is impossible, right!!!”

But it turns out some of the tasks we humans do easily are insanely difficult for computers to replicate. These include natural language processing (learning and understanding a human tongue), detecting objects in images (like a cat in a YouTube video), logical thinking based on ontologies, etcetera.

The reason is that the ‘steps’ that go on behind these processes are very difficult to decipher. If you ask me how to make coffee, I will be able to give you a set of steps or [an algorithm](https://en.wikipedia.org/wiki/Algorithm) that you can follow to make it yourself.

But if you ask me to give a step by step description of how I detected a cat’s face in that video, umm well… I can’t!!

It turns out that it’s insanely hard to decipher ‘the small steps’ involved in, say, learning a language or looking at a cat’s image and understanding that it’s actually a cat.

So computer scientists had to look for other ways to make computers do such seemingly trivial tasks.

The traditional understanding of computer science wouldn’t help us build the tech to do these tasks easily. But then, we are a species that is never cowed down by obstacles.

People kept thinking about how to tackle this and build systems that would ‘act like humans’.

If we want a computer to mimic the human brain, we should make it work like one. So people started looking to the fields of science that try to understand the human decision-making system.

We humans, despite being rational beings, make some insanely foolish decisions at times. We seem to weigh a lot of factors before taking a decision.

If we weigh the right parameters, with the right amount of importance given to each of them, we are more likely to end up with a better decision.

If we screw them up, we shouldn’t be surprised when our final decision is screwed up like anything.

So a lot of research went into this field, trying to understand our decision-making process and build systems that mimic it. These efforts were researched and refined under the umbrella of [Artificial Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network) (ANNs).

Here, I am presenting an oversimplified view of what goes on inside them.

These systems are networks that essentially contain nodes that make small decisions based on our inputs and the desired result for those inputs (an optimal or desired output).

The properties of these nodes are set such that the output comes as close as possible to the desired result.
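
To make the ‘nodes and properties’ picture concrete, here is a minimal sketch in plain Python (my own illustration; none of this comes from the post or from any real library). The weights and bias are the ‘properties’ we keep talking about, and the numbers are invented.

```python
import math

def node(inputs, weights, bias):
    """One node: weigh each input, add a bias, squash the total with a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output always lands between 0 and 1

# The weights and bias are this node's 'properties'. These numbers are purely
# illustrative, not taken from any real trained network.
print(node([0.5, 0.8], weights=[1.2, -0.7], bias=0.1))
```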

But how do we set these properties?

This is very similar to asking ourselves how we get ready for an examination after a failed test.

What do we do?!!

==We learn!!==

Yes, we make the network learn from its experience and reset the ‘properties’ so that the next time we give the same input, we get an output that is optimal.

So we keep training (teaching! ^_^) our networks with huge datasets so that our network gets better and better at what it does.
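
As a toy illustration of that loop (again my own sketch in plain Python, with the OR function standing in for a ‘huge dataset’), a single node nudges its ‘properties’ a little after every mistake until its outputs line up with the desired ones:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: learn the OR function (inputs -> desired output).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # the node's 'properties'
b = 0.0
lr = 0.5  # how strongly each mistake adjusts the properties

for epoch in range(2000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error with respect to the weighted sum.
        grad = (out - target) * out * (1 - out)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

for x, target in data:
    print(x, "->", round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2), "desired:", target)
```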

This is very similar to how we learned to walk. We tried and we failed. Then we tried again and we failed. The mistakes made us stronger, we learnt to walk at last!

The input was our desire to walk. The properties of our system (node) were the forces we applied at different points on our body. The output was whether we were able to balance ourselves or not.

If we got our properties wrong, we fell. But we readjusted them so that we were better off the next time. Slowly we picked this up, and in the end we knew when and where the right amount of force was to be applied. Voila! We have our properties right, just like a neural network.

So after the laborious process of learning, putting it into practice comes quite easily. The same is true for neural networks as well. The literature on neural networks is mostly about how to make a network learn effectively and how to design networks that give good results once they are ‘trained’.


So now we understand neural networks, how they make decisions and get things done. We also know that they learn from their experiences rather than act on rigid instructions like a trivial computer program.

But why did I go to these depths to talk about them all of a sudden? What made me think about them, other than my occasional flirtations with [TensorFlow](https://www.tensorflow.org) and other neural network toolboxes?

It’s the Inceptionism!

So what is Inceptionism? It was an attempt to understand neural networks.

At this point, we know that neural networks learn from their ‘experience’ and take decisions based on that. Once the neural network has been trained a lot, we can check its efficacy by testing it with a sample data set.

If the neural network gets a lot of decisions correct, we say that it’s a good one. It means we got our design right. If it fails a lot, we have to redesign it.
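
For what ‘testing it with a sample data set’ could look like, here is a small hypothetical sketch in the same toy Python style: hold some labelled examples back and count how often the trained network’s decision matches the label. The `trained_net` stand-in below is invented purely for illustration.

```python
def accuracy(predict, test_set):
    """Fraction of held-out examples on which the prediction matches the label."""
    correct = sum(1 for inputs, label in test_set if predict(inputs) == label)
    return correct / len(test_set)

# Hypothetical stand-in for whatever network we just trained.
trained_net = lambda inputs: 0.9 if sum(inputs) > 0 else 0.1
test_set = [([0, 0], 0), ([0, 1], 1), ([1, 1], 1)]

# Threshold the network's output at 0.5 to turn it into a YES/NO decision.
print(accuracy(lambda x: int(trained_net(x) > 0.5), test_set))
```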

The problem is that in the process of designing a network, we have no idea how the ‘properties’ of the finished network will look. These properties have to be picked up by the network during the ‘learning’ process.

In short, designing and refining a neural network is largely a trial-and-error process, guided by ‘guidelines’ drawn from our past experience of designing neural networks. We are still largely unclear about how to design them effectively and get them right every time.

A huge amount of research goes into this, and I found myself interested in finding out how they work. Then I stumbled upon this article at the Google research blog.

It contains some really good images. You can have a look at them here:


The Pagoda.


Wait what?! Don’t they look like some psychedelic imagery that we look into when we are high?

Or why do they bear an uncanny resemblance to the kind of patterns we see when we are high?

N.B: Drugs kill people.

Coming back, what do these images represent?

As I have explained above, a neural network, after training, is a set of ‘properties’ based on which it outputs an optimal decision for a given input.

As we know, a big problem can be split into sets of smaller problems, and the solutions to these can be interlinked to solve the bigger problem. This is called the ‘divide and conquer’ approach to problem solving.

Similarly, neural networks that make decisions for smaller problems are interconnected to solve bigger problems.
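
In the same toy Python style (all numbers invented), ‘interconnecting’ simply means feeding the small decisions of one layer of nodes into another node that makes the bigger decision:

```python
import math

def node(inputs, weights, bias):
    """A single node, as in the earlier sketch."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer: several nodes, each making its own small decision about the same inputs."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# The first layer's small decisions become the inputs of the node that
# makes the bigger decision. All numbers here are made up.
hidden = layer([0.2, 0.7], [[0.5, -1.0], [1.3, 0.4]], [0.0, -0.2])
final = node(hidden, [0.9, -0.6], 0.1)
print(round(final, 3))
```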

But at the end of the day, we don’t really understand the ‘properties’ of these neural networks; all we can do is test them against test databases and see if they are performing well.

When we look at these ‘properties’, all we see are some numbers. So how do we make sense of these numbers and aid ourselves in the meaningful design of Neural Networks?

Another question that arises is, can we build Neural Networks good enough to replace the standard programming practices?

To probe into all this, the folks at Google came up with the ‘inceptionist’ approach. It works like this.

Suppose we have a neural network trained to detect sparrows in an image. If we give it the image of a sparrow, it will most probably give us a resounding “YES” if it’s a good neural network.

You might get a “NO” if you input the image of a crow, good enough. But does this process help us understand the working of a neural network?

No!!!

So the idea is to start with a random image and enhance it until the neural network classifies it as “positive”. Take an image of [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise), say, and enhance it so that the neural network gives a “YES” when you feed the enhanced image as an input.

For example, they took a random image and enhanced it so that it would get a “YES” from a neural network trained to detect bananas. The result is interesting.
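
The real Inceptionism work does this with deep image-recognition networks and properly backpropagated gradients; the sketch below only mimics the core trick on a tiny, made-up ‘detector’ in plain Python: freeze the network’s properties and nudge the input uphill until the network says “YES”.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A frozen, made-up stand-in for a trained detector: its 'properties' (W, B)
# never change here; only the input is allowed to change.
W, B = [1.5, -2.0], 0.3

def network(x):
    return sigmoid(W[0] * x[0] + W[1] * x[1] + B)

x = [0.1, 0.9]          # the 'noise' we start from
step, eps = 0.5, 1e-4   # ascent step size and finite-difference width

for _ in range(200):
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grad_i = (network(bumped) - network(x)) / eps  # how much nudging x[i] raises the output
        x[i] += step * grad_i                          # gradient *ascent* on the input itself

print("enhanced input:", [round(v, 2) for v in x], "-> network says:", round(network(x), 3))
```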

And all the images we saw above were generated from random noise, enhanced to make them meet the ‘expectations’ of a neural network.

More images:


Now, when we enhance an image of the sky to pass a network for birds, animals or places, we get the following image:

The image of the sky used is:

A pristine forest:

When made to pass through the neural network for birds and animals, it looked like this:


Isn’t it simply amazing that we can relate to the ‘expectations’ of a neural network?

That these expectations are far removed from reality and mimic the psychedelic imagery we are familiar with?

So what if the psychedelic imagery we see when high (N.B: Drugs kill people) or deep in a creative thought is an expression of our inner longing to fulfil the ‘expectations’ of the neural networks that sit inside our heads?

True, it’s just a rant at this point in time. But I have this strange feeling that these are somehow connected. Who knows, the relation might be well established someday!

The distorted image of this world that we see in our thoughts, or when we are inebriated, forms a large part of the fodder on which all forms of art and literature feed.

By creative thoughts, I mean the kind of thoughts we have when we are alone thinking or when we travel to distant places or again, when we are high. (N.B Drugs kill people).

When the ‘expectations’ are not met in real life, these nets might be casting an ‘ideal’ imagery that fulfils the requirements. Who knows?!

All we can do is keep ourselves updated with the research in this field and try to form an understanding of our own. It sure helps to learn about learning!

Addendum: In the preface to “One Flew Over the Cuckoo’s Nest”, Ken Kesey talks about writing and creating art under the influence of psychoactive drugs, especially LSD. He was certain that he received ‘transmissions’ from ‘somewhere’ but would never be able to say where they came from. A lot of people who use LSD claim the same thing. What might be the origin of these ‘transmissions’…?

[Image Credits.](https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB)

Written on June 23, 2016