Spatial Transformer Networks

Making the Convolutional Neural Network (CNN) more robust

Convolutional Neural Networks define an exceptionally powerful class of models, but they are still limited by their inability to be spatially invariant to the input data in a computationally and parameter-efficient manner. In this article I’ll talk about a learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditioned on the feature map itself, without any extra training supervision or modification to the optimization process. The use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, achieving state-of-the-art performance on several benchmarks and for a number of classes of transformations.

The question arises: are current deep learning models capable of spatial invariance? The answer is yes, but not very well. In the max-pooling mechanism, the model selects one critical pixel, the most representative one, out of a pool of a specific size. If a pixel from a different column or row is selected, this achieves a kind of spatial invariance. However, the lower layers cannot learn this property, and since the field considered by pooling is very small, a layer has only a limited capability to resist spatial differences. Pooling also throws away a lot of information about where a feature occurs: a pool over a small grid will detect a feature regardless of where in that grid the feature occurs. It’s really a surprise to me that pooling in CNNs works!
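To make this concrete, here is a tiny sketch of my own (in PyTorch, not part of the original article) showing that max-pooling tolerates a shift that stays inside a pooling window but not one that crosses into the next window:

```python
import torch
import torch.nn.functional as F

# A 1x1x4x4 feature map with a single active "feature" pixel at (0, 0).
x = torch.zeros(1, 1, 4, 4)
x[0, 0, 0, 0] = 1.0

# Shift the feature by one pixel, staying inside the same 2x2 pooling window.
x_shift_inside = torch.zeros(1, 1, 4, 4)
x_shift_inside[0, 0, 1, 1] = 1.0

# Shift the feature by two pixels, crossing into the next pooling window.
x_shift_outside = torch.zeros(1, 1, 4, 4)
x_shift_outside[0, 0, 0, 2] = 1.0

print(F.max_pool2d(x, 2))               # feature reported in window (0, 0)
print(F.max_pool2d(x_shift_inside, 2))  # same output: invariant to the small shift
print(F.max_pool2d(x_shift_outside, 2)) # feature moves to window (0, 1): invariance breaks
```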

Now another question arises: can we define a layer that learns spatial invariance? The answer is again yes, and Spatial Transformer Networks are the way to do that. The idea of spatial transformer networks, or STNs, was introduced by DeepMind. An STN makes use of image transformations, specifically affine transformations, to transform the image feature map.

Structural view of the STN Model

The work of the spatial transformer is to transform the feature map into another vector space representation. There are 3 parts in an STN: the localization network, the grid generator and the sampler.
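The article itself contains no code, but as a rough sketch of how these three parts fit together, here is a minimal PyTorch module. The layer sizes are my own arbitrary choices, and PyTorch’s built-in affine_grid and grid_sample are used for the grid generator and sampler:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Minimal spatial transformer: localization net -> grid generator -> sampler."""

    def __init__(self, in_channels: int, height: int, width: int):
        super().__init__()
        # 1) Localization network: regresses the 6 affine parameters theta.
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7, padding=3),
            nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5, padding=2),
            nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(10 * (height // 4) * (width // 4), 32),
            nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Start from the identity transform so training begins with "no warp".
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.localization(x).view(-1, 2, 3)
        # 2) Grid generator: a source coordinate for every target pixel.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # 3) Sampler: bilinear interpolation at the generated coordinates.
        return F.grid_sample(x, grid, align_corners=False)
```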

The localization network, which is composed of fully-connected or convolutional layers, generates the transformation parameters. The second part is the grid generator. Once we have the parameters of the affine transformation, we can compute the corresponding coordinates. Notice that the input to the transformation function is a coordinate in the target feature map. But where do those target coordinates come from? At this point we only have the source feature map, not the target one!

This is the trick: DeepMind runs the transformation backwards, enumerating every pixel of the target feature map as a regular grid and computing the original coordinate it comes from. So the task of the grid generator is simply to compute, for each target pixel, the coordinates in the source feature map from which it should be sampled.

Calculation of Affine Transformation matrix
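The figure itself is not reproduced here; the mapping it depicts, as defined in the STN paper, sends each target coordinate to a source coordinate:

$$
\begin{pmatrix} x^s \\ y^s \end{pmatrix}
=
\begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}
\begin{pmatrix} x^t \\ y^t \\ 1 \end{pmatrix}
$$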

So what is the intensity of each pixel in the target feature map? This is where the sampler comes into the picture. The sampler generates the pixel intensity at each target coordinate by bilinear interpolation.

Calculation of pixel intensities via Bi-Linear Interpolation
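The figure is missing here, but the sampling formula it illustrates (from the STN paper) is, for each output pixel $i$ and channel $c$:

$$
V_i^c = \sum_{n=1}^{H} \sum_{m=1}^{W} U_{nm}^c \,\max(0,\, 1 - |x_i^s - m|)\,\max(0,\, 1 - |y_i^s - n|)
$$

where $U$ is the source feature map, $(x_i^s, y_i^s)$ is the source coordinate produced by the grid generator, and the two max terms are the bilinear weights along each axis.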

Here I’ll show a simple example of how the sampler generates the pixel intensities.

An Example Image

The above image shows a 4×4 grayscale image, with the pixel intensities on the left and the original image on the right.

To simplify the explanation, we can write the image in matrix form, as shown above.

Coordinates for each pixel

We can label the coordinates as shown above, with the top-left pixel at the (0, 0) position.

We can regard each grid cell as a single pixel, with its center position representing that pixel’s coordinate. As the image above shows, the pink points are the center points of the cells.

Example Affine transformation

Suppose we have determined the theta parameters of the affine transformation. In the image above, the left-most matrix is the transformation matrix and the middle one is the target coordinate. To allow a shift (translation), we pad the target coordinate vector with a 1. Applying the affine transformation, the source coordinate works out to [2.5, 2.5].
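The actual theta values live in the figure and are not recoverable here, but as an illustration, one choice of parameters that reproduces the stated result is a uniform scale of 1.5 combined with a shift of 1:

$$
\begin{bmatrix} 1.5 & 0 & 1 \\ 0 & 1.5 & 1 \end{bmatrix}
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
=
\begin{pmatrix} 1.5 \cdot 1 + 0 \cdot 1 + 1 \cdot 1 \\ 0 \cdot 1 + 1.5 \cdot 1 + 1 \cdot 1 \end{pmatrix}
=
\begin{pmatrix} 2.5 \\ 2.5 \end{pmatrix}
$$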

Illustrating the transformation operation

The above image illustrates the computation once more. Using the affine transformation, we map the target coordinate [1, 1] to the source coordinate [2.5, 2.5]. However, there is no pink point at that location; the coordinate is a fractional value! So how do we determine the intensity at this fractional coordinate? This is where bilinear interpolation comes in.

The idea of bi-linear interpolation with each pixel(2D)

The image above is a graphical representation of the bilinear interpolation formula. Each pixel contributes some weight towards this fractional point, which is a bit different from traditional bilinear interpolation: traditional bilinear interpolation only considers the nearest neighbours around the coordinate, whereas DeepMind writes the sum over every pixel in the source map (although the bilinear kernel still gives non-zero weight only to the pixels immediately surrounding the sampled coordinate).

The idea of bi-linear interpolation with each pixel(3D)

Now we change this image to 3D. The z-axis represents the level of influence of each point on the fractional point. In simple terms, the intensity at this point is a weighted sum over the pink points, where each pixel’s intensity is weighted by how close that pixel is to the sampled coordinate.
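As a sketch of this computation in PyTorch: the pixel values of the example image live in a figure that is not reproduced here, so the 4×4 matrix below is hypothetical, but the sampling rule is the one from the formula above.

```python
import torch

def bilinear_sample(U, x_s, y_s):
    """Sample U at a fractional (x_s, y_s) using the STN bilinear kernel:
    V = sum_nm U[n, m] * max(0, 1 - |x_s - m|) * max(0, 1 - |y_s - n|)."""
    H, W = U.shape
    m = torch.arange(W, dtype=U.dtype)            # column (x) indices
    n = torch.arange(H, dtype=U.dtype)            # row (y) indices
    wx = torch.clamp(1 - (x_s - m).abs(), min=0)  # per-column weights
    wy = torch.clamp(1 - (y_s - n).abs(), min=0)  # per-row weights
    return (wy[:, None] * wx[None, :] * U).sum()

# Hypothetical 4x4 grayscale image (the article's actual values are in the missing figure).
U = torch.tensor([[0.1, 0.2, 0.3, 0.4],
                  [0.5, 0.6, 0.7, 0.8],
                  [0.9, 1.0, 0.0, 0.2],
                  [0.3, 0.4, 0.6, 0.8]])

# Intensity at the fractional source coordinate (2.5, 2.5) found by the affine transform:
print(bilinear_sample(U, x_s=2.5, y_s=2.5))  # equal-weight average of the four surrounding pixels
```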

Now I’ll show an example of how an STN can help improve classification performance. For this example I have used the MNIST-Cluttered dataset provided by DeepMind and the German Traffic Signs dataset.

STN plugged into a simple classification network MNIST-Cluttered

The above image shows how STN helps the classification network to better recognize the digits by providing spatial invariance.
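For a rough idea of the setup, here is a sketch that reuses the hypothetical STN module from earlier, prepended to an ordinary small CNN classifier. It assumes 60×60 inputs (the crop size of DeepMind’s cluttered-MNIST release); the classifier layers are my own choices, not the ones used in the actual experiments.

```python
import torch
import torch.nn as nn

# Assumes the STN class from the earlier sketch is in scope.
classifier = nn.Sequential(
    STN(in_channels=1, height=60, width=60),  # rectify the input before classifying it
    nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 10),
)

logits = classifier(torch.randn(8, 1, 60, 60))  # batch of 8 images -> 8x10 logits
print(logits.shape)
```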

The above image shows STN being trained on the German Traffic Signs Dataset.

In this article I talked about the spatial transformer, a self-contained module for neural networks. This module can be dropped into a network to perform explicit spatial transformations of features, opening up new ways for neural networks to model data, and it is learnt end-to-end without any changes to the loss function. While CNNs provide an incredibly strong baseline, spatial transformers bring accuracy gains across multiple tasks, resulting in state-of-the-art performance. Furthermore, the regressed transformation parameters are available as an output of the spatial transformer and could be used for subsequent tasks. While I have only explored feed-forward networks here, experiments by DeepMind show spatial transformers to be powerful in recurrent models and useful for tasks requiring the disentangling of object reference frames, as well as being easily extendable to 3D transformations.
