
Answer by Alex R. for How does a 1-dimensional convolution layer feed into a max pooling layer neural network?

Terminology is wishy-washy here, but in this case the feature maps are the outputs of the convolution filters. You are applying 320 filters of size 1x16, stride 1, to an input of size 500x4, which gives you 500 - 16 + 1 = 485 positions at which each filter can be applied. Note that since your input depth is 4, each filter has 1x16x4 = 64 weights. So your output is 485x320.
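To make the shape arithmetic concrete, here is a minimal numpy sketch of that "valid" 1-D convolution; all the names (`L`, `D`, `K`, `F`, etc.) are just placeholders for the numbers in the question, and the weights are random since we only care about shapes:

```python
import numpy as np

# Hypothetical dimensions matching the question:
# input length 500, depth 4, 320 filters of width 16, stride 1, no padding.
L, D, K, F = 500, 4, 16, 320

x = np.random.randn(L, D)        # input: 500 positions x 4 channels
w = np.random.randn(F, K, D)     # 320 filters, each with 16 x 4 = 64 weights

out_len = L - K + 1              # 500 - 16 + 1 = 485 valid positions
out = np.empty((out_len, F))
for i in range(out_len):
    window = x[i:i + K]          # a 16 x 4 patch of the input
    # dot each filter against the patch, summing over width and depth
    out[i] = np.tensordot(w, window, axes=([1, 2], [0, 1]))

print(out.shape)                 # (485, 320): one value per position per filter
```

A real framework layer (e.g. a Conv1D with 320 filters, kernel size 16, `padding="valid"`) would produce the same 485x320 output shape, just far more efficiently.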

You are then applying a max-pool of size 8 with stride 8, meaning each window of 8 consecutive positions (per feature map) is condensed into a single value: the maximum of that window. With a stride of 8, you get 60 pooling windows (covering positions up to 480). I believe the last 5 positions are just thrown out.
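The same window counting can be checked directly; this sketch pools a stand-in 485x320 array (random values, since only the shape matters here) with size 8, stride 8:

```python
import numpy as np

# Stand-in for the 485 x 320 convolution output from the previous step.
feature_maps = np.random.randn(485, 320)

pool, stride = 8, 8
# floor((485 - 8) / 8) + 1 = 60 complete windows; the rest is dropped.
n_pools = (feature_maps.shape[0] - pool) // stride + 1
pooled = np.stack([feature_maps[i * stride:i * stride + pool].max(axis=0)
                   for i in range(n_pools)])

print(pooled.shape)  # (60, 320); positions 480..484 never fall in a window
```

The last complete window starts at position 59 * 8 = 472 and ends at 479, which is why positions 480 through 484 are discarded.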

Your question about max-pooling one-hot encoded vectors is strange. Max-pool would operate no differently in that case: it still takes the maximum value within each pooling region, independently per channel.

