At the moment, I am using AlexNet for a classification task.
The size of each input sample is 480*680.
Feeding a normal network with cropped inputs of size 256*256 (generated in a preprocessing step) at a batch size of 8 gives me an accuracy of 92%.
But when I try to generate 5 crops of each 480*680 sample (the four corners plus a center crop) using the following crop layers:
# this is the reference blob of the cropping process, which determines the crop size
layer {
  name: "reference-blob"
  type: "Input"
  top: "reference"
  input_param { shape: { dim: 8 dim: 3 dim: 227 dim: 227 } }
}
# upper-left crop
layer {
  name: "crop-1"
  type: "Crop"
  bottom: "data"
  bottom: "reference"
  top: "crop-1"
  crop_param {
    axis: 2
    offset: 1
    offset: 1
  }
}
# upper-right crop
layer {
  name: "crop-2"
  type: "Crop"
  bottom: "data"
  bottom: "reference"
  top: "crop-2"
  crop_param {
    axis: 2
    offset: 1
    offset: 412
  }
}
# lower-left crop
layer {
  name: "crop-3"
  type: "Crop"
  bottom: "data"
  bottom: "reference"
  top: "crop-3"
  crop_param {
    axis: 2
    offset: 252
    offset: 1
  }
}
# lower-right crop
layer {
  name: "crop-4"
  type: "Crop"
  bottom: "data"
  bottom: "reference"
  top: "crop-4"
  crop_param {
    axis: 2
    offset: 252
    offset: 412
  }
}
# center crop
layer {
  name: "crop-5"
  type: "Crop"
  bottom: "data"
  bottom: "reference"
  top: "crop-5"
  crop_param {
    axis: 2
    offset: 127
    offset: 207
  }
}
# concat all the crop results to feed the next layer
layer {
  name: "crop_concat"
  type: "Concat"
  bottom: "crop-1"
  bottom: "crop-2"
  bottom: "crop-3"
  bottom: "crop-4"
  bottom: "crop-5"
  top: "all_crops"
  concat_param {
    axis: 0
  }
}
# generate enough labels for all the crop results
layer {
  name: "label_concat"
  type: "Concat"
  bottom: "label"
  bottom: "label"
  bottom: "label"
  bottom: "label"
  bottom: "label"
  top: "all-labels"
  concat_param {
    axis: 0
  }
}
this leads to an accuracy of 90.6%, which seems strange.
Any ideas?
The typical purpose of cropping is to put a critical feature in a canonical position for the recognition filters. For instance, the standard 5-crop method finds "animal face near the middle of the image" often enough for that to emerge as a learned feature 2-4 layers from the end.
Since a texture tends to repeat the same qualities everywhere, there is no such advantage in cropping these photos: you merely present 5 smaller instances of the texture, with relatively larger grain, instead of the full image.
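For illustration, the five-crop scheme described above can be sketched outside Caffe in plain numpy. This is a hedged analogue of what the Crop layers compute, not the asker's pipeline; it assumes an H x W x C image layout:

```python
import numpy as np

def five_crops(img, size):
    """Return the four corner crops plus the center crop of `img`.

    img  -- H x W x C array (e.g. 480 x 680 x 3)
    size -- side length of each square crop (e.g. 227)
    """
    h, w = img.shape[:2]
    ch, cw = (h - size) // 2, (w - size) // 2
    return [
        img[:size, :size],               # upper-left
        img[:size, w - size:],           # upper-right
        img[h - size:, :size],           # lower-left
        img[h - size:, w - size:],       # lower-right
        img[ch:ch + size, cw:cw + size]  # center
    ]

crops = five_crops(np.zeros((480, 680, 3)), 227)
```

Each of the five results has shape 227 x 227 x 3, matching the reference blob's spatial dimensions.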
I want the gauge chart inside a rectangular div with height=300px and width=400px.
If I set the chart to height=300, width=400, the resulting chart does not take all the available height and width; in fact it looks as if it reserves the space needed for a full circle instead of a semicircle.
I set height=300, width=400 for all internal pies, as well as the following parameters in 'chart', with no improvement:
spacingTop: 0,
spacingBottom: 0,
spacingLeft: 0,
spacingRight: 0,
plotBorderWidth: null,
margin: [0, 0, 0, 0],
spacing: [0, 0, 0, 0]
A jsfiddle is available here: https://jsfiddle.net/perikut/0woz42vt/248/
Thanks in advance.
You are right, the space is adapted for a full circle - please check this example: https://jsfiddle.net/BlackLabel/2jrch4xn/
You need to position the chart where you want it with the center property:
pane: {
  ...,
  center: ['50%', '100%']
},
plotOptions: {
  series: {
    ...,
    center: ['50%', '115%']
  },
  ...
}
Live demo: https://jsfiddle.net/BlackLabel/4m2t6p35/1/
I want to align text left and right on the same line in Swift. For example, I have a string with a product name on the left and the price on the right, both on the same line. Is this possible?
I need this for Bluetooth printing, where every line has exactly 32 characters.
If I understand correctly, you want something like this:
func alignLeftAndRight(left: String, right: String, length: Int) -> String {
    // calculate how many spaces are needed
    let numberOfSpacesToAdd = length - left.count - right.count
    // create those spaces (none if the strings already overflow the line)
    let spaces = String(repeating: " ", count: max(numberOfSpacesToAdd, 0))
    // join the three parts together
    return left + spaces + right
}
Usage:
print(alignLeftAndRight(left: "Product", right: "Price", length: 32))
print(alignLeftAndRight(left: "Foo", right: "1", length: 32))
print(alignLeftAndRight(left: "Product", right: "123", length: 32))
print(alignLeftAndRight(left: "Something", right: "44", length: 32))
print(alignLeftAndRight(left: "Hello", right: "7777", length: 32))
Output:
Product                    Price
Foo                            1
Product                      123
Something                    44
Hello                       7777
I have a conv layer output of dimension n x m x 16 x 1 and another filter "F" of size n x m x 1 x 1. How can I sum F with every single channel of the conv layer (so the result keeps dimension n x m x 16 x 1)?
As far as I know, Eltwise requires both bottoms to be exactly the same size (including the number of channels).
It seems like you are looking for the "Tile" layer (it works like Matlab's repmat). Tiling "F" along axis: 2 16 times will make "F" the same shape as the input; then you can use an "Eltwise" layer:
layer {
  name: "tile_f"
  type: "Tile"
  bottom: "F"       # input shape n-c-h-w
  top: "tile_f"     # output shape n-c-16*h-w
  tile_param { axis: 2 tiles: 16 }  # tile along the h-axis 16 times
}
# now you can eltwise!
layer {
  name: "sum_f"
  type: "Eltwise"
  bottom: "x"
  bottom: "tile_f"  # same shape as x!!
  top: "sum_f"
  eltwise_param { operation: SUM }
}
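For intuition, here is a hedged numpy analogue of what the Tile + Eltwise pair computes. The shapes below are stand-ins for n, m; note that numpy's broadcasting happens to produce the same result without an explicit tile:

```python
import numpy as np

x = np.random.rand(4, 5, 16, 1)  # stand-in for the n x m x 16 x 1 conv output
f = np.random.rand(4, 5, 1, 1)   # stand-in for the n x m x 1 x 1 filter F

# what Tile does: repeat F 16 times along axis 2 so the shapes match exactly
f_tiled = np.tile(f, (1, 1, 16, 1))
assert f_tiled.shape == x.shape

# what Eltwise SUM then computes
s = x + f_tiled

# numpy broadcasting gives the same result with no explicit tiling step
assert np.allclose(s, x + f)
```

Caffe's Eltwise has no implicit broadcasting, which is why the explicit Tile layer is needed there.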
I am using Caffe in Python. These are my blob shapes:
data 3072 3.07e+03 (1, 3, 32, 32)
conv2d1 12544 1.25e+04 (1, 16, 28, 28)
maxPool1 3136 3.14e+03 (1, 16, 14, 14)
fc1 10 1.00e+01 (1, 10)
ampl 10 1.00e+01 (1, 10)
-------------------------------- params: name,w,(b)
conv2d1 1200 1.20e+03 (16, 3, 5, 5)
fc1 31360 3.14e+04 (10, 3136)
and here are the last 2 layers in my prototxt file:
...
layer {
  name: "ampl"
  type: "Softmax"
  bottom: "fc1"
  top: "ampl"
  softmax_param {
    axis: 1
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "ampl"
  bottom: "label"
  top: "loss"
}
and I get this error:
euclidean_loss_layer.cpp:12] Check failed: bottom[0]->count(1) == bottom[1]->count(1) (10 vs. 1) Inputs must have the same dimension.
Your error is quite self-explanatory:
Inputs must have the same dimension
You are trying to compute "EuclideanLoss" between "ampl" and "label". To do so, "ampl" and "label" must be blobs with the same number of elements (i.e. the same count()). However, while "ampl" has 10 elements, "label" has only one.
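To illustrate the count mismatch, here is a small numpy sketch. The one-hot expansion shown is just one possible fix; for a classification task, Caffe's SoftmaxWithLoss layer, which accepts scalar class labels directly, is usually the better choice:

```python
import numpy as np

ampl = np.random.rand(1, 10)  # network output: 10 values per sample
label = np.array([[3.0]])     # scalar class label: 1 value per sample

# EuclideanLoss requires the same count per sample; 10 != 1, hence the error
assert ampl[0].size != label[0].size

# one way to make the counts match: expand the scalar label into a
# 10-element one-hot vector before comparing
one_hot = np.zeros_like(ampl)
one_hot[0, int(label[0, 0])] = 1.0
assert one_hot[0].size == ampl[0].size

loss = 0.5 * np.sum((ampl - one_hot) ** 2)  # what EuclideanLoss computes
```

Either reshape the labels so both bottoms have the same count, or switch to a loss layer designed for scalar labels.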
I am trying to get the y-axis of my stacked Google column chart to show proportional, whole-number gridlines. I have tweaked the settings but cannot get it to work. If the max column value is 1, there should be only 1 horizontal gridline; there should not be three gridlines all labeled 1. That doesn't make sense.
Here are my chart options:
var options = {
  colors: ['#ba1f1f', '#306b34', '#255f85', '#e28413', '#f24333'],
  bar: { groupWidth: "90%" },
  chartArea: { left: 50, top: 10, width: '100%', height: '75%' },
  legend: { position: 'bottom' },
  animation: { startup: true, duration: 250, easing: 'linear' },
  isStacked: true,
  hAxis: {
    slantedText: true
  },
  vAxis: {
    format: '#'
  },
  height: 350
};
And here is a picture of my problem:
Does anyone have any suggestions on how to fix this? Thanks!
Update with @WhiteHat's suggestions:
I tried your suggestion, but it didn't really work. Here is my result when I set the gridline count explicitly rather than based on the column range max.
The total count for the 8 AM column is 7; 7 AM is 4 and 6 AM is 1.
I may have to do some additional data manipulation to count the total per column in the stacked chart and find the max, rather than setting the gridline count explicitly. But that comes after I can get it working correctly.
You can add the following option when the max column value is 1:
vAxis: {
  gridlines: {
    count: 2  // <-- default is 5
  }
},
Or add decimals to the format, '#,##0.00', to avoid the repeated 1s.
You can use a data table method to check the max column value:
data.getColumnRange(columnIndex).max
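The "count the total per column" step mentioned in the update can be sketched in a few lines. This is a hedged, language-neutral illustration (shown in Python, with the per-hour totals taken from the question); for a stacked chart the relevant maximum is the largest row total, not the largest single series value that getColumnRange would return:

```python
# rows: one per x-axis category; remaining columns: one per stacked series
rows = [
    ["6 AM", 1, 0],
    ["7 AM", 3, 1],
    ["8 AM", 4, 3],
]

# sum the series values in each row to get the stacked height per category
totals = [sum(r[1:]) for r in rows]
stacked_max = max(totals)  # 7 for the data above

# one gridline per integer from 0 to the max keeps the axis whole-numbered
gridline_count = stacked_max + 1
```

The resulting count could then be passed to vAxis.gridlines.count instead of a hard-coded value.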