Caffe, operations among batches - neural-network

Since I have a classifier based on single patch scores, I would like to sum together the predictions a network produces for different images.
From https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto, the Reduction layer does not support operating over an axis other than the last one.
A pooling operation would also average its input but, obviously, without operating across the full batch.
I have implemented a Python layer, but it is not fast enough for large-scale experiments.
Is there a way to "sum" or, more generally, operate over the first axis with the tools already available?

Yes, you can. If you have an N x p x q x r blob of predictions, first use a Slice layer (SliceLayer) to create N blobs, each of shape 1 x p x q x r. Then use these N blobs as the N bottoms of an Eltwise layer (EltwiseLayer) to produce a single top.
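A minimal sketch of that idea for a batch of N = 4 (layer and blob names are illustrative; Eltwise requires all of its bottoms to have the same shape, which holds here since each slice is 1 x p x q x r):
layer {
  name: "pred-slice-batch"
  type: "Slice"
  bottom: "pred"
  top: "pred-b0"
  top: "pred-b1"
  top: "pred-b2"
  top: "pred-b3"
  slice_param {
    axis: 0
    slice_point: 1
    slice_point: 2
    slice_point: 3
  }
}
layer {
  name: "pred-sum"
  type: "Eltwise"
  bottom: "pred-b0"
  bottom: "pred-b1"
  bottom: "pred-b2"
  bottom: "pred-b3"
  top: "pred-sum"
  eltwise_param {
    operation: SUM
  }
}
The obvious limitation is that the batch size N has to be known when the prototxt is written (or generated).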

If your predictions have dimensions N x c (for a mini-batch size of N and c channels), then you can slice them into c blobs of shape N x 1 and feed each one into a Reduction layer.
For example, you could write the following as a Jinja2 template:
layer {
name: "pred-slice"
type: "Slice"
bottom: "pred"
{%- for num in range(10) %}
top: "pred-{{ num }}-vector"
{%- endfor %}
slice_param {
slice_dim: 1
{%- for num in range(1, 10) %}
slice_point: {{ num }}
{%- endfor %}
}
include {
phase: TEST
}
}
{%- for num in range(10) %}
layer {
name: "pred-{{num}}"
type: "Reduction"
bottom: "pred-{{ num }}-vector"
top: "pred-{{ num }}"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
{%- endfor %}
which expands to:
layer {
name: "pred-slice"
type: "Slice"
bottom: "pred"
top: "pred-0-vector"
top: "pred-1-vector"
top: "pred-2-vector"
top: "pred-3-vector"
top: "pred-4-vector"
top: "pred-5-vector"
top: "pred-6-vector"
top: "pred-7-vector"
top: "pred-8-vector"
top: "pred-9-vector"
slice_param {
slice_dim: 1
slice_point: 1
slice_point: 2
slice_point: 3
slice_point: 4
slice_point: 5
slice_point: 6
slice_point: 7
slice_point: 8
slice_point: 9
}
include {
phase: TEST
}
}
layer {
name: "pred-0"
type: "Reduction"
bottom: "pred-0-vector"
top: "pred-0"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-1"
type: "Reduction"
bottom: "pred-1-vector"
top: "pred-1"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-2"
type: "Reduction"
bottom: "pred-2-vector"
top: "pred-2"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-3"
type: "Reduction"
bottom: "pred-3-vector"
top: "pred-3"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-4"
type: "Reduction"
bottom: "pred-4-vector"
top: "pred-4"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-5"
type: "Reduction"
bottom: "pred-5-vector"
top: "pred-5"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-6"
type: "Reduction"
bottom: "pred-6-vector"
top: "pred-6"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-7"
type: "Reduction"
bottom: "pred-7-vector"
top: "pred-7"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-8"
type: "Reduction"
bottom: "pred-8-vector"
top: "pred-8"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}
layer {
name: "pred-9"
type: "Reduction"
bottom: "pred-9-vector"
top: "pred-9"
include {
phase: TEST
}
reduction_param {
operation: MEAN
}
}


ApexCharts: Hide every nth label in chart

I would like to hide some of the labels from my chart made with ApexCharts.js. I am coming from Frappé Charts, which has a feature called "continuity." It allows you to hide labels if they do not comfortably fit, because the chart is a timeseries chart.
My ApexChart looks like this:
I would like to remove many of the dates, but still have them appear in the tooltip. I was able to do this in Frappé Charts and it looked like this:
Here's my code for the Apex chart:
var options = {
chart: {
animations: { enabled: false },
toolbar: { show: false },
zoom: { enabled: false },
type: 'line',
height: 400,
fontFamily: 'PT Sans'
},
stroke: {
width: 2
},
theme: {
monochrome: {
enabled: true,
color: '#800000',
shadeTo: 'light',
shadeIntensity: 0.65
}
},
series: [{
name: 'New Daily Cases',
data: [2,0,0,0,0,0,0,1,0,1,0,7,1,1,1,8,0,11,2,9,8,21,17,28,24,20,38,39,36,21,10,49,45,44,52,74,31,29,43,28,39,58,30,47,50,31,28,79,39,54,55,33,42,39,41,52,25,30,37,26,30,35,42,64,46,25,35,45,56,45,64,34,34,32,40,65,56,64,55,37,61,51,70,81,76,64,71,61,56,52,106,108,104,33,57,82,71,67,68,63,71,32,70,65,98,52,72,87,66,85,90,47,164,123,180,119,85,66,122,65,155,191,129,144,175,224,234,240,128,99,141,131,215,228,198,152,126,201,92,137,286,139,236,238,153,170,106,61]
}],
labels: ['February 28','February 29','March 1','March 2','March 3','March 4','March 5','March 6','March 7','March 8','March 9','March 10','March 11','March 12','March 13','March 14','March 15','March 16','March 17','March 18','March 19','March 20','March 21','March 22','March 23','March 24','March 25','March 26','March 27','March 28','March 29','March 30','March 31','April 1','April 2','April 3','April 4','April 5','April 6','April 7','April 8','April 9','April 10','April 11','April 12','April 13','April 14','April 15','April 16','April 17','April 18','April 19','April 20','April 21','April 22','April 23','April 24','April 25','April 26','April 27','April 28','April 29','April 30','May 1','May 2','May 3','May 4','May 5','May 6','May 7','May 8','May 9','May 10','May 11','May 12','May 13','May 14','May 15','May 16','May 17','May 18','May 19','May 20','May 21','May 22','May 23','May 24','May 25','May 26','May 27','May 28','May 29','May 30','May 31','June 1','June 2','June 3','June 4','June 5','June 6','June 7','June 8','June 9','June 10','June 11','June 12','June 13','June 14','June 15','June 16','June 17','June 18','June 19','June 20','June 21','June 22','June 23','June 24','June 25','June 26','June 27','June 28','June 29','June 30','July 1','July 2','July 3','July 4','July 5','July 6','July 7','July 8','July 9','July 10','July 11','July 12','July 13','July 14','July 15','July 16','July 17','July 18','July 19','July 20','July 21','July 22','July 23','July 24'],
xaxis: {
tooltip: { enabled: false }
},
}
var chart = new ApexCharts(document.querySelector("#chart"), options);
chart.render();
<script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
<div id="chart"></div>
And here's my code for the Frappé Chart if it helps:
const data = {
labels: ['February 28','February 29','March 1','March 2','March 3','March 4','March 5','March 6','March 7','March 8','March 9','March 10','March 11','March 12','March 13','March 14','March 15','March 16','March 17','March 18','March 19','March 20','March 21','March 22','March 23','March 24','March 25','March 26','March 27','March 28','March 29','March 30','March 31','April 1','April 2','April 3','April 4','April 5','April 6','April 7','April 8','April 9','April 10','April 11','April 12','April 13','April 14','April 15','April 16','April 17','April 18','April 19','April 20','April 21','April 22','April 23','April 24','April 25','April 26','April 27','April 28','April 29','April 30','May 1','May 2','May 3','May 4','May 5','May 6','May 7','May 8','May 9','May 10','May 11','May 12','May 13','May 14','May 15','May 16','May 17','May 18','May 19','May 20','May 21','May 22','May 23','May 24','May 25','May 26','May 27','May 28','May 29','May 30','May 31','June 1','June 2','June 3','June 4','June 5','June 6','June 7','June 8','June 9','June 10','June 11','June 12','June 13','June 14','June 15','June 16','June 17','June 18','June 19','June 20','June 21','June 22','June 23','June 24','June 25','June 26','June 27','June 28','June 29','June 30','July 1','July 2','July 3','July 4','July 5','July 6','July 7','July 8','July 9','July 10','July 11','July 12','July 13','July 14','July 15','July 16','July 17','July 18','July 19','July 20','July 21','July 22','July 23','July 24'],
datasets: [{
name: 'Cumulative Cases',
values: [2,0,0,0,0,0,0,1,0,1,0,7,1,1,1,8,0,11,2,9,8,21,17,28,24,20,38,39,36,21,10,49,45,44,52,74,31,29,43,28,39,58,30,47,50,31,28,79,39,54,55,33,42,39,41,52,25,30,37,26,30,35,42,64,46,25,35,45,56,45,64,34,34,32,40,65,56,64,55,37,61,51,70,81,76,64,71,61,56,52,106,108,104,33,57,82,71,67,68,63,71,32,70,65,98,52,72,87,66,85,90,47,164,123,180,119,85,66,122,65,155,191,129,144,175,224,234,240,128,99,141,131,215,228,198,152,126,201,92,137,286,139,236,238,153,170,106,61],
chartType: 'line'
}]
}
const chart = new frappe.Chart('#chart', {
data: data,
type: 'line',
height: 250,
animate: false,
barOptions: {
spaceRatio: 0.25
},
colors: ['#800000'],
tooltipOptions: {
formatTooltipY: d => d.toLocaleString()
},
axisOptions: {
xAxisMode: 'tick',
xIsSeries: true
},
lineOptions: {
hideDots: true,
regionFill: true
}
})
<script src="https://cdn.jsdelivr.net/npm/frappe-charts#1.5.2/dist/frappe-charts.min.iife.min.js"></script>
<div id="chart"></div>
I've tried using the formatter callback function to return only every 10th value, but things get all out of position and the tooltips don't work. I get similar problems returning an empty string or a space for the values I wish to exclude (but still include in the tooltip).
What I do is calculate the ratio between the area's width and the number of ticks, and if that ratio is above a certain number, I add a class name to the chart or its wrapper, and there I write:
.apexcharts-xaxis-label{
display: none;
&:nth-child(5n){ display:revert; }
}
So every 5th label is shown and the rest are hidden.
You can also set up a ResizeObserver to add or remove the special class (a sketch follows the config below).
This requires the following config to be given to the chart:
xaxis: {
labels: {
rotate: 0, // no need to rotate since hiding labels gives plenty of room
hideOverlappingLabels: false // all labels must be rendered
}
}
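For example, a minimal ResizeObserver sketch along those lines (the sparse-labels class name, the #chart selector, and the pixels-per-tick threshold are my own illustrative choices, and the CSS rule above would then be scoped under .sparse-labels):
const chartEl = document.querySelector('#chart');
const tickCount = options.labels.length;  // number of x-axis categories (reusing the options object from the question)
const MIN_PX_PER_TICK = 15;               // illustrative threshold

const observer = new ResizeObserver(entries => {
  for (const entry of entries) {
    // Thin out the labels when there is not enough horizontal room per tick
    const sparse = entry.contentRect.width / tickCount < MIN_PX_PER_TICK;
    chartEl.classList.toggle('sparse-labels', sparse);
  }
});
observer.observe(chartEl);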
You can try 2 things.
xaxis: {
type: 'datetime',
}
You can convert the x-axis to datetime, and the labels will align as shown below.
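Since the labels in the question are strings like 'February 28' with no year, they would first have to be turned into real dates for a datetime axis to work. A rough sketch of that conversion (the year 2020 is an assumption on my part):
// Convert the 'February 28' style labels into timestamps (assuming they are all from 2020)
const year = 2020;
const datetimeLabels = options.labels.map(label => new Date(`${label}, ${year}`).getTime());

// ...then use them with a datetime x-axis
const datetimeOptions = Object.assign({}, options, {
  labels: datetimeLabels,
  xaxis: { type: 'datetime', tooltip: { enabled: false } }
});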
Or
You can stop rotation of the x-axis labels using
xaxis: {
labels: {
rotate: 0
}
}
which produces the following result.
Vsync's answer did not work for me; it needed a little modification:
.apexcharts-xaxis-texts-g text[id^='SvgjsText'] {
display: none;
}
.apexcharts-xaxis-texts-g text[id^='SvgjsText']:nth-of-type(5n) {
display: revert;
}
labels: ['',this.props.itemNames], //"(labels: [the label , the label below])"

Caffe CNN Slice layer: 2nd Slice layer produces unknown bottom blob

I have two Slice layers (see the proto file below). The first one seems to be working well, whereas the second one's bottom gives an "unknown bottom blob" error, as shown below.
In fact, I am not sure whether the error is related to Slice or Flatten.
Please note that the second Slice layer is not even printed in the log.
This is the proto file:
layer {
name: "data"
type: "HDF5Data"
top: "data"
top: "label_b4_noise"
include {
phase: TEST
}
hdf5_data_param {
source: "data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_val_list.txt"
batch_size: 25
shuffle: true
}
}
layer {
name: "data"
type: "HDF5Data"
top: "data"
top: "label_b4_noise"
include {
phase: TRAIN
}
hdf5_data_param {
source: "data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_train_list.txt"
batch_size: 25
shuffle: true
}
}
layer {
name: "slic0"
type: "Slice"
bottom: "data"
top: "data1"
top: "data2"
slice_param {
axis: 1
slice_point: 1
}
}
layer {
name: "conv_u0d-score_New"
type: "Convolution"
bottom: "data1"
top: "conv_last"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1
pad: 0
kernel_size: 1
weight_filler {
type: "msra"
}
}
}
layer {
name: "flat"
type: "Flatten"
bottom: "conv_last"
top: "ampl"
}
layer {
name: "slic1"
type: "Slice"
bottom: "label_b4_noise"
top: "label_b4_noise1"
top: "label_b4_noise2"
slice_param {
axis: 1
slice_point: 1
}
}
layer {
name: "flatdata"
type: "Flatten"
bottom: "label_b4_noise1"
top: "flatdata"
}
layer {
name: "loss"
type: "EuclideanLoss"
bottom: "ampl"
bottom: "flatdata"
top: "loss"
softmax_param {engine: CAFFE}
}
This is the log file:
GL ----------------------------------------------------------------
res/4removal_nAmp3nData2_2e5/unet_bs10/unet data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s .
res/4removal_nAmp3nData2_2e5/unet_bs10/unet data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s .
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1018 11:40:22.930601 104201 upgrade_proto.cpp:67] Attempting to upgrade input file specified using deprecated input fields: res/4removal_nAmp3nData2_2e5/unet_bs10/unet_tmp/unet_deploy.txt
I1018 11:40:22.930654 104201 upgrade_proto.cpp:70] Successfully upgraded file specified using deprecated input fields.
W1018 11:40:22.930658 104201 upgrade_proto.cpp:72] Note that future Caffe releases will only support input layers and not input fields.
I1018 11:40:23.237383 104201 net.cpp:51] Initializing net from parameters:
name: "unet"
state {
phase: TEST
level: 0
}
layer {
name: "input"
type: "Input"
top: "data"
input_param {
shape {
dim: 1
dim: 2
dim: 1
dim: 2048
}
}
}
layer {
name: "slic0"
type: "Slice"
bottom: "data"
top: "data1"
top: "data2"
slice_param {
slice_point: 1
axis: 1
}
}
layer {
name: "conv_u0d-score_New"
type: "Convolution"
bottom: "data1"
top: "conv_last"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1
pad: 0
kernel_size: 1
weight_filler {
type: "msra"
}
}
}
layer {
name: "flat"
type: "Flatten"
bottom: "conv_last"
top: "ampl"
}
layer {
name: "flatdata"
type: "Flatten"
bottom: "label_b4_noise1"
top: "flatdata"
}
F1018 11:40:23.237546 104201 insert_splits.cpp:29] Unknown bottom blob 'label_b4_noise1' (layer 'flatdata', bottom index 0)
*** Check failure stack trace: ***
/pbs/home/n/nhatami/sps/spectro/trainAndTest_4removal: line 101: 104201 Aborted $pydir/dumpLayersSize.py ${tmp_root}_deploy.txt ${oroot}
/pbs/home/n/nhatami/sps/spectro/trainAndTest_4removal: line 101: 104201 Aborted $pydir/dumpLayersSize.py ${tmp_root}_deploy.txt ${oroot}
Thu Oct 18 11:40:23 CEST 2018
/usr/bin/time -v caffe -gpu 0 --log_dir=res/4removal_nAmp3nData2_2e5/unet_bs10/unet_tmp train -solver res/4removal_nAmp3nData2_2e5/unet_bs10/unet_tmp/unet_solver.txt
Thu Oct 18 11:41:26 CEST 2018
/pbs/home/n/nhatami/sps/spectro/trainAndTest_4removal: line 206: gnuplot: command not found
/pbs/home/n/nhatami/sps/spectro/trainAndTest_4removal: line 225: gnuplot: command not found
/usr/bin/time -v -o res/4removal_nAmp3nData2_2e5/unet_bs10/unet_time_test.txt python /pbs/home/n/nhatami/sps/spectro/python/test_4removal.py -eg hist -l label -mf=data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_met.txt res/4removal_nAmp3nData2_2e5/unet_bs10/unet_deploy.txt res/4removal_nAmp3nData2_2e5/unet_bs10/unet.caffemodel data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_test.h5 res/4removal_nAmp3nData2_2e5/unet_bs10/restest/unet_test
WARNING: Logging before InitGoogleLogging() is written to STDERR
W1018 11:41:42.595289 105177 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W1018 11:41:42.595335 105177 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W1018 11:41:42.595338 105177 _caffe.cpp:142] Net('res/4removal_nAmp3nData2_2e5/unet_bs10/unet_deploy.txt', 1, weights='res/4removal_nAmp3nData2_2e5/unet_bs10/unet.caffemodel')
I1018 11:41:42.597472 105177 upgrade_proto.cpp:67] Attempting to upgrade input file specified using deprecated input fields: res/4removal_nAmp3nData2_2e5/unet_bs10/unet_deploy.txt
I1018 11:41:42.597497 105177 upgrade_proto.cpp:70] Successfully upgraded file specified using deprecated input fields.
W1018 11:41:42.597501 105177 upgrade_proto.cpp:72] Note that future Caffe releases will only support input layers and not input fields.
I1018 11:41:42.597535 105177 net.cpp:51] Initializing net from parameters:
name: "unet"
state {
phase: TEST
level: 0
}
layer {
name: "input"
type: "Input"
top: "data"
input_param {
shape {
dim: 1
dim: 2
dim: 1
dim: 2048
}
}
}
layer {
name: "slic0"
type: "Slice"
bottom: "data"
top: "data1"
top: "data2"
slice_param {
slice_point: 1
axis: 1
}
}
layer {
name: "conv_u0d-score_New"
type: "Convolution"
bottom: "data1"
top: "conv_last"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1
pad: 0
kernel_size: 1
weight_filler {
type: "msra"
}
}
}
layer {
name: "flat"
type: "Flatten"
bottom: "conv_last"
top: "ampl"
}
layer {
name: "flatdata"
type: "Flatten"
bottom: "label_b4_noise1"
top: "flatdata"
}
F1018 11:41:42.597617 105177 insert_splits.cpp:29] Unknown bottom blob 'label_b4_noise1' (layer 'flatdata', bottom index 0)
*** Check failure stack trace: ***
('res/4removal_nAmp3nData2_2e5/unet_bs10/unet_deploy.txt', 'res/4removal_nAmp3nData2_2e5/unet_bs10/unet.caffemodel', 'data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_test.h5', 'data/4removal_nAmp3nData2_2e5/2048_2e5_0.01_s_met.txt')
Here is the h5disp output:
Dataset 'label_b4_noise'
Size: 2048x1x2x10000
MaxSize: 2048x1x2xInf
Datatype: H5T_IEEE_F32LE (single)
ChunkSize: 2048x1x2x100
Filters: none
FillValue: 0.000000
Dataset 'data'
Size: 2048x1x2x10000
MaxSize: 2048x1x2xInf
Datatype: H5T_IEEE_F32LE (single)
ChunkSize: 2048x1x2x100
Filters: none
FillValue: 0.000000

Draw d3-axis without direct DOM manipulation

Is there a way to use d3-axis without directly manipulating the DOM?
D3.js complements Vue.js nicely when used for its helper functions (especially d3-scale).
I'm currently using a simple Vue template to generate an SVG.
To generate the axis, I first create a <g ref="axisX"> element and then call d3.select(this.$refs.axisX).call(d3.axisBottom(this.scale.x)).
This works well, but it conflicts with Vue.js's scoped styles, so I would like to avoid using d3.select and the direct DOM modification it implies.
Is there a way to access d3-axis without calling it from a DOM element? It would be useful to have access to its path generation function independently instead of via the DOM.
Here is a sample CodePen: https://codepen.io/thibautg/pen/BYRBXW
This is a situation that calls for a custom directive. Custom directives allow you to manipulate the DOM within the element they are attached to.
In this case, I created a directive that takes an argument for which axis to draw and a value, which is your scale computed property. Based on whether the axis is x or y, it calls axisBottom or axisLeft with scale[axis].
No more watching: the directive will be called any time anything updates. You could put in a check to see whether scale in particular had changed from its previous value, if you wanted (a sketch of that check follows the code below).
new Vue({
el: "#app",
data() {
return {
width: 600,
height: 400,
margin: {
top: 20,
right: 20,
bottom: 20,
left: 20
},
items: [
{ name: "a", val: 10 },
{ name: "b", val: 8 },
{ name: "c", val: 1 },
{ name: "d", val: 5 },
{ name: "e", val: 6 },
{ name: "f", val: 3 }
]
};
},
computed: {
outsideWidth() {
return this.width + this.margin.left + this.margin.right;
},
outsideHeight() {
return this.height + this.margin.top + this.margin.bottom;
},
scale() {
const x = d3
.scaleBand()
.domain(this.items.map(x => x.name))
.rangeRound([0, this.width])
.padding(0.15);
const y = d3
.scaleLinear()
.domain([0, Math.max(...this.items.map(x => x.val))])
.rangeRound([this.height, 0]);
return { x, y };
}
},
directives: {
axis(el, binding) {
const axis = binding.arg;
const axisMethod = { x: "axisBottom", y: "axisLeft" }[axis];
const methodArg = binding.value[axis];
d3.select(el).call(d3[axisMethod](methodArg));
}
}
});
rect.bar {
fill: steelblue;
}
<script src="//unpkg.com/vue#2"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/d3/4.11.0/d3.min.js"></script>
<div id="app">
<svg :width="outsideWidth"
:height="outsideHeight">
<g :transform="`translate(${margin.left},${margin.top})`">
<g class="bars">
<template v-for="item in items">
<rect class="bar"
:x="scale.x(item.name)"
:y="scale.y(item.val)"
:width="scale.x.bandwidth()"
:height="height - scale.y(item.val)"
/>
</template>
<g v-axis:x="scale" :transform="`translate(0,${height})`"></g>
<g v-axis:y="scale"></g>
</g>
</g>
</svg>
</div>
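As a sketch of that optional check, here is a drop-in replacement for the directives block above (my own addition; it relies on Vue 2 passing binding.oldValue to the directive's update hook, and on scale being a cached computed property, so reference equality is enough to detect a change):
directives: {
  axis(el, binding) {
    // Skip the redraw when the bound scale object has not changed.
    // binding.oldValue is undefined on the initial bind, so the first draw still happens.
    if (binding.value === binding.oldValue) return;
    const axis = binding.arg;
    const axisMethod = { x: "axisBottom", y: "axisLeft" }[axis];
    d3.select(el).call(d3[axisMethod](binding.value[axis]));
  }
}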

Reading encoded image data from lmdb database in caffe

I am relatively new to using caffe and am trying to create minimal working examples that I can (later) tweak. I had no difficulty using caffe's examples with MNIST data. I downloaded image-net data (ILSVRC12) and used caffe's tool to convert it to an lmdb database using:
$CAFFE_ROOT/build/install/bin/convert_imageset -shuffle -encoded=true top_level_data_dir/ fileNames.txt lmdb_name
to create an lmdb containing encoded (JPEG) image data. The reason for this is that the encoded lmdb is about 64 GB, versus about 240 GB unencoded.
My .prototxt file that describes the net is minimal (a pair of inner product layers, mostly borrowed from the MNIST example--not going for accuracy here, I just want something to work).
name: "example"
layer {
name: "imagenet"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
}
data_param {
source: "train-lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "imagenet"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
scale: 0.00390625
}
data_param {
source: "test-lmdb"
batch_size: 100
backend: LMDB
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "data"
top: "ip1"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 1000
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2"
param {
lr_mult: 1
}
param {
lr_mult: 2
}
inner_product_param {
num_output: 1000
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}
When train-lmdb is unencoded, this .prototxt file works fine (accuracy is abysmal, but caffe does not crash). However, if train-lmdb is encoded then I get the following error:
data_transformer.cpp:239] Check failed: channels == img_channels (3 vs. 1)
Question: Is there some "flag" I must set in the .prototxt file to indicate that train-lmdb contains encoded images? (The same flag would presumably also have to be given for the testing data layer, test-lmdb.)
A little research:
Poking around with Google, I found a resolved issue which seemed promising. However, setting 'force_encoded_color' to true did not resolve my problem.
I also found this answer very helpful for creating the lmdb (specifically, for the directions on enabling the encoding); however, no mention was made of what should be done so that caffe is aware that the images are encoded.
The error message you got:
data_transformer.cpp:239] Check failed: channels == img_channels (3 vs. 1)
means that caffe's data transformer is expecting input with 3 channels (i.e., a color image), but is getting an image with only 1 channel (i.e., a grayscale image).
Looking at caffe.proto, it seems like you should set the following parameters:
layer {
name: "imagenet"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
force_color: true ## try this
}
data_param {
source: "train-lmdb"
batch_size: 100
backend: LMDB
force_encoded_color: true ## cannot hurt...
}
}

Rickshaw Stacked Multi-graph

I am working with this multi-graph Dashing widget. It works correctly, but it uses only two data series and I want to add one more. I have modified the .coffee file to
class Dashing.Mgraph extends Dashing.Widget

  @accessor 'current', ->
    return @get('displayedValue') if @get('displayedValue')
    points = @get('points')
    if points
      points[0][points[0].length - 1].y + ' / ' + points[1][points[1].length - 1].y ' / ' + points[2][points[2].length - 1].y

  ready: ->
    container = $(@node).parent()
    # Gross hacks. Let's fix this.
    width = (Dashing.widget_base_dimensions[0] * container.data("sizex")) + Dashing.widget_margins[0] * 2 * (container.data("sizex") - 1)
    height = (Dashing.widget_base_dimensions[1] * container.data("sizey"))
    @graph = new Rickshaw.Graph(
      element: @node
      width: width
      height: height
      renderer: 'area'
      stroke: false
      series: [
        {
          color: "#fff",
          data: [{x:0, y:0}]
        },
        {
          color: "#222",
          data: [{x:0, y:0}]
        },
        {
          color: "#333",
          data: [{x:0, y:0}]
        }
      ]
    )
    @graph.series[0].data = @get('points') if @get('points')
    x_axis = new Rickshaw.Graph.Axis.Time(graph: @graph)
    y_axis = new Rickshaw.Graph.Axis.Y(graph: @graph, tickFormat: Rickshaw.Fixtures.Number.formatKMBT)
    @graph.renderer.unstack = true
    @graph.render()

  onData: (data) ->
    if @graph
      @graph.series[0].data = data.points[0]
      @graph.series[1].data = data.points[1]
      @graph.series[2].data = data.points[2]
      @graph.render()
However, when I run dashing, nothing is displayed (not even my other widgets); it is just a blank screen. Can anyone tell me what is going on here?
EDIT:
I have isolated the problem further: everything works until I add the third data series under series:; that seems to be what breaks it.
Here, try using the rickshawgraph widget: https://gist.github.com/jwalton/6614023
Here is my .rb
points1 = []
points2 = []
points3 = []
(1..10).each do |i|
points1 << { x: i, y: 10 }
points2 << { x: i, y: 10 }
points3 << { x: i, y: 10 }
end
last_x = points1.last[:x]
SCHEDULER.every '2s' do
points1.shift
points2.shift
points3.shift
last_x += 1
points1 << { x: last_x, y: rand(50) }
points2 << { x: last_x, y: rand(10) }
points3 << { x: last_x, y: rand(100) }
series = [
{
name: "set1",
data: points1
},
{
name: "set2",
data: points2
},
{
name: "set3",
data: points3
}
]
send_event('convergence', series: series)
end
Here is my .erb
<% content_for :title do %>My super sweet dashboard<% end %>
<div class="gridster">
<ul>
<li data-row="1" data-col="1" data-sizex="2" data-sizey="1">
<div data-id="convergence" data-view="Rickshawgraph" data-title="Convergence" data-unstack="true" data-stroke="true" data-default-alpha="0.5" data-color-scheme="compliment" data-legend="true" data-summary-method="last"></div>
</li>
</ul>
</div>