Keras back propagation - neural-network

Suppose I have defined a network using Keras as follows:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(10, input_shape=(10,), activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(9, activation='sigmoid'))
It has some weights:
[array([[ 0.33494413, -0.34308964, 0.12796348, 0.17187083, -0.40254939,
-0.06909397, -0.30374748, 0.14217842, 0.41163749, -0.15252003],
[-0.07053435, 0.53712451, -0.43015254, -0.28653857, 0.53299475, ...
When I give it some input:
[[ 0. 0.5 0. 0.5 1. 1. 0. 0.5 0.5 0.5]]
It produces some output:
[0.5476531982421875, 0.5172237753868103, 0.5247090458869934, 0.49434927105903625, 0.4599153697490692, 0.44612908363342285, 0.4727349579334259, 0.5116984844207764, 0.49565717577934265]
Whereas the desired output is:
[0.6776225034927386, 0.0, 0.5247090458869934, 0.0, 0.0, 0.0, 0.4727349579334259, 0.5116984844207764, 0.49565717577934265]
Making the Error Value:
[0.12996930525055106, -0.5172237753868103, 0.0, -0.49434927105903625, -0.4599153697490692, -0.44612908363342285, 0.0, 0.0, 0.0]
I can then calculate the evaluated gradients as follows:
from keras import backend as K
import tensorflow as tf

outputTensor = model.output
listOfVariableTensors = model.trainable_weights
gradients = K.gradients(outputTensor, listOfVariableTensors)
trainingInputs = inputs
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())  # initialize_all_variables() is deprecated
evaluated_gradients = sess.run(gradients, feed_dict={model.input: trainingInputs})
Which yields the evaluated gradients:
[array([[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0.01015381, 0. , 0. , 0.03375177, -0.05576257,
0.03318337, -0.02608909, -0.06644543, -0.03461133, 0. ],
[ 0.02030762, 0. , 0. , 0.06750354, -0.11152515,
0.06636675, -0.05217818, -0.13289087, -0.06922265, 0. ],...
I would like to use these gradients to adjust my model, but I am losing track of the math & theory of backpropagation. Am I on the right track?
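For training, you would typically take gradients of a loss (for example, mean squared error between the output and the target) rather than of the raw model output; the plain gradient-descent update is then just a scaled subtraction. A minimal NumPy sketch of that update step, using hypothetical toy values (not the actual weights above):

```python
import numpy as np

# Hypothetical toy values: one weight matrix and its evaluated loss gradient.
weights = np.array([[0.33, -0.34],
                    [0.12,  0.17]])
grad = np.array([[0.01,  0.00],
                 [0.02, -0.05]])
learning_rate = 0.1

# One gradient-descent step: move the weights against the gradient of the loss.
weights -= learning_rate * grad
```

In Keras, compiling the model with an optimizer and calling fit() performs exactly this loop for you, so the manual session code above is only needed if you want to inspect or customize the updates.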

DICOM to Nifti metadata not transferring

I am trying to take a number of DICOM stacks and convert them to Nifti files. When I do the conversion and open the new Nifti file in a 3D viewer, the volume is smashed together in the z direction: the Nifti files do not know the spacing between slices. To my understanding, imageio.volread() does not read the metadata. I tried using pydicom.filereader.dcmread(), but that only reads one file. How can I copy the metadata from the DICOM stack to the Nifti file when converting formats?
import nibabel as nib
import imageio
import numpy as np
import os, sys

DIR = '\\all scans\\'
savefold = '\\nifti\\'

for root, dirs, files in os.walk(DIR):
    for directory in dirs:
        vol = imageio.volread(DIR + directory).astype(int)
        vol = np.transpose(vol, (2, 1, 0)).astype(int)
        niftisave = nib.Nifti1Image(vol, affine=np.eye(4))
        nib.save(niftisave, os.path.join(savefold + directory) + '.nii')
UPDATE:
I am using Nifti1Header and setting my voxel spacing but the voxel spacing is still 1x1x1 when I save and open the file in other programs. When I print the header right before saving the pixdim shows [1. 0.09 0.09 0.09 1. 1. 1. 1. ].
header = nib.Nifti1Header()
OM = np.eye(4)
header.set_data_shape((224, 352, 224))
voxel_spacing = (.09, .09, .09)
header.set_zooms(voxel_spacing)
header.set_sform(OM)
header.set_dim_info(slice=2)

vol = imageio.volread(source)
ROI_save = nib.Nifti1Image(vol, OM, header=header)
print(ROI_save.header)
HEADER:
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr : 348
data_type : b''
db_name : b''
extents : 0
session_error : 0
regular : b''
dim_info : 48
dim : [ 3 224 352 224 1 1 1 1]
intent_p1 : 0.0
intent_p2 : 0.0
intent_p3 : 0.0
intent_code : none
datatype : float32
bitpix : 32
slice_start : 0
pixdim : [1. 0.09 0.09 0.09 1. 1. 1. 1. ]
vox_offset : 0.0
scl_slope : nan
scl_inter : nan
slice_end : 0
slice_code : unknown
xyzt_units : 0
cal_max : 0.0
cal_min : 0.0
slice_duration : 0.0
toffset : 0.0
glmax : 0
glmin : 0
descrip : b''
aux_file : b''
qform_code : unknown
sform_code : aligned
quatern_b : 0.0
quatern_c : 0.0
quatern_d : 0.0
qoffset_x : 0.0
qoffset_y : 0.0
qoffset_z : 0.0
srow_x : [1. 0. 0. 0.]
srow_y : [0. 1. 0. 0.]
srow_z : [0. 0. 1. 0.]
intent_name : b''
magic : b'n+1'
AFFINE:
np.eye(4)
--->[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]]
DESIRED AFFINE:
[[-0.09 0. 0. -0. ]
[ 0. -0.09 0. -0. ]
[ 0. 0. 0.09 0. ]
[ 0. 0. 0. 1. ]]
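For reference, the desired affine above is just a diagonal matrix built from the voxel spacing, with the signs encoding the axis orientation. A NumPy sketch:

```python
import numpy as np

# Desired affine: 0.09 mm isotropic spacing, with the x and y axes flipped
# (the negative signs come from the desired orientation, not the spacing).
voxel_spacing = (0.09, 0.09, 0.09)
affine = np.diag([-voxel_spacing[0], -voxel_spacing[1], voxel_spacing[2], 1.0])
```

Passing this affine to Nifti1Image (instead of np.eye(4)) is what actually carries the spacing into the saved file; most viewers read the affine, not just the header pixdim.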
You need to specify the pixel spacing and array shape directly. Suppose you have a 512x512x128 3D volume with 0.5 x 0.5 x 2.5 mm voxel spacing and an identity orientation matrix; see the example below:
import numpy as np
from nibabel import Nifti1Header, Nifti1Image

img_array = np.zeros((512, 512, 128))
voxel_spacing = [0.5, 0.5, 2.5, 1]
OM = np.eye(4)
OM = OM * np.diag(voxel_spacing)  # scale the affine by the voxel spacing
header = Nifti1Header()
header.set_data_shape((512, 512, 128))
header.set_dim_info(slice=2)
header.set_xyzt_units('mm')
nifti = Nifti1Image(img_array, OM, header=header)
Update: save the file using nibabel.save (or img.to_filename) and open it in MRIcron (https://people.cas.sc.edu/rorden/mricron/index.html) to verify the result.
If you use SimpleITK to read the Dicom series, it will properly read the Dicom metadata.
Here's an example of how to read a Dicom image series:
https://simpleitk.readthedocs.io/en/master/link_DicomSeriesReader_docs.html
If the output file name has a '.nii' suffix, it will write out the volume as a Nifti file.

Calculation of node betweenness from a weighted adjacency matrix

I have an adjacency matrix whose non-zero elements indicate the weights of the links. The weights are positive decimals below 1. For example, consider the matrix below as the weighted adjacency matrix a:
array([[0.  , 0.93, 0.84, 0.76],
       [0.93, 0.  , 0.93, 0.85],
       [0.84, 0.93, 0.  , 0.92],
       [0.76, 0.85, 0.92, 0.  ]])
I would like to obtain the node betweenness centrality of all the nodes in it. Note that my actual adjacency matrix is 2000 x 2000. I am new to networkx, so any help will be highly appreciated.
You can try this
import networkx as nx
import numpy as np

A = np.matrix([[0.  , 0.93, 0.84, 0.76, 0.64],
               [0.93, 0.  , 0.93, 0.85, 0.  ],
               [0.84, 0.93, 0.  , 0.92, 0.32],
               [0.76, 0.85, 0.92, 0.  , 0.55],
               [0.64, 0.  , 0.32, 0.55, 0.  ]])
G = nx.from_numpy_matrix(A)
betweeness_dict = nx.centrality.betweenness_centrality(G, weight='weight')
The betweeness_dict will contain the betweenness centrality of all the nodes:
{0: 0.0, 1: 0.0, 2: 0.13888888888888887, 3: 0.0, 4: 0.13888888888888887}
You can read more in the documentation at this link.

How to calculate mean of function in a gaussian fit?

I'm using the curve fitting app in MATLAB. If I understand correctly, the "b1" coefficient in the left box is the mean of the function, i.e. the x point where y = 50%. My x data is [-0.8 -0.7 -0.5 0 0.3 0.5 0.7], so why is this number so big (631) in this example?
General model Gauss1:
f(x) = a1*exp(-((x-b1)/c1)^2)
Coefficients (with 95% confidence bounds):
a1 = 3.862e+258 (-Inf, Inf)
b1 = 631.2 (-1.117e+06, 1.119e+06)
c1 = 25.83 (-2.287e+04, 2.292e+04)
Your data looks like a CDF, not a PDF. You can use this code for your solution:
xi = [-0.8, -0.7, -0.5, 0.0, 0.3, 0.5, 0.7];
yi = [ 0.2,  0.0,  0.2, 0.2, 0.5, 1.0, 1.0];
fun = @(v) normcdf(xi, v(1), v(2)) - yi;  % residuals against a normal CDF
v = lsqnonlin(fun, [1, 1]);               % initial guess [1, 1]
mu = v(1); sigma = v(2);
x = linspace(-1.5, 1.5, 100);
y = normcdf(x, mu, sigma);
figure(1); clf; plot(xi, yi, 'x', x, y);
annotation('textbox', [0.2, 0.7, 0.1, 0.1], 'String', sprintf('mu=%f\nsigma=%f', mu, sigma), 'FitBoxToText', 'on', 'FontSize', 16);
You will get: mu = 0.24537, sigma = 0.213.
And if you still want to fit a PDF, just change 'normcdf' to 'normpdf' in 'fun' (and in 'y').
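A rough Python equivalent of the same fit, sketched with scipy (an assumption; the MATLAB code above is the answer's actual method). The parameter values should come out close to the MATLAB result:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

xi = np.array([-0.8, -0.7, -0.5, 0.0, 0.3, 0.5, 0.7])
yi = np.array([ 0.2,  0.0,  0.2, 0.2, 0.5, 1.0, 1.0])

# Residuals of a normal CDF against the data, mirroring the lsqnonlin call.
def residuals(v):
    mu, sigma = v
    return norm.cdf(xi, mu, sigma) - yi

result = least_squares(residuals, x0=[1.0, 1.0])
mu, sigma = result.x  # roughly mu ~ 0.245, sigma ~ 0.213
```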

Program for specific sequence of Integers

I am solving the steady-state heat equation with a boundary condition that varies like 10,0,0,10,0,0,10,0,0,10,0,0,10... and so on, depending on the number of points I select.
I want to construct a matrix for these boundary conditions, but I am unable to specify the logic for the sequence in terms of the i-th element of a matrix.
I am using Mathematica for this; however, I need only the formula. For example, for odd numbers we can write 2n+1 and for even numbers 2n; something like this for the sequence 10,0,0,10,0,0,10,0,0,10,...
In MATLAB, it would be
M = zeros(1000, 1);
M(1:3:1000) = 10;
to make a 1000-element vector with that structure. 1:3:1000 is 1, 4, 7, ...
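The same idea in NumPy, if you prefer Python (0-based indexing, so the pattern starts at index 0):

```python
import numpy as np

M = np.zeros(1000)
M[0::3] = 10  # every third element, starting at the first
```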
Since you specifically want a mathematical formula let me suggest a method:
seq = PadRight[{}, 30, {10, 0, 0}];
func = FindSequenceFunction[seq]
10/3 (1 + Cos[2/3 \[Pi] (-1 + #1)] + Cos[4/3 \[Pi] (-1 + #1)]) &
Test it:
Array[func, 10]
{10, 0, 0, 10, 0, 0, 10, 0, 0, 10}
There are surely simpler programs to generate this sequence, such as:
Array[10 Boole[1 == Mod[#, 3]] &, 10]
{10, 0, 0, 10, 0, 0, 10, 0, 0, 10}
A way to do this in Mathematica:
Take[Flatten[ConstantArray[{10, 0, 0}, Ceiling[1000/3]], 1], 1000]
Another way
Table[Boole[Mod[i,3]==1]*10, {i,1,1000}]
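As a cross-check, the same 1-based rule (10 when i mod 3 == 1, else 0) written in Python:

```python
# First ten terms of the sequence, using 1-based indices as in the formula.
seq = [10 if i % 3 == 1 else 0 for i in range(1, 11)]
print(seq)  # [10, 0, 0, 10, 0, 0, 10, 0, 0, 10]
```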

mathematic range - simple question

I have a float number which represents a percentage (0.0 to 100.0)%:
float represent = 50.00; // fifty percent, or half.
As an example, convert this number to a range from -2 to 2, thus:
represent=0 will be represented as -2
represent=50 will be represented as 0
represent=100 will be represented as 2
EDIT:
Good simple answer from Orbling and friends. Still, I am looking for something along the lines of: map(value, fromLow, fromHigh, toLow, toHigh)
Is there a function in Objective-C to map such a range for more complex values?
represent = (represent / 25.0f) - 2.0f;
This should do the trick:
float map(float v, float r1, float r2, float t1, float t2)
{
    float norm = (v - r1) / (r2 - r1);
    return t1 * (1 - norm) + t2 * norm;
}
Explanation:
norm is v scaled to a value between 0 and 1, relative to the range r1..r2.
The next line computes the point between t1 and t2, using norm as the interpolation factor.
Example usage:
map (0, 0, 100, -2, 2) // 0 mapped to -2..2 in the range 0..100
-2.0
map (50, 0, 100, -2, 2) // 50 mapped to -2..2 in the range 0..100
0
map (100, 0, 100, -2, 2) // 100 mapped to -2..2 in the range 0..100
2.0
map (-90, -100, 20, -4, 2) // -90 mapped to -4..2 in the range -100..20
-3.5
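The interpolation formula above translates directly to Python, which is handy for quick experimentation (map_range is a hypothetical name to avoid shadowing the built-in map):

```python
def map_range(v, r1, r2, t1, t2):
    """Linearly map v from the range [r1, r2] to [t1, t2]."""
    norm = (v - r1) / (r2 - r1)
    return t1 * (1 - norm) + t2 * norm

print(map_range(0, 0, 100, -2, 2))    # -2.0
print(map_range(50, 0, 100, -2, 2))   # 0.0
print(map_range(100, 0, 100, -2, 2))  # 2.0
```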