How to construct a Sobel filter for kernel initialization in the input layer for images of size 128x128x3? - tf.keras

This is my code for sobel filter:
def init_f(shape, dtype=None):
    sobel_x = tf.constant([[-5, -4, 0, 4, 5],
                           [-8, -10, 0, 10, 8],
                           [-10, -20, 0, 20, 10],
                           [-8, -10, 0, 10, 8],
                           [-5, -4, 0, 4, 5]])
    ker = np.zeros(shape, dtype)
    ker_shape = tf.shape(ker)
    kernel = tf.tile(sobel_x, ker_shape)  # Is this correct?
    return kernel

model.add(Conv2D(filters=30, kernel_size=(5,5), kernel_initializer=init_f, strides=(1,1), activation='relu'))
So far I have managed to do this, but it gives me this error:
Shape must be rank 2 but is rank 4 for 'conv2d_17/Tile' (op: 'Tile') with input shapes: [5,5], [4].
Tensorflow Version: 2.1.0

You're close, but the args to tile don't appear to be correct. That is why you're getting the error "Shape must be rank 2 but is rank 4 for...". Your sobel_x must be a rank 4 tensor, so you need to add two more dimensions. I used reshape in this example.
from tensorflow import keras
import tensorflow as tf
import numpy
def kernelInitializer(shape, dtype=None):
    print(shape)
    sobel_x = tf.constant(
        [
            [-5, -4, 0, 4, 5],
            [-8, -10, 0, 10, 8],
            [-10, -20, 0, 20, 10],
            [-8, -10, 0, 10, 8],
            [-5, -4, 0, 4, 5]
        ], dtype=dtype)
    # create the missing dims
    sobel_x = tf.reshape(sobel_x, (5, 5, 1, 1))
    print(tf.shape(sobel_x))
    # tile the last 2 axes to get the expected dims
    sobel_x = tf.tile(sobel_x, (1, 1, shape[-2], shape[-1]))
    print(tf.shape(sobel_x))
    return sobel_x
x1 = keras.layers.Input((128, 128, 3))
cvl = keras.layers.Conv2D(30, kernel_size=(5,5), kernel_initializer=kernelInitializer, strides=(2,2), activation='relu')
model = keras.Sequential();
model.add(x1)
model.add(cvl)
data = numpy.ones((1, 128, 128, 3))
data[:, 0:64, 0:64, :] = 0
pd = model.predict(data)
print(pd.shape)
d = pd[0, :, :, 0]
for row in d:
    for col in row:
        m = '0'
        if col != 0:
            m = 'X'
        print(m, end="")
    print("")
I looked at using expand_dims instead of reshape, but there didn't appear to be any advantage. broadcast_to seems ideal, but you still have to add the dimensions first, so I don't think it was better than tile.
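For reference, a minimal sketch of the expand_dims variant (sobel_x and shape are the names from kernelInitializer above; the result matches the reshape approach):

# Append the two missing axes one at a time instead of reshaping,
# then tile over the input-channel and filter dimensions as before.
sobel_x = tf.expand_dims(tf.expand_dims(sobel_x, -1), -1)  # (5, 5, 1, 1)
sobel_x = tf.tile(sobel_x, (1, 1, shape[-2], shape[-1]))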
Why 30 copies of the same filter, though? Are they going to be changed afterwards?

Related

LAPACK dpbsv returns 3 for positive definite matrix

I am trying to use the LAPACK banded symmetric matrix solver dpbsv. I am testing the matrix:
4,  2,  0,  0,  0
2,  4,  3,  0,  0
0,  3, 11,  7,  0
0,  0,  7, 11,  5
0,  0,  0,  5, 13
Mathematica tells me that this matrix is positive definite, with a determinant of 3684.
I am using swift and have constructed the array
var a: [Double] = [0, 2, 3, 7, 5,
                   4, 4, 11, 11, 13]
var b: [Double] = [1, 2, 3, 4, 5]
And I am calling dpbsv as
var uplo = Int8("U".utf8.first!) // set to 'U'
var n = __CLPK_integer(5)
var kd = __CLPK_integer(1)
var ldab = kd + 1
var nrhs = __CLPK_integer(1)
var ldb = __CLPK_integer(5)
var info: __CLPK_integer = 0
dpbsv_(&uplo,
       &n,
       &kd,
       &nrhs,
       &a,
       &ldab,
       &b,
       &ldb,
       &info)
if info != 0 {
    // here info is 3, indicating non-positive definite
    NSLog("error \(info)")
}
Any idea what the issue is here? Am I interpreting the parameters to dpbsv_ correctly? I've tried other matrices that Mathematica claims are pos-def with the same result.
So, apparently, what LAPACK documents as rows needs to be laid out column-by-column in Swift: LAPACK expects Fortran column-major band storage, while a flat Swift array reads most naturally as row-major. So, if the array is changed to
[0, 4,
 2, 4,
 3, 11,
 7, 11,
 5, 13]
everything works fine.
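If you want to sanity-check the band layout outside Swift, SciPy's solveh_banded accepts the same upper-band storage that dpbsv documents (row 0 holds the superdiagonal, left-padded with a zero; row 1 holds the main diagonal), so the 2x5 band from the question solves cleanly there. A cross-check sketch in Python, not part of the Swift fix:

import numpy as np
from scipy.linalg import solveh_banded

# Upper band storage for kd = 1: superdiagonal on top (padded with a
# leading 0), main diagonal below -- the 2x5 band from the question.
ab = np.array([[0.,  2.,  3.,  7.,  5.],
               [4.,  4., 11., 11., 13.]])
b = np.array([1., 2., 3., 4., 5.])
print(solveh_banded(ab, b))  # solves A x = b for the banded SPD matrix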

Efficient replacement of x < i values in sparse array

How would I replace values less than 4 with 0 in this array without triggering a SparseEfficiencyWarning and without reducing its sparsity?
from scipy import sparse
x = sparse.csr_matrix(
    [[0, 1, 2, 3, 4],
     [1, 2, 3, 4, 5],
     [0, 0, 0, 2, 5]])
x[x < 4] = 0
x.toarray() # verifies that this works
Note also that x starts with 11 stored elements, which rises to 15 stored elements after doing the masking.
Manipulate the data array directly
from scipy import sparse
x = sparse.csr_matrix(
    [[0, 1, 2, 3, 4],
     [1, 2, 3, 4, 5],
     [0, 0, 0, 2, 5]])
x.data[x.data < 4] = 0
>>> x.toarray()
array([[0, 0, 0, 0, 4],
       [0, 0, 0, 4, 5],
       [0, 0, 0, 0, 5]])
>>> x.data
array([0, 0, 0, 4, 0, 0, 0, 4, 5, 0, 5])
Note that the number of stored elements is unchanged, and explicit zeros remain in the data array until you run x.eliminate_zeros().
x.eliminate_zeros()
>>> x.data
array([4, 4, 5, 5])
If for some reason you don't want to use a boolean mask & fancy indexing in numpy, you can loop over the array with numba:
import numba
import numpy as np

@numba.jit(nopython=True)
def _set_array_less_than_to_zero(array, value):
    # Zero out, in place, every element smaller than value.
    for i in range(len(array)):
        if array[i] < value:
            array[i] = 0
This should also be faster than the numpy indexing by a fairly substantial degree.
array = np.arange(10)
_set_array_less_than_to_zero(array, 5)
>>> array
array([0, 0, 0, 0, 0, 5, 6, 7, 8, 9])
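For completeness, the same helper works directly on the sparse matrix's backing array (x here is the csr_matrix from the example above):

_set_array_less_than_to_zero(x.data, 4)  # mutate the stored values in place
x.eliminate_zeros()                      # then drop the explicit zeros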

Dimension out of range (expected to be in range of [-1, 0], but got 1) (pytorch)

I have a very simple feed-forward neural network (PyTorch)
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Net_1(nn.Module):
    def __init__(self):
        super(Net_1, self).__init__()
        self.fc1 = nn.Linear(5*5, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

net = Net_1()
and the input is this 5x5 numpy array
state = [[0, 0, 3, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 2, 1, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
state = torch.Tensor(state).view(-1)
net(state) throws the following error:
Dimension out of range (expected to be in range of [-1, 0], but got 1)
The problem occurs when F.log_softmax() is applied.
At the point when you call return F.log_softmax(x, dim=1), x is a 1-dimensional tensor with shape torch.Size([3]).
Dimension indexing in PyTorch starts at 0, so you cannot use dim=1 for a 1-dimensional tensor; you need dim=0.
Replace return F.log_softmax(x, dim=1) with return F.log_softmax(x, dim=0) and you'll be good to go.
In the future you can check tensor sizes by adding print(x.shape) in forward.
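Alternatively, if you'd rather keep dim=1 in the model, a small sketch is to feed a batched input instead (state below is the original 5x5 list from the question):

# An explicit batch dimension of 1 makes fc3 output shape (1, 3),
# so dim=1 is a valid axis for F.log_softmax.
state = torch.Tensor(state).view(1, -1)  # shape (1, 25)
out = net(state)                         # shape (1, 3)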
You are giving a 3-element 1d array to your log_softmax function.
When saying dim=1 you are telling it to apply softmax along an axis that doesn't exist.
Just set dim=0 for a 1d array.
More on this function and what that parameter means here.

Python quicksort only sorting first half

I'm taking Princeton's algorithms-divide-conquer course (3rd week), and I'm trying to implement quicksort.
Here's my current implementation with some tests ready to run:
import unittest

def quicksort(x):
    if len(x) <= 1:
        return x
    pivot = x[0]
    xLeft, xRight = partition(x)
    print(xLeft, xRight)
    quicksort(xLeft)
    quicksort(xRight)
    return x

def partition(x):
    j = 0
    print('partition', x)
    for i in range(0, len(x)):
        if x[i] < x[0]:
            n = x[j + 1]
            x[j + 1] = x[i]
            x[i] = n
            j += 1
    p = x[0]
    x[0] = x[j]
    x[j] = p
    return x[:j + 1], x[j + 1:]
class Test(unittest.TestCase):
    def test_partition_pivot_first(self):
        arrays = [
            [3, 1, 2, 5],
            [3, 8, 2, 5, 1, 4, 7, 6],
            [10, 100, 3, 4, 2, 101]
        ]
        expected = [
            [[2, 1, 3], [5]],
            [[1, 2, 3], [5, 8, 4, 7, 6]],
            [[2, 3, 4, 10], [100, 101]]
        ]
        for i in range(0, len(arrays)):
            xLeft, xRight = partition(arrays[i])
            self.assertEqual(xLeft, expected[i][0])
            self.assertEqual(xRight, expected[i][1])

    def test_quicksort(self):
        arrays = [
            [1, 2, 3, 4, 5, 6],
            [3, 5, 6, 10, 2, 4]
        ]
        expected = [
            [1, 2, 3, 4, 5, 6],
            [2, 3, 4, 5, 6, 10]
        ]
        for i in range(0, len(arrays)):
            arr = arrays[i]
            quicksort(arr)
            self.assertEqual(arr, expected[i])

if __name__ == "__main__":
    unittest.main()
so for array = [3, 5, 6, 10, 2, 4] I get [2, 3, 6, 10, 5, 4] as a result... I can't figure out what's wrong with my code. It partitions just fine, but the results are off...
Can anyone chip in? :) Thank you!
it's actually such a minor problem that you'll laugh
the problem resides in the quicksort function
the correct one is:
def quicksort(x):
    if len(x) <= 1:
        return x
    pivot = x[0]
    xLeft, xRight = partition(x)
    print(xLeft, xRight)
    xLeft = quicksort(xLeft)    # keep what the recursion returns
    xRight = quicksort(xRight)
    x = xLeft + xRight  # this one!
    return x
what happens is that Python creates new list objects for xLeft and xRight (they are slices), so they were never sorted in place; you also have to keep what the recursive calls return, otherwise the deeper levels sort copies that get thrown away
so this is one solution (which is not in place, so use it as x = quicksort(x))
the other one is to pass the list plus a start_index and end_index and do it in place (sketched below)
well done fella!
edit:
and actually, if you print xLeft and xRight you'll see the partitioning performed perfectly :)
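A minimal sketch of that in-place variant (the helper names are mine; it keeps the same first-element pivot scheme but recurses on index ranges instead of slices):

def quicksort_inplace(x, lo=0, hi=None):
    # Sort x[lo:hi+1] in place: no slicing, so no copies are made.
    if hi is None:
        hi = len(x) - 1
    if lo >= hi:
        return
    p = partition_inplace(x, lo, hi)
    quicksort_inplace(x, lo, p - 1)
    quicksort_inplace(x, p + 1, hi)

def partition_inplace(x, lo, hi):
    # Same scheme as partition() above, restricted to x[lo:hi+1];
    # returns the final index of the pivot x[lo].
    j = lo
    for i in range(lo + 1, hi + 1):
        if x[i] < x[lo]:
            j += 1
            x[i], x[j] = x[j], x[i]
    x[lo], x[j] = x[j], x[lo]
    return j

With this version the list is sorted in place, which is what the question's test expects.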

Derive Matlab value matrix from Matlab key matrix and lookup vector

I have a Matlab array of integer keys in the range 1:1:7, e.g.
[3, 1, 4, 5, 6]
I also have a size 7 vector containing an associated value for each integer key, e.g.
vals = [10, 20, 30, 4000, 50, 60, 70]
what is the most efficient way to create a matrix of the values using the keys as indices, e.g. a matrix
[30, 10, 4000, 50, 60]
(in reality the key object is 6D). Must I loop?
For the case of a 1D matrix a general approach could be:
keys=[3, 1, 4, 5, 6];
vals = [10, 20, 30, 4000, 50, 60, 70]
m=vals(keys)
With this approach you use the values stored in the keys array as indices into the vals array. You can find more information about array indexing here.
In a more general case in which keys has n rows (3 in the following example):
keys = [3, 1, 4, 5, 6;
        1, 3, 2, 4, 6;
        7, 6, 5, 4, 3];
vals = [10, 20, 30, 4000, 50, 60, 70]
m=reshape(vals(keys(:)),size(keys))
Hope this helps.
Qapla'
I think this should work, if I got the question.
inds = [3, 1, 4, 5, 6];
vals = inds;
vals(vals==1) = 10;
vals(vals==2) = 20;
vals(vals==3) = 30;
vals(vals==4) = 4000;
vals(vals==5) = 50;
vals(vals==6) = 60;
Is it like that?