How do I find the inverse kinematics for a 5 DOF serial manipulator using ikine() in MATLAB? - matlab

I am trying to find the inverse kinematics for a 5 DOF robot. These are my parameters.
L(1) = Link([0, 0, 0, pi/2]);
L(2) = Link([0, 0, 37.5, 0]);
L(3) = Link([0, 0, 37.5, 0]);
L(4) = Link([0, 0, 0, pi/2]);
L(5) = Link([0, 30, 0, 0]);
Robot.tool = transl(0,150,0)*trotx(pi/2);
I am trying to find the inverse kinematics for this position
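Since the target pose itself is not shown above, here is only a minimal sketch of how ikine() is typically called for an under-actuated arm, assuming Peter Corke's Robotics Toolbox (the source of Link, SerialLink, and ikine); Tgoal, the robot name, and the mask are placeholders to replace with your own pose and task constraints:
% Minimal sketch, assuming Peter Corke's Robotics Toolbox (RTB 10.x syntax).
L(1) = Link([0, 0, 0, pi/2]);
L(2) = Link([0, 0, 37.5, 0]);
L(3) = Link([0, 0, 37.5, 0]);
L(4) = Link([0, 0, 0, pi/2]);
L(5) = Link([0, 30, 0, 0]);
Robot = SerialLink(L, 'name', 'arm5');
Robot.tool = transl(0, 150, 0) * trotx(pi/2);

Tgoal = transl(40, 60, 30);   % placeholder target pose -- substitute your own
% With only 5 joints the arm cannot reach an arbitrary 6 DOF pose, so ikine
% needs a mask selecting which Cartesian DOF to solve for; here rotation
% about one axis is ignored (adjust the mask to suit your wrist):
q = Robot.ikine(Tgoal, 'mask', [1 1 1 1 1 0]);
% In older Robotics Toolbox releases the equivalent call is
% q = Robot.ikine(Tgoal, zeros(1,5), [1 1 1 1 1 0]);
You can then check how close the solution lands to the target with Robot.fkine(q).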

Related

Dimension out of range (expected to be in range of [-1, 0], but got 1) (pytorch)

I have a very simple feed-forward neural network (PyTorch):
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Net_1(nn.Module):
    def __init__(self):
        super(Net_1, self).__init__()
        self.fc1 = nn.Linear(5*5, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

net = Net_1()
and the input is this 5x5 numpy array
state = [[0, 0, 3, 0, 0],
[0, 0, 0, 0, 0],
[0, 2, 1, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]]
state = torch.Tensor(state).view(-1)
net(state) throws the following error
Dimension out of range (expected to be in range of [-1, 0], but got 1)
the problem is when F.log_softmax() is applied
At the point where you call return F.log_softmax(x, dim=1), x is a 1-dimensional tensor with shape torch.Size([3]).
Dimension indexing in PyTorch starts at 0, so a 1-dimensional tensor only has dim 0; dim=1 is out of range, and you need to use dim=0 instead.
Replace return F.log_softmax(x, dim=1) with return F.log_softmax(x, dim=0) and you'll be good to go.
In the future you can check tensor shapes by adding print(x.shape) inside forward.
You are giving a 3 element 1d array to your log_softmax function.
When saying dim=1 you are telling it to apply softmax to an axis that doesn't exist.
Just set dim=0 for a 1d array.
More on this function and what that parameter means here
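For completeness, a minimal sketch of the fix in context, using the model from the question. Softmaxing over the last axis (dim=-1) is my variant of the dim=0 fix suggested above; it keeps forward working for both the unbatched 1-D input from the question and a batched 2-D input:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net_1(nn.Module):
    def __init__(self):
        super(Net_1, self).__init__()
        self.fc1 = nn.Linear(5 * 5, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 3)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        # Apply log_softmax over the last axis so it works for both
        # unbatched (shape [3]) and batched (shape [N, 3]) outputs.
        return F.log_softmax(x, dim=-1)

net = Net_1()
state = torch.zeros(5, 5)
print(net(state.view(-1)).shape)     # unbatched input: torch.Size([3])
print(net(state.view(1, -1)).shape)  # batched input:   torch.Size([1, 3])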

How to smooth interpolation of a float array into a bigger array?

I'm stuck with interpolation in Swift. Can anyone help me with that?
I want to interpolate a float array (say [0, 0, 100, 25, 0, 0, 0, 25, 0, 0, 0]) into another array of some given size (for example 128). I found an article (Use Linear Interpolation to Construct New Data Points) that shows how to achieve this.
There are two approaches (you can see below how they perform):
Linear interpolation using vDSP_vgenp, and
smoother (but still not smooth enough for my purposes) interpolation using vDSP_vlint.
The problem is that neither technique meets my expectations, as illustrated in Screenshot 3. How can I make the interpolated distribution smoother? I want to see a cubic-like curve.
Initial Plot:
Linear Interpolation:
import Accelerate
let n = vDSP_Length(128)
let stride = vDSP_Stride(1)
let values: [Float] = [0, 0, 100, 25, 0, 0, 0, 25, 0, 0, 0]
let indices: [Float] = [0, 11, 23, 34, 46, 58, 69, 81, 93, 104, 116]
var result = [Float](repeating: 0, count: Int(n))
vDSP_vgenp(values, stride, indices, stride, &result, stride, n, vDSP_Length(values.count))
Smooth Interpolation:
import Accelerate
import AVFoundation
let n = vDSP_Length(1024)
let stride = vDSP_Stride(1)
let values: [Float] = [0, 0, 100, 25, 0, 0, 0, 25, 0, 0, 0]
let denominator = Float(n) / Float(values.count - 1)
let control: [Float] = (0 ... n).map {
    let x = Float($0) / denominator
    return floor(x) + simd_smoothstep(0, 1, simd_fract(x))
}
var result = [Float](repeating: 0, count: Int(n))
vDSP_vlint(values, control, stride, &result, stride, n, vDSP_Length(values.count))
It seems to me that the vDSP_vqint quadratic interpolation functions would solve the problem. See the discussion at https://developer.apple.com/documentation/accelerate/1449942-vdsp_vqint.
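For reference, a minimal, untested sketch of what the vDSP_vqint call could look like with the same input as above. The argument layout mirrors the vDSP_vlint example; the control vector simply holds fractional lookup positions into values:
import Accelerate

let n = vDSP_Length(128)
let stride = vDSP_Stride(1)
let values: [Float] = [0, 0, 100, 25, 0, 0, 0, 25, 0, 0, 0]

// Evenly spaced fractional indices from 0 to values.count - 1.
let control: [Float] = (0 ..< Int(n)).map {
    Float($0) * Float(values.count - 1) / Float(n - 1)
}

var result = [Float](repeating: 0, count: Int(n))
vDSP_vqint(values, control, stride, &result, stride, n, vDSP_Length(values.count))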

How to construct a sobel filter for kernel initialization in input layer for images of size 128x128x3?

This is my code for the Sobel filter:
def init_f(shape, dtype=None):
    sobel_x = tf.constant([[-5, -4, 0, 4, 5], [-8, -10, 0, 10, 8], [-10, -20, 0, 20, 10], [-8, -10, 0, 10, 8], [-5, -4, 0, 4, 5]])
    ker = np.zeros(shape, dtype)
    ker_shape = tf.shape(ker)
    kernel = tf.tile(sobel_x, ker_shape)  # Is this correct?
    return kernel

model.add(Conv2D(filters=30, kernel_size=(5,5), kernel_initializer=init_f, strides=(1,1), activation='relu'))
So far I have managed to do this.
But this gives me an error:
Shape must be rank 2 but is rank 4 for 'conv2d_17/Tile' (op: 'Tile') with input shapes: [5,5], [4].
Tensorflow Version: 2.1.0
You're close, but the args to tile don't appear to be correct. That is why you're getting the error "Shape must be rank 2 but is rank 4 for...". Your sobel_x must be a rank 4 tensor, so you need to add two more dimensions. I used reshape in this example.
from tensorflow import keras
import tensorflow as tf
import numpy

def kernelInitializer(shape, dtype=None):
    print(shape)
    sobel_x = tf.constant(
        [
            [-5, -4, 0, 4, 5],
            [-8, -10, 0, 10, 8],
            [-10, -20, 0, 20, 10],
            [-8, -10, 0, 10, 8],
            [-5, -4, 0, 4, 5]
        ], dtype=dtype)
    #create the missing dims.
    sobel_x = tf.reshape(sobel_x, (5, 5, 1, 1))
    print(tf.shape(sobel_x))
    #tile the last 2 axis to get the expected dims.
    sobel_x = tf.tile(sobel_x, (1, 1, shape[-2], shape[-1]))
    print(tf.shape(sobel_x))
    return sobel_x

x1 = keras.layers.Input((128, 128, 3))
cvl = keras.layers.Conv2D(30, kernel_size=(5,5), kernel_initializer=kernelInitializer, strides=(2,2), activation='relu')

model = keras.Sequential();
model.add(x1)
model.add(cvl)

data = numpy.ones((1, 128, 128, 3))
data[:, 0:64, 0:64, :] = 0

pd = model.predict(data)
print(pd.shape)

d = pd[0, :, :, 0]
for row in d:
    for col in row:
        m = '0'
        if col != 0:
            m = 'X'
        print(m, end="")
    print("")
I looked at using expand_dims instead of reshape, but there didn't appear to be any advantage. broadcast_to seems ideal, but you still have to add the dimensions first, so I don't think it was better than tile.
Why 30 filters with the same kernel, though? Are they going to be changed afterwards?
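As a side note on the expand_dims point above, a minimal equivalent of the reshape step (same assumptions as the initializer in the answer, with sobel_x and shape defined as there) would be:
# Equivalent to tf.reshape(sobel_x, (5, 5, 1, 1)): add two trailing axes.
sobel_x = tf.expand_dims(tf.expand_dims(sobel_x, -1), -1)
sobel_x = tf.tile(sobel_x, (1, 1, shape[-2], shape[-1]))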

How to generate vector with different prob. distributions for each element

I need to generate a vector r of N values from 1-6 (values may repeat) for a given permutation p of N elements. The values are generated with a probability distribution that depends on the i-th value of the permutation.
E.g. I have the permutation p = [2 3 1 4] and the probability distribution matrix (Nx6): Pr = [1, 0, 0, 0, 0, 0; 0, 0.5, 0, 0.5, 0, 0; 0, 0, 0, 1, 0, 0; 0.2, 0.2, 0.2, 0.2, 0.2, 0]
The i-th row represents the probability distribution over values 1-6 for permutation value i (its value, not its position); each row sums to 1.
For example, value 1 can only be assigned value 1, value 2 can be assigned value 2 or 4, etc. So the result can look like r = [2 4 1 2] or r = [4 4 1 5].
Currently I am using this code:
for i = 1:N
    r(i) = randsample(1:6,1,true,Pr(p(i),:));
end
But it is quite slow and I am trying to avoid the for loop, maybe with bsxfun or something similar.
Does anyone have any clue, please? :-)
A solution to your problem is basically available in this answer; everything needed for your case is to replace the vector prob with a matrix and to fix all operations so they work properly on matrices.
Pr = [1, 0, 0, 0, 0, 0; 0, 0.5, 0, 0.5, 0, 0; 0, 0, 0, 1, 0, 0; 0.2, 0.2, 0.2, 0.2, 0.2, 0];
p = [2 3 1 4];
prob = Pr(p,:);
r = rand(size(prob,1),1);
x = sum(bsxfun(@ge,r,cumsum(padarray(prob,[0,1],'pre'),2)),2);
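Two notes of my own, not from the linked answer: padarray ships with the Image Processing Toolbox, so if you don't have it you can prepend the zero column by plain concatenation; and x works out to the sampled value directly, because it counts how many cumulative thresholds each uniform draw has passed. A sketch reusing prob and r from above:
% Same idea without padarray: prepend a zero column by concatenation, then
% count how many cumulative probabilities each uniform draw has reached.
edges = cumsum([zeros(size(prob,1),1), prob], 2);
x = sum(bsxfun(@ge, r, edges), 2);   % x(i) is the sampled value in 1..6
% On R2016b or newer, implicit expansion also works: x = sum(r >= edges, 2);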

Using 3D RANSAC to estimate the 3D affine transform

I am trying to register two volumetric images (img1 and img2). The size of img1 is 40x40x24. The size of img2 is 64x64x11.
So far, I have extracted their features (vol1 and vol2, the same size as the images) and then matched them.
Now, I have a set of corresponding points in the two feature volumes, stored in pairs, a matrix of size 100x6 (every row of pairs is [x, y, z, X, Y, Z], where (x, y, z) are the coordinates of a voxel in vol1 and (X, Y, Z) are the coordinates of the corresponding voxel in vol2).
Now, I am trying to use the RANSAC algorithm to estimate the 3D affine transform, T. I have written the code below, but I think there is a problem with it, because when I take the output transform T and multiply it by sample voxel coordinates from vol1, I get some negative coordinates!
Below is my implementation of the 3D RANSAC algorithm. I based it on the 2D RANSAC implementation in this link. Please let me know if you see any problem.
function [bp] = ransac(data,bpI,iter,num,distThresh)
% data: an n-by-6 dataset with n data points
% num: the minimum number of points needed to fit the model. Here num = 4.
% iter: the number of iterations
% distThresh: the threshold on the distance between a point and the fitted model
% inlierRatio: the threshold on the number of inliers (not used in this version)
% bpI : initialized affine transform model
number = size(data,1); % Total number of points
bestInNum = 0; % Best model so far, i.e. the one with the largest number of inliers
% Initial parameters for best model (affine transform)
% Affine transform : T = [bp1, bp2, bp3, bp4; bp5, bp6, bp7, bp8; bp9, bp10, bp11, bp12;]
bp1 = bpI(1,1); bp2 = bpI(1,2); bp3 = bpI(1,3); bp4 = bpI(1,4);
bp5 = bpI(1,5); bp6 = bpI(1,6); bp7 = bpI(1,7); bp8 = bpI(1,8);
bp9 = bpI(1,9); bp10 = bpI(1,10); bp11 = bpI(1,11); bp12 = bpI(1,12);
for i=1:iter
    % Randomly select 4 points
    idx = randperm(number,num); sample = data(idx,:);
    % Creating others, the data that does not contain the points in sample
    idxs = sort(idx, 'descend'); % Sorting idx
    others = data;
    for n = 1:num
        others(idxs(1,n), :) = [];
    end
    x1 = sample(1,1); y1 = sample(1,2); z1 = sample(1,3);
    x2 = sample(2,1); y2 = sample(2,2); z2 = sample(2,3);
    x3 = sample(3,1); y3 = sample(3,2); z3 = sample(3,3);
    x4 = sample(4,1); y4 = sample(4,2); z4 = sample(4,3);
    X1 = sample(1,4); Y1 = sample(1,5); Z1 = sample(1,6);
    X2 = sample(2,4); Y2 = sample(2,5); Z2 = sample(2,6);
    X3 = sample(3,4); Y3 = sample(3,5); Z3 = sample(3,6);
    X4 = sample(4,4); Y4 = sample(4,5); Z4 = sample(4,6);
    B = [X1; Y1; Z1; X2; Y2; Z2; X3; Y3; Z3; X4; Y4; Z4];
    A = [
        x1, y1, z1, 1, 0 , 0 , 0, 0, 0, 0, 0, 0;
        0 , 0 , 0, 0, x1, y1, z1, 1, 0, 0 ,0, 0;
        0 , 0 , 0, 0, 0 , 0 , 0, 0, x1, y1, z1, 1;
        x2, y2, z1, 1, 0 , 0 , 0, 0, 0, 0, 0, 0;
        0 , 0 , 0, 0, x2, y2, z2, 1, 0 , 0 ,0, 0;
        0 , 0 , 0, 0, 0 , 0 , 0, 0, x2, y2, z2, 1;
        x3, y3, z3, 1, 0 , 0 , 0, 0, 0, 0, 0, 0;
        0 , 0 , 0, 0, x3, y3, z3, 1, 0 , 0 ,0, 0;
        0 , 0 , 0, 0, 0 , 0 , 0, 0, x3, y3, z3, 1;
        x4, y4, z4, 1, 0 , 0 , 0, 0, 0, 0, 0, 0;
        0 , 0 , 0, 0, x4, y4, z4, 1, 0 , 0 ,0, 0;
        0 , 0 , 0, 0, 0 , 0 , 0, 0, x4, y4, z4, 1
        ];
    cbp = A\B; % calculating best parameters of the model (affine transform)
    T = [reshape(cbp',[4,3])'; 0, 0, 0, 1]; % Current affine transform matrix
    % Computing the other points in the dataset whose distance from the
    % calculated model is less than the threshold.
    numOthers = size(others,1);
    inliers = [];
    for j = 1:numOthers
        % b = T a
        d = others(j,:);        % d = [ax, ay, az, bx, by, bz]
        a = [d(1,1:3), 1]';     % a = [ax, ay, az, 1]' (homogeneous)
        b = [d(1,4:6), 1]';     % b = [bx, by, bz, 1]' (homogeneous)
        cb = T*a;               % Calculated b
        dist = sum((cb-b).^2);
        if(dist<=distThresh)
            inliers = [inliers; d];
        end
    end
    numinliers = size(inliers,1);
    % Update the number of inliers and the fitted model if a better model is found
    if (numinliers >= bestInNum)
        % Better model is estimated
        bestInNum = numinliers;
        bp1 = cbp(1,1); bp2 = cbp(2,1); bp3 = cbp(3,1); bp4 = cbp(4,1);
        bp5 = cbp(5,1); bp6 = cbp(6,1); bp7 = cbp(7,1); bp8 = cbp(8,1);
        bp9 = cbp(9,1); bp10 = cbp(10,1); bp11 = cbp(11,1); bp12 = cbp(12,1);
        bp = [bp1, bp2, bp3, bp4, bp5, bp6, bp7, bp8, bp9, bp10, bp11, bp12];
    end
end
bp
end
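One thing that stands out when reading the code above (an observation, not a verified fix): in the A matrix, the row built from the second sample point's X equation uses z1 where z2 appears to be intended, so every least-squares solve is fed a corrupted constraint. The corrected row of A would read:
x2, y2, z2, 1, 0, 0, 0, 0, 0, 0, 0, 0;
Separately, negative coordinates in T*a are not by themselves evidence of a wrong transform; an affine map between two volumes of different sizes can legitimately send voxels to negative positions. A more informative check is to compare T*[x; y; z; 1] against [X; Y; Z; 1] for the matched pairs and look at the residuals.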