What can I use Transform.lossyScale for in Unity, for example in this script? - unity3d

private float UnitPerPixel;
UnitPerPixel = PrefabsWallTile.transform.lossyScale.x;
float HalfUnitPerPixel = UnitPerPixel / 2f;

Have you tried reading the API?
-> it is the absolute scale after all parent object scaling has been applied, in contrast to localScale, which is only the scale in the parent's space.
For instance, let's say you have a hierarchy with local scales (the ones displayed and configured in the Inspector) such as
A (2, 2, 2)
|--B (5, 5, 5)
   |--C (3, 3, 3)
then object C, with a localScale of (3, 3, 3), will have a lossyScale of (30, 30, 30), which is the result of
2 * 5 * 3, 2 * 5 * 3, 2 * 5 * 3
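As a rough illustration of that multiplication (a plain-Python sketch of the math, not Unity API; the helper name is made up for the example), the lossy scale is just the component-wise product of every localScale from the root down to the object:
def lossy_scale(local_scales):
    # local_scales: localScale tuples ordered from the root down to the object
    result = (1.0, 1.0, 1.0)
    for sx, sy, sz in local_scales:
        result = (result[0] * sx, result[1] * sy, result[2] * sz)
    return result

# A (2, 2, 2) -> B (5, 5, 5) -> C (3, 3, 3)
print(lossy_scale([(2, 2, 2), (5, 5, 5), (3, 3, 3)]))  # (30.0, 30.0, 30.0)
Keep in mind that Unity calls it "lossy" because, roughly speaking, once rotated non-uniform scaling appears somewhere in the hierarchy, a single Vector3 can only approximate the true world-space scale.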

Related

Part Size seems to be ignored

I have a part, created with
local p = Instance.new("Part")
p.Size = Vector3.new(2, 2, 2)
That part uses a mesh like
local m = Instance.new("SpecialMesh", p)
m.MeshType = Enum.MeshType.FileMesh
m.MeshId = "rbxassetid://7974596857"
which is a cube with rounded corners that I created in Blender.
When I put those beside each other, it seems like the Size property actually is ignored.
Why?
Size 2:
p1.Position = Vector3.new(0, 0, 0)
p1.Size = Vector3.new(2, 2, 2)
p2.Position = Vector3.new(5, 5, 0)
p2.Size = Vector3.new(2, 2, 2)
Size 5:
p1.Position = Vector3.new(0, 0, 0)
p1.Size = Vector3.new(5, 5, 5)
p2.Position = Vector3.new(5, 5, 0)
p2.Size = Vector3.new(5, 5, 5)
That's because special meshes have their own scaling property (SpecialMesh.Scale); for a file mesh it is that Scale, not the part's Size, that controls the rendered size. If possible, use a MeshPart instead.

Proper use of withMemoryRebound

I am using the following code:
audioBuff.audioBuffer.floatChannelData![0].withMemoryRebound(to: DSPComplex.self, capacity: bufferSizePOT / 2) { dspComplexStream in
    vDSP_ctoz(dspComplexStream, 2, &output, 1, UInt(bufferSizePOT / 2))
}
I'd like to jump ahead to some later samples by doing this:
audioBuff.audioBuffer.floatChannelData![1024].withMemoryRebound(to: DSPComplex.self, capacity: bufferSizePOT / 2) { dspComplexStream in
    vDSP_ctoz(dspComplexStream, 2, &output, 1, UInt(bufferSizePOT / 2))
}
When doing so, I get an EXC_BAD_ACCESS (code=1, address=0x0).
Could someone explain how to use it properly?
I used .withMemoryRebound because I initially tried:
vDSP_ctoz(audioBuff.audioBuffer.floatChannelData!, 2, &output, 1, UInt(bufferSizePOT / 2))
which gave me the error:
Cannot convert value of type 'UnsafePointer<UnsafeMutablePointer<Float>>' to expected argument type 'UnsafePointer<DSPComplex>'
What I would like to do is move through audioBuff.audioBuffer.floatChannelData! in chunks in order to do FFTs.
audioBuffer.floatChannelData![0] represents a pointer to the samples of channel #0.
You can access the samples of channel #1 with audioBuffer.floatChannelData![1] when the buffer is non-interleaved stereo.
But I don't believe any of Apple's sound hardware supports a channel #1024.
You may need to write something like this when you want to start from the 1024th sample:
audioBuffer.floatChannelData![0]
    .advanced(by: 1024)
    .withMemoryRebound(to: DSPComplex.self, capacity: bufferSizePOT / 2) { dspComplexStream in
        vDSP_ctoz(dspComplexStream, 2, &output, 1, UInt(bufferSizePOT / 2))
    }

Querying polygons that contain 4 points

I have 4 points that I always get, and I would like to query whether the polygon defined by a multipoint contains those 4 points. I'm using PostGIS and Postgres.
I'm also using OGR/GDAL for that purpose. Could someone provide the SQL query for this?
This checks if the points (1 1), (2 2), (3 3), and (4 4) all lie inside the polygon defined by (0 0), (10 0), (10 10), (0 10) and (0 0):
SELECT st_contains(
    st_polygon(
        st_linefrommultipoint(
            st_mpointfromtext(
                'MULTIPOINT(0 0, 10 0, 10 10, 0 10, 0 0)'
            )
        ),
        0
    ),
    st_mpointfromtext(
        'MULTIPOINT(1 1, 2 2, 3 3, 4 4)'
    )
);
So to find all multipoints that satisfy the criterion, you could use something like this:
SELECT id
FROM multipoints
WHERE st_contains(
    st_polygon(
        st_addpoint(
            st_linefrommultipoint(multipoints.geom),
            st_startpoint(st_linefrommultipoint(multipoints.geom)),
            -1
        ),
        st_srid(multipoints.geom)
    ),
    st_mpointfromtext(
        'MULTIPOINT(1 1, 2 2, 3 3, 4 4)',
        8307
    )
);
This assumes that the multipoints don't already form a closed ring (i.e., the first point equal to the last); that's why the start point is appended again with st_addpoint.
I used SRID 8307 in my example, replace it with the one you need.
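If you just want to sanity-check the containment logic outside the database, here is a minimal Python sketch using the shapely package (an extra dependency, not part of PostGIS or GDAL) that performs the same test as the first query:
from shapely.geometry import Point, Polygon

# Same ring and test points as in the first query above
ring = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
points = [Point(1, 1), Point(2, 2), Point(3, 3), Point(4, 4)]
print(all(ring.contains(p) for p in points))  # True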

Target value shape in Lasagne

I am trying to train a Siamese Lasagne model in batches of 100.
The inputs are X1 (100x3x100x100), X2 (same size) and Y (100x1), and my last layer is a Dense layer with a single output unit, as I am expecting a target value of 0 or 1. However, it is throwing an error about a dimension mismatch. Below are the code excerpts:
input1 = lasagne.layers.InputLayer(shape=(None, 3, 100, 100), input_var=None)
conv1_a = lasagne.layers.Conv2DLayer(input1,
                                     num_filters=24,
                                     filter_size=(7, 7),
                                     nonlinearity=lasagne.nonlinearities.rectify)
pool1_a = lasagne.layers.MaxPool2DLayer(conv1_a, pool_size=(3, 3), stride=2)
Layer 2 is the same as above.
Output Layer:
dense_b = lasagne.layers.DenseLayer(dense_a,
                                    num_units=128,
                                    nonlinearity=lasagne.nonlinearities.rectify)
dense_c = lasagne.layers.DenseLayer(dense_b,
                                    num_units=1,
                                    nonlinearity=lasagne.nonlinearities.softmax)
net_output = lasagne.layers.get_output(dense_c)
true_output = T.ivector('true_output')
The training code is below:
loss_value = train(X1_train,X2_train,Y_train.astype(np.int32))
print loss_value
ValueError: Input dimension mis-match. (input[0].shape[1] = 100, input[1].shape[1] = 1)
Apply node that caused the error: Elemwise{Composite{((i0 * i1) + (i2 * log1p((-i3))))}}(InplaceDimShuffle{x,0}.0, LogSoftmax.0, Elemwise{sub,no_inplace}.0, SoftmaxWithBias.0)
Toposort index: 113
Inputs types: [TensorType(int32, row), TensorType(float32, matrix), TensorType(float64, row), TensorType(float32, matrix)]
Inputs shapes: [(1, 100), (100, 1), (1, 100), (100, 1)]
Inputs strides: [(400, 4), (4, 4), (800, 8), (4, 4)]
Inputs values: ['not shown', 'not shown', 'not shown', 'not shown']
Outputs clients: [[Sum{acc_dtype=float64}(Elemwise{Composite{((i0 * i1) + (i2 * log1p((-i3))))}}.0)]]
Try using draw_net.py as follows:
import draw_net

dot = draw_net.get_pydot_graph(lasagne.layers.get_all_layers(your_last_layer),
                               verbose=True)
dot.write("test.pdf", format="pdf")
to dump the Lasagne graph in PDF format (requires Graphviz to be installed).
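If you only need the layer output shapes rather than a rendered graph, a small sketch like the following (assuming dense_c is your final layer, as in the excerpts above) prints them directly, which makes it easier to spot where the network output stops matching the shape of the target vector:
import lasagne

# Walk the network from the input layer to dense_c and print each layer's output shape
for layer in lasagne.layers.get_all_layers(dense_c):
    print("{}: {}".format(type(layer).__name__, lasagne.layers.get_output_shape(layer)))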

How to check if a number can be represented as a sum of some given numbers

I've got a list of some integers, e.g. [1, 2, 3, 4, 5, 10]
And I have another integer, N. For example, N = 19.
I want to check if my integer can be represented as a sum of any number of the values in my list:
19 = 10 + 5 + 4
or
19 = 10 + 4 + 3 + 2
Every number from the list can be used only once. N can be as large as 2 thousand or more, and the list can contain up to 200 integers.
Is there a good way to solve this problem?
Four and a half years later, this question has been answered by Jonathan.
I want to post two implementations (brute force and Jonathan's) in Python along with a performance comparison.
def check_sum_bruteforce(numbers, n):
    # This bruteforce approach can be improved (for some cases) by
    # returning True as soon as the needed sum is found
    sums = []
    for number in numbers:
        for sum_ in sums[:]:
            sums.append(sum_ + number)
        sums.append(number)
    return n in sums


def check_sum_optimized(numbers, n):
    sums1, sums2 = [], []
    numbers1 = numbers[:len(numbers) // 2]
    numbers2 = numbers[len(numbers) // 2:]
    for sums, numbers_ in ((sums1, numbers1), (sums2, numbers2)):
        for number in numbers_:
            for sum_ in sums[:]:
                sums.append(sum_ + number)
            sums.append(number)
    # n may already be reachable using numbers from only one half
    if n in sums1 or n in sums2:
        return True
    for sum_ in sums1:
        if n - sum_ in sums2:
            return True
    return False


assert check_sum_bruteforce([1, 2, 3, 4, 5, 10], 19)
assert check_sum_optimized([1, 2, 3, 4, 5, 10], 19)

import timeit

print(
    "Bruteforce approach (10000 times):",
    timeit.timeit(
        'check_sum_bruteforce([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
        number=10000,
        globals=globals()
    )
)
print(
    "Optimized approach by Jonathan (10000 times):",
    timeit.timeit(
        'check_sum_optimized([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 200)',
        number=10000,
        globals=globals()
    )
)
Output (the float numbers are seconds):
Bruteforce approach (10000 times): 1.830944365834205
Optimized approach by Jonathan (10000 times): 0.34162875449254027
The brute force approach requires generating 2^(array_size) - 1 subsets to be summed and compared against the target N.
The run time can be dramatically improved by simply splitting the problem in two. Store, in sets, all of the possible sums for one half of the array and for the other half separately. Whether the target is reachable can then be determined by checking, for every sum n in one set, whether the complement N - n exists in the other set.
This optimization brings the number of generated sums down to approximately 2^(array_size/2) - 1 + 2^(array_size/2) - 1 = 2^(array_size/2 + 1) - 2,
i.e., the exponent is halved.
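To put rough numbers on that for the upper bound mentioned in the question (a list of 200 integers), here is a quick back-of-the-envelope check in Python (illustrative counts only, ignoring duplicate sums):
full = 2**200 - 1         # brute force: every non-empty subset of 200 numbers
split = 2 * (2**100 - 1)  # meet-in-the-middle: two halves of 100 numbers each
print("{:.3e}".format(full))   # ~1.607e+60
print("{:.3e}".format(split))  # ~2.535e+30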
Here is a C++ implementation using this idea.
#include <bits/stdc++.h>
using namespace std;

bool sum_search(vector<int> myarray, int N) {
    // values for splitting the array in two
    int right = myarray.size() - 1, middle = (myarray.size() - 1) / 2;
    set<int> all_possible_sums1, all_possible_sums2;
    // iterate over the first half of the array
    for (int i = 0; i < middle; i++) {
        // buffer set that will hold new possible sums
        set<int> buffer_set;
        // every value currently in the set is used to make new possible sums
        for (set<int>::iterator set_iterator = all_possible_sums1.begin(); set_iterator != all_possible_sums1.end(); set_iterator++)
            buffer_set.insert(myarray[i] + *set_iterator);
        all_possible_sums1.insert(myarray[i]);
        // transfer buffer into the main set
        for (set<int>::iterator set_iterator = buffer_set.begin(); set_iterator != buffer_set.end(); set_iterator++)
            all_possible_sums1.insert(*set_iterator);
    }
    // iterate over the second half of the array
    for (int i = middle; i < right + 1; i++) {
        set<int> buffer_set;
        for (set<int>::iterator set_iterator = all_possible_sums2.begin(); set_iterator != all_possible_sums2.end(); set_iterator++)
            buffer_set.insert(myarray[i] + *set_iterator);
        all_possible_sums2.insert(myarray[i]);
        for (set<int>::iterator set_iterator = buffer_set.begin(); set_iterator != buffer_set.end(); set_iterator++)
            all_possible_sums2.insert(*set_iterator);
    }
    // N may already be reachable using elements from only one half
    if (all_possible_sums1.count(N) || all_possible_sums2.count(N))
        return true;
    // for every element in the first set, check if the second set has the complement to make N
    for (set<int>::iterator set_iterator = all_possible_sums1.begin(); set_iterator != all_possible_sums1.end(); set_iterator++)
        if (all_possible_sums2.find(N - *set_iterator) != all_possible_sums2.end())
            return true;
    return false;
}
Ugly and brute-force approach (Ruby):
a = [1, 2, 3, 4, 5, 10]
b = []
# try every combination size, including the one that uses all of the elements
(1..a.size).each do |c|
  b << a.combination(c).select { |d| d.reduce(&:+) == 19 }
end
puts b.flatten(1).inspect