How do I find the number of levels in an octree if I have N nodes?

If the octree level is 0, then I have 8 nodes. Now, if the octree level is 1, then I have 72 nodes. But if I have (for example) 500 nodes, how do I calculate what the level would be?

To calculate the maximum total number of nodes in an octree of height h, you sum the nodes on each level:
8**1 + 8**2 + 8**3 + ... + 8**h
So at level 2, that's 8 + 64 = 72.
This geometric series can be collapsed into the closed form:
(8 ** (h + 1) - 1)/7 - 1
As a check, at h = 2: (8**3 - 1)/7 - 1 = 511/7 - 1 = 73 - 1 = 72.
In JavaScript you might write:
function maxNodes(h){
  return (8 ** (h + 1) - 1) / 7 - 1
}

for (let i = 1; i < 4; i++){
  console.log(maxNodes(i))
}
To find the inverse you will need log base 8 and some algebra. Since the first node on level h sits at cumulative index n = (8**h - 1)/7, we have 7n + 1 = 8**h exactly at the start of each level, so you arrive at the formula:
floor(log_8(7 * n + 1))
In some languages, like Python, you can compute math.floor(math.log(7*n + 1, 8)) directly, but JavaScript doesn't have logarithms to arbitrary bases, so you need to rely on the identity:
log_b(n) == log(n)/log(b)
and calculate something like:
function height(n){
  return Math.floor(Math.log(7 * n + 1) / Math.log(8))
}
console.log(height(8))
console.log(height(72)) // last on level 2
console.log(height(73)) // first on level 3
console.log(height(584)) // last on level 3
console.log(height(585)) // first on level 4
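For comparison, here are the same two formulas as a minimal Swift sketch (the function names mirror the JavaScript above; nothing here is library API, just the arithmetic):

import Foundation

// Maximum total number of nodes in an octree of height h: (8^(h+1) - 1)/7 - 1
func maxNodes(_ h: Int) -> Int {
    return (Int(pow(8.0, Double(h + 1))) - 1) / 7 - 1
}

// Inverse: the level the n-th node falls on, floor(log8(7n + 1)),
// via the change-of-base identity since there is no log-base-8 function.
// (Beware floating-point edge cases when 7n + 1 is an exact power of 8.)
func height(_ n: Int) -> Int {
    return Int(floor(log(Double(7 * n + 1)) / log(8.0)))
}

print(maxNodes(2)) // 72
print(height(72))  // 2, last on level 2
print(height(73))  // 3, first on level 3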

Related

Find the number at the nth position in the infinite sequence

Given an infinite sequence s = 1234567891011...
let's find the digit at the nth position (n <= 10^18).
EX: n = 12 => 1; n = 15 => 2
import Foundation

func findNumber(n: Int) -> Character {
    var i = 1
    var z = ""
    while i < n + 1 {
        z.append(String(i))
        i += 1
    }
    print(z)
    return z[z.index(z.startIndex, offsetBy: n - 1)]
}

print(findNumber(n: 12))
That's my code, but when I look up the digit at the 100,000th position it returns an error; I think I appended too many values of i to the string z.
Can anyone help me, in Swift?
The problem here looks fairly straightforward: take the list of all numbers from 1 to infinity, concatenate them into a string, then find the nth digit. A straightforward problem to understand. The issue, though, is that we have neither an infinite amount of memory nor of time, so we cannot reasonably do this in a computer program by just appending the numbers onto a string and then finding the nth digit. We must find an alternative way around it.
The first thing we can say is that we know what the entire list is. It will always be the same. So can we use any properties of this list to help us?
Let's call the input number n. This is the position of the digit that we want to find. Let's call the output digit d.
Well, first off, let's look at some examples.
We know all the single digit numbers are just in the same position as the number itself.
So, for n<10 ... d = n
What about for two digit numbers?
Well, we know that 10 starts at position 10. (Because there are 9 single digit numbers before it). 9 + 1 = 10
11 starts at position 12. Again, 9 single digits + one 2 digit number before it. 9 + 2 + 1 = 12
So how about, say... 25? Well that has 9 single digit numbers and 15 two digit numbers before it. So 25 starts at 9*1 + 15*2 + 1 = 40 (+ 1 as the sum gets us to the end of 24 not the start of 25).
So... 99 starts at? 9*1 + 89*2 + 1 = 188.
The we do the same for the three digit numbers...
100... 9*1 + 90*2 + 1 = 190
300... 9*1 + 90*2 + 200*3 + 1 = 790 (there are 200 three-digit numbers before 300: 100 through 299)
1000...? 9*1 + 90*2 + 900*3 + 1 = 2890
OK... so now I'm seeing a pattern here that seems to need to know the number of digits in each number. Well... I can get the number of digits in a number by rounding up the log (base 10) of that number (with one caveat: for exact powers of 10 this undercounts by one, so floor(log10(n)) + 1 is the safer form).
rounding up log base 10 of 5 = 1
rounding up log base 10 of 23 = 2
rounding up log base 10 of 99 = 2
rounding up log base 10 of 627 = 3
OK... so I think I need something like...
// in pseudo code
let lengthOfNumber = getLengthOfNumber(n)
var result = 0
for each i from 0 to lengthOfNumber - 2 {
    result += 9 * 10^i * (i + 1) // this gives 9*1 + 90*2 + 900*3 + ... for all shorter numbers
}
let remainder = n - 10^(lengthOfNumber - 1) // same-length numbers below n, not counted in the loop above
result += remainder * lengthOfNumber
result += 1 // + 1 to land on the first digit of n, as in the examples above
So, in the above pseudocode you can give it any number and it will return the position in the list at which that number starts.
This isn't exactly the problem you are trying to solve, and I don't want to solve it for you.
This is just a leg up on how I would go about solving it. Hopefully it will give you some guidance on how to take this further and solve the problem you are trying to solve.
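To make the pseudocode concrete, here it is as a minimal Swift sketch. It computes only the start position of a number in the sequence, not the final digit lookup, and startPosition(of:) is a name made up for illustration:

func startPosition(of number: Int) -> Int {
    let length = String(number).count  // digits in `number`
    var result = 0
    var power = 1                      // 10^i
    // digits contributed by all shorter numbers: 9*1 + 90*2 + 900*3 + ...
    for i in 0..<(length - 1) {
        result += 9 * power * (i + 1)
        power *= 10
    }
    // digits contributed by same-length numbers below `number`
    result += (number - power) * length
    return result + 1                  // +1 to land on the first digit of `number`
}

print(startPosition(of: 25))   // 40
print(startPosition(of: 300))  // 790
print(startPosition(of: 1000)) // 2890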

Maxima. How to prevent evaluation of numeric powers

Is it possible to prevent the evaluation of numeric powers in an expression? Perhaps by pre-processing the expression, by adding tellsimp rules, or in some other way?
For example, so that
distrib (10 ^ 10 * (x + 1));
which produces:
10000000000 x + 10000000000
would instead return:
10 ^ 10 * x + 10 ^ 10
And similarly, so that
factor (10 ^ 10 * x + 10 ^ 10);
would return:
10 ^ 10 * (x + 1);
Just as
factor(200);
2^3*5^2
represents numbers as powers, only permanently?
Interesting question, although I don't see a good solution. Here's something I tried as an experiment, which is to display integers in factored form. I am working with Maxima 5.44.0 + SBCL.
(%i1) :lisp (defun integer-formatter (x) ($factor x))
INTEGER-FORMATTER
(%i1) :lisp (setf (get 'integer 'formatter) 'integer-formatter)
INTEGER-FORMATTER
(%i1) (x + 1000)^3;
(%o1) (x + 2^3 5^3)^3
(%i2) 10^10*(x + 1);
(%o2) (2^(2 5) 5^(2 5)) (x + 1)
This is only a modification of the display; the internal representation is just a single integer.
(%i3) :lisp $%
((MTIMES SIMP) 10000000000 ((MPLUS SIMP) 1 $X))
That seems kind of clumsy, since e.g. 2^(2*5)*5^(2*5) isn't really more comprehensible than 10000000000.
A separate question is whether the arithmetic on 10^10 could be suppressed, so it actually stays as 10^10 and isn't represented internally as 10000000000. I'm pretty sure that would be difficult. Unfortunately, Maxima is not very good at retracting identities once they have been applied, particularly the built-in identities that are applied to perform arithmetic and other operations.

Why can't I divide integers correctly within reduce in Swift?

I'm trying to get the average of an array of Ints using the following code:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 / numbers.count }
print(avg) // 1
Which is obviously incorrect. However, if I move the division to the outside of the closure:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 } / numbers.count
print(avg) // 3
Bingo! I think I remember reading somewhere (I can't recall if it was in relation to Swift, JavaScript or programming math in general) that this has something to do with the fact that dividing the sum by the length yields a float/double, e.g. (1 + 2) / 5 = 0.6, which is rounded down to 0 within the sum. But then I would expect ((1 + 2) + 3) / 5 = 1.2 to return 1, yet it too seems to return 0.
With doubles, the calculation works as expected whichever way it's done, as long as I convert the count integer to a Double:
let numbers = [1.0,2.0,3.0,4.0,5.0]
let avg = numbers.reduce(0) { return $0 + $1 / Double(numbers.count) }
print(avg) // 3
I think I understand the why (maybe not?). But I can't come up with a solid example to prove it.
Any help and / or explanation is very much appreciated. Thanks.
The division does not yield a double; you're doing integer division.
You're not getting ((1 + 2) + 3 etc.) / 5.
In the first case, you're getting (((((0 + (1/5 = 0)) + (2/5 = 0)) + (3/5 = 0)) + (4/5 = 0)) + (5/5 = 1)) = 0 + 0 + 0 + 0 + 0 + 1 = 1.
In the second case, you're getting ((((((0 + 1) + 2) + 3) + 4) + 5) / 5) = 15 / 5 = 3.
In the third case, the floating-point precision loss is much smaller than the integer truncation, and you get something like (((((0 + (1/5.0 = 0.2)) + (2/5.0 = 0.4)) + (3/5.0 = 0.6)) + (4/5.0 = 0.8)) + (5/5.0 = 1.0)) = 3.0.
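You can watch the integer truncation happen by printing each step of the reduction (a small sketch that just echoes what the first snippet does):

let numbers = [1, 2, 3, 4, 5]
let avg = numbers.reduce(0) { acc, x in
    let step = acc + x / numbers.count // integer division happens per element
    print("\(acc) + \(x)/\(numbers.count) = \(step)")
    return step
}
print(avg) // 1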
The problem is that what you are attempting with the first piece of code does not make sense mathematically.
The average of a sequence is the sum of the entire sequence divided by the number of elements.
reduce calls the lambda function for every member of the collection it is being called on. Thus you are summing and dividing all the way through.
For people finding it hard to understand the original answer, consider:
let x = 4
let y = 3
let answer = x/y
You might expect answer to be a Double, but no, it is an Int (the value 1, truncated). To get an answer that is not a rounded-down Int, you must explicitly convert the values to Double. See below:
let doubleAnswer = Double(x)/Double(y)
Hope this helped.
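Putting the answers together, here are two correct ways to compute the average of the original array (a minimal sketch):

let numbers = [1, 2, 3, 4, 5]

// Integer average (truncated): sum first, divide once at the end
let intAvg = numbers.reduce(0, +) / numbers.count // 3

// Exact average: convert to Double before dividing
let exactAvg = Double(numbers.reduce(0, +)) / Double(numbers.count) // 3.0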

Calculations with Real Numbers, Verilog HDL

I noticed that Verilog rounds my real number results into integer results. For example, when I look at the simulator, it shows the result of 17/2 as 9. What should I do? Is there any way to define something like: output real reg [11:0] output_value? Or is it something that has to be done through simulator settings?
Simulation only (no synthesis). Example:
x defined as a signed input and output_value defined as output reg.
output_value = ((x >>> 1) + x) + 5;
If x = +1 then output_value should be 13/2 = 6.5.
However when I simulate I see output_value = 6.
Code would help, but I suspect you're not dividing reals at all. 17 and 2 are integers, and so a simple statement like that will do integer division.
17 / 2 = 8 (not 9, always rounds towards 0)
17.0 / 2.0 = 8.5
In your second case
output_value = ((x >>> 1) + x) + 5
If x is 1, x >>> 1 is 0, not 0.5 because you've just gone off the bottom of the word.
output_value = ((1 >>> 1) + 1) + 5 = 0 + 1 + 5 = 6
There's nothing special about Verilog here. This is true for the majority of languages.
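As the answer notes, this behavior is not specific to Verilog; for instance, the same thing happens in Swift:

// Integer division truncates toward zero
print(17 / 2)     // 8
print(17.0 / 2.0) // 8.5

// A right shift drops the low bit, so odd values lose their 0.5
let x = 1
print((x >> 1) + x + 5) // (0) + 1 + 5 = 6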

Crystal Reports Cross-tab with mix of Sum, Percentages and Computed values

Being new to Crystal, I am unable to figure out how to compute rows 3 and 4 below.
Rows 1 and 2 are simple percentages of the sum of the data.
Row 3 is a computed value (see below.)
Row 4 is a sum of the data points (NOT a percentage as in row 1 and row 2)
Can someone give me some pointers on how to generate the display below?
My data:
2010/01/01 A 10
2010/01/01 B 20
2010/01/01 C 30
2010/02/01 A 40
2010/02/01 B 50
2010/02/01 C 60
2010/03/01 A 70
2010/03/01 B 80
2010/03/01 C 90
I want to display
2010/01/01 2010/02/01 2010/03/01
========== ========== ==========
[ B/(A + B + C) ] 20/60 50/150 80/240 <=== percentage of sum
[ C/(A + B + C) ] 30/60 60/150 90/240 <=== percentage of sum
[ 1 - A/(A + B + C) ] 1 - 10/60 1 - 40/150 1 - 70/240 <=== computed
[ (A + B + C) ] 60 150 240 <=== sum
Assuming you are using a SQL data source, I suggest deriving each of the output rows' values (i.e. [B/(A + B + C)], [C/(A + B + C)], [1 - A/(A + B + C)] and [(A + B + C)]) per date in the SQL query, then using Crystal's crosstab feature to pivot them into the desired output format.
Crystal's crosstabs aren't particularly suited to deriving different calculations on different rows of output.