I'm running the code shown below. Can somebody tell me why the length of the caches list is 4?
I would expect a length of 3 (because of the 3 arrays), but I don't know where the 4 is coming from.
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
caches.append(linear_cache)
print("Z = " + str(Z))
print("cache =" + str(linear_cache))
print("Length of caches list = " + str(len(caches)))
Outcome
Z = [[ 3.26295337 -1.23429987]]
cache =(array([[ 1.62434536, -0.61175641],
[-0.52817175, -1.07296862],
[ 0.86540763, -2.3015387 ]]), array([[ 1.74481176, -0.7612069 , 0.3190391 ]]), array([[-0.24937038]]))
Length of caches list = 4
I'm looking to create a function that returns a solve-for-x math equation that can be performed in one's head (clearly that's a bit subjective, but I'm not sure how else to phrase it).
Example problem: (x - 15)/10 = 6
Note: Only 1 x in the equation
I want to use the operations +, -, *, /, and sqrt (applied only to x, i.e. sqrt(x))
I know that let mathExpression = NSExpression(format: question) converts strings into math expressions, but I'm not sure how to go about solving for x.
I previously asked Generating random doable math problems swift for problems that don't involve solving for x, but I'm not sure how to convert that answer into solving for x.
Edit: Goal is to generate an equation and have the user solve for the variable.
Since all you want is a string representing an equation and a value for x, you don't need to do any solving. Just start with x and transform it until you have a nice equation. Here's a sample (copy and paste it into a Playground to try it out):
import UIKit

enum Operation: String {
    case addition = "+"
    case subtraction = "-"
    case multiplication = "*"
    case division = "/"

    static func all() -> [Operation] {
        return [.addition, .subtraction, .multiplication, .division]
    }

    static func random() -> Operation {
        let all = Operation.all()
        let selection = Int(arc4random_uniform(UInt32(all.count)))
        return all[selection]
    }
}

func addNewTerm(formula: String, result: Int) -> (formula: String, result: Int) {
    // choose a random number and operation
    let operation = Operation.random()
    let number = chooseRandomNumberFor(operation: operation, on: result)

    // apply to the left side
    let newFormula = applyTermTo(formula: formula, number: number, operation: operation)

    // apply to the right side
    let newResult = applyTermTo(result: result, number: number, operation: operation)

    return (newFormula, newResult)
}

func applyTermTo(formula: String, number: Int, operation: Operation) -> String {
    return "\(formula) \(operation.rawValue) \(number)"
}

func applyTermTo(result: Int, number: Int, operation: Operation) -> Int {
    switch operation {
    case .addition: return result + number
    case .subtraction: return result - number
    case .multiplication: return result * number
    case .division: return result / number
    }
}

func chooseRandomNumberFor(operation: Operation, on number: Int) -> Int {
    switch operation {
    case .addition, .subtraction, .multiplication:
        return Int(arc4random_uniform(10) + 1)
    case .division:
        // add code here to find integer factors
        return 1
    }
}

func generateFormula(_ numTerms: Int = 1) -> (String, Int) {
    let x = Int(arc4random_uniform(10))
    var leftSide = "x"
    var result = x

    for i in 1...numTerms {
        (leftSide, result) = addNewTerm(formula: leftSide, result: result)
        if i < numTerms {
            leftSide = "(" + leftSide + ")"
        }
    }

    let formula = "\(leftSide) = \(result)"
    return (formula, x)
}

func printFormula(_ numTerms: Int = 1) {
    let (formula, x) = generateFormula(numTerms)
    print(formula, " x = ", x)
}

for _ in 1...30 {
    printFormula(Int(arc4random_uniform(3)) + 1)
}
There are some things missing. The sqrt() function will have to be implemented separately. And for division to be useful, you'll have to add in a system to find factors (since you presumably want the results to be integers). Depending on what sort of output you want, there's a lot more work to do, but this should get you started.
Here's sample output:
(x + 10) - 5 = 11 x = 6
((x + 6) + 6) - 1 = 20 x = 9
x - 2 = 5 x = 7
((x + 3) * 5) - 6 = 39 x = 6
(x / 1) + 6 = 11 x = 5
(x * 6) * 3 = 54 x = 3
x * 9 = 54 x = 6
((x / 1) - 6) + 8 = 11 x = 9
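For the division case specifically, one way to fill in the "find integer factors" placeholder mentioned above is to pick a random small divisor of the current result, so the running value stays an integer. This is just a sketch of mine, not part of the original answer; the randomDivisor name, the 2...10 candidate range, and the fallback to 1 are assumptions:

// Possible body for the .division branch of chooseRandomNumberFor(operation:on:):
// pick a random divisor of the current result so that result / number stays integral.
func randomDivisor(of number: Int) -> Int {
    let candidates = (2...10).filter { number != 0 && number % $0 == 0 }
    guard !candidates.isEmpty else { return 1 }   // nothing suitable: fall back to dividing by 1
    let index = Int(arc4random_uniform(UInt32(candidates.count)))
    return candidates[index]
}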
Okay, let's assume from you saying "Note: Only 1 x in the equation" that what you want is a linear equation of the form 0 = β1*x + β0, where β1 and β0 are the slope and intercept coefficients, respectively.
The inverse of (or solution to) any linear equation is given by x = -β0/β1. So what you really need to do is generate random integers β0 and β1 to create your equation. But since it should be “solvable” in someone’s head, you probably want β0 to be divisible by β1, and furthermore, for β1 and β0/β1 to be less than or equal to 12, since this is the upper limit of the commonly known multiplication tables. In this case, just generate a random integer β1 ≤ 12, and β0 equal to β1 times some integer n, 0 ≤ n ≤ 12.
If you want to allow simple fractional solutions like 2/3, just multiply the denominator and the numerator into β0 and β1, respectively, taking care to prevent the numerator or denominator from getting too large (12 is again a good limit).
Since you probably want the right-hand side to be non-zero, just generate a third random integer y between -12 and 12, and change your output equation to y = β1*x + (β0 + y), which still has the solution x = -β0/β1.
Since you mentioned √ could occur over the x variable only, that is pretty easy to add; the solution (to 0 = β1*sqrt(x) + β0) is just x = (β0/β1)**2.
Here is some very simple (and very problematic) code for generating random integers to get you started:
import func Glibc.srand
import func Glibc.rand
import func Glibc.time
srand(UInt32(time(nil)))
print(rand() % 12)
There are a great many answers on this website that deal with better ways to generate random integers.
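To make the recipe above concrete, here is a rough sketch of my own (not part of the original answer) that generates one such equation with an integer solution; it uses Int.random(in:) instead of Glibc's rand, and the makeMentalEquation name and ranges are assumptions:

// Hypothetical helper following the recipe above: a small slope β1, an intended
// integer solution n in 0...12, and a right-hand side y, with the constant chosen
// so that β1*n + constant == y.
func makeMentalEquation() -> (equation: String, solution: Int) {
    let beta1 = Int.random(in: 1...12)    // slope
    let n = Int.random(in: 0...12)        // intended solution for x
    let y = Int.random(in: -12...12)      // right-hand side value
    let constant = y - beta1 * n
    return ("\(beta1)*x + \(constant) = \(y)", n)
}

let (equation, solution) = makeMentalEquation()
print(equation, " x =", solution)         // e.g. 7*x + -3 = 25  x = 4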
I know how to convert first and second term to the first term of the simplified expression, but I don't know how to convert the rest.
By simplifying, I can get rid of A_Bar in the third term and A in the fifth term and get =B*C_bar
How is it that B*C_bar + the fourth term becomes XOR(B,C)?
The two expressions are clearly the same. This can be easily proven by truth tables.
The first one is: [truth table image]
And the second one: [truth table image]
However, this does not fully answer your question.
B*C_bar + the fourth term becomes XOR(B,C)
This is clearly true if A is true, since per definitionem, B XOR C = (B_bar AND C) OR (B AND C_bar).
If A is false, these terms are always false and you cannot simplify these two to B XOR C! They are not equal!
Note: Tables generated with http://web.stanford.edu/class/cs103/tools/truth-table-tool/
Note 2: ∧ = AND, ∨ = OR, ¬ = NOT
Let's play a game.
Let a = not(A), b = not(B), c = not(C), and let * denote XOR.
Y = ab + (B*C)
Y = ab + Bc + bC
Y = ab(1) + Bc(1) + bC(1)
Y = ab(c+C) + Bc(a+A) + bC(a+A)
Y = abc + abC + Bca + BcA + bCa + bCA
Y = abc + abC + aBc + ABc + abC + AbC
Y = abc + abC + aBc + ABc + AbC
That is the first equation.
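To double-check the derivation, here is a small brute-force verification of my own (not part of the answer) that (not A and not B) or (B xor C) matches the sum of products abc + abC + aBc + ABc + AbC on all eight rows:

// Brute-force truth-table check over all combinations of A, B, C.
for a in [false, true] {
    for b in [false, true] {
        for c in [false, true] {
            let lhs = (!a && !b) || (b != c)      // ab + (B xor C); != acts as XOR on Bool
            let rhs = (!a && !b && !c) ||         // abc
                      (!a && !b && c)  ||         // abC
                      (!a && b && !c)  ||         // aBc
                      (a && b && !c)   ||         // ABc
                      (a && !b && c)              // AbC
            print(a, b, c, lhs == rhs)            // prints true for every row
        }
    }
}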
I'm trying to get the average of an array of Ints using the following code:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 / numbers.count }
print(avg) // 1
Which is obviously incorrect. However, if I move the division outside the closure:
let numbers = [1,2,3,4,5]
let avg = numbers.reduce(0) { return $0 + $1 } / numbers.count
print(avg) // 3
Bingo! I think I remember reading somewhere (I can't recall if it was in relation to Swift, JavaScript, or programming math in general) that this has something to do with the fact that dividing the sum by the length yields a float/double, e.g. (1 + 2) / 5 = 0.6, which will be rounded down within the sum to 0. However, I would expect ((1 + 2) + 3) / 5 = 1.2 to return 1, yet it too seems to return 0.
With doubles, the calculation works as expected whichever way it's calculated, as long as I convert the count integer to a Double:
let numbers = [1.0,2.0,3.0,4.0,5.0]
let avg = numbers.reduce(0) { return $0 + $1 / Double(numbers.count) }
print(avg) // 3
I think I understand the why (maybe not?). But I can't come up with a solid example to prove it.
Any help and / or explanation is very much appreciated. Thanks.
The division does not yield a double; you're doing integer division.
You're not getting ((1 + 2) + 3 etc.) / 5.
In the first case, you're getting (((((0 + (1/5 = 0)) + (2/5 = 0)) + (3/5 = 0)) + (4/5 = 0)) + (5/5 = 1)) = 0 + 0 + 0 + 0 + 0 + 1 = 1.
In the second case, you're getting ((((((0 + 1) + 2) + 3) + 4) + 5) / 5) = 15 / 5 = 3.
In the third case, the floating-point rounding error is negligible compared to the integer truncation, and you get (((((0 + (1/5.0 = 0.2)) + (2/5.0 = 0.4)) + (3/5.0 = 0.6)) + (4/5.0 = 0.8)) + (5/5.0 = 1.0)) = 3.0.
The problem is that what you are attempting with the first piece of code does not make sense mathematically.
The average of a sequence is the sum of the entire sequence divided by the number of elements.
reduce calls the closure for every element of the collection it is called on, so you end up dividing at every step instead of dividing the final sum once.
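As a minimal sketch of the fix (my own, not from the answers): either sum first and divide once, or convert to Double before dividing, so that no intermediate integer division is rounded down:

let numbers = [1, 2, 3, 4, 5]

// Sum first, divide once: integer average, truncated toward zero.
let intAvg = numbers.reduce(0, +) / numbers.count                        // 3

// Convert to Double before dividing to get an exact average.
let doubleAvg = Double(numbers.reduce(0, +)) / Double(numbers.count)     // 3.0

print(intAvg, doubleAvg)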
For people finding it hard to understand the original answer, consider the following:
let x = 4
let y = 3
let answer = x/y
You might expect the answer to be a Double, but no, it is an Int. To get an answer which is not a rounded-down Int, you must explicitly convert the values to Double. See below:
let doubleAnswer = Double(x)/Double(y)
Hope this helped.
I'm porting and updating an old app and I came across this function. I'd like to know more about it, but I don't actually know what it's called. I'm assuming it has a popular name. Does anyone know?
This version is in Python, although it was originally in Java.
import ctypes

def encode(msg):  # msg is a string
    msg_len = len(msg)
    j = (msg_len + 6) / 7  # number of 7-byte groups (integer division in Python 2)
    k = 0
    cbytesOutput = [ctypes.c_byte(0)] * (msg_len + j)  # return is msg length + j bytes long
    for l in xrange(j):
        i1 = l * 8  # index of this group's flag byte in the output
        j1 = i1
        byte0 = ctypes.c_byte(-128)  # flag byte for the group, starts with only the high bit set
        byte1 = ctypes.c_byte(1)     # mask marking the current position within the group
        k1 = 0
        while k1 < 7 and k < msg_len:
            byte2 = ctypes.c_byte(ord(msg[k]))
            if (byte2.value & 0xffffff80) != 0:
                # the input byte has its high bit set: record that in the flag byte
                byte0 = ctypes.c_byte(byte0.value | byte1.value)
            j1 += 1
            cbytesOutput[j1] = ctypes.c_byte(byte2.value | 0xffffff80)  # store the byte with its high bit forced on
            byte1 = ctypes.c_byte(byte1.value << 1)
            k += 1
            k1 += 1
        cbytesOutput[i1] = byte0  # write the flag byte in front of its group
    return cbytesOutput
Any comments on the algorithm in general? I'm thinking of replacing it. The profiler says it's the worst function in the entire app (it accounts for 60% of the time on the slowest code path), and it bloats the data as well.
Thanks
Probably a simple question, but I seem to be suffering from programmer's block. :)
I have three boolean values: A, B, and C. I would like to save the state combination as an unsigned tinyint (max 255) into a database and be able to derive the states from the saved integer.
Even though there are only a limited number of combinations, I would like to avoid hard-coding each state combination to a specific value (something like if A=true and B=true has the value 1).
I tried assigning values to the variables (A=1, B=2, C=3) and then adding them, but I can't differentiate, for example, A and B being true from only C being true.
I am stumped but pretty sure that it is possible.
Thanks
Binary maths, I think. Give each flag a value that's a power of 2 (1, 2, 4, 8, etc.); then you can use the bitwise AND operator & to determine each value.
Say A = 1, B = 2 , C= 4
00000111 => A B and C => 7
00000101 => A and C => 5
00000100 => C => 4
then to determine them :
if( val & 4 ) // same as if (C)
if( val & 2 ) // same as if (B)
if( val & 1 ) // same as if (A)
if((val & 4) && (val & 2) ) // same as if (C and B)
No need for a state table.
Edit: to reflect comment
If the tinyint has a maximum value of 255, you have 8 bits to play with and can store 8 boolean values in there.
Binary math, as others have said.
encoding:
myTinyInt = A*1 + B*2 + C*4 (assuming you convert A,B,C to 0 or 1 beforehand)
decoding:
bool A = myTinyInt & 1 != 0 (& is the bitwise and operator in many languages)
bool B = myTinyInt & 2 != 0
bool C = myTinyInt & 4 != 0
I'll add that you should find a way to avoid magic numbers. You can build the masks as constants using a logical left shift (bit shift) by the bit position of the flag of interest in the bit field. (Wow... that makes almost no sense.) An example in C++ would be:
enum Flags {
    kBitMask_A = (1 << 0),
    kBitMask_B = (1 << 1),
    kBitMask_C = (1 << 2),
};

uint8_t byte = 0;          // byte = 0b00000000
byte |= kBitMask_A;        // Set A,   byte = 0b00000001
byte |= kBitMask_C;        // Set C,   byte = 0b00000101
if (byte & kBitMask_A) {   // Test A,  (0b00000101 & 0b00000001) = T
    byte &= ~kBitMask_A;   // Clear A, byte = 0b00000100
}
In any case, I would recommend looking for Bitset support in your favorite programming language. Many languages will abstract the logical operations away behind normal arithmetic or "test/set" operations.
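For instance, in Swift this kind of flag packing is usually expressed with OptionSet, which hides the shifting and masking behind set operations. A minimal sketch (the StateFlags type and its member names are mine, purely illustrative):

// Three flags packed into a single UInt8, suitable for an unsigned tinyint column.
struct StateFlags: OptionSet {
    let rawValue: UInt8

    static let a = StateFlags(rawValue: 1 << 0)
    static let b = StateFlags(rawValue: 1 << 1)
    static let c = StateFlags(rawValue: 1 << 2)
}

let state: StateFlags = [.a, .c]              // encode: A and C set
let stored = state.rawValue                   // 5, the value to store in the database

let restored = StateFlags(rawValue: stored)   // decode from the stored integer
print(restored.contains(.a))                  // true
print(restored.contains(.b))                  // false
print(restored.contains(.c))                  // true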
Need to use binary...
A = 1,
B = 2,
C = 4,
D = 8,
E = 16,
F = 32,
G = 64,
H = 128
This means A + B = 3, while C alone = 4, so no two different combinations ever produce the same value. I've listed the maximum you can fit in a single byte: 8 values (or bits).