How do I fix UnboundLocalError in Python 3 - python-3.7

I'm doing a homework assignment for one of my classes, EMT 1111, and I'm stuck at the moment. The question I'm trying to answer asks me this: Write an interactive console program that prompts the user to read in two input values: a number of feet, followed on a separate line by a number of inches. The program should convert this amount to centimeters. Here is a sample run of the program (user input is shown like this):
This program converts feet and inches to centimeters.
Enter number of feet: 5
Enter number of inches: 11
5 ft 11 in = 180.34 cm
Here is the code I have written so far for this assignment:
centimeters = 2.54
feet_to_inches = feet * 12
print("This program converts feet and inches to centimeters.")
feet = int(input("Enter number of feet: "))
inches = int(input("Enter number of inches: "))
inches_to_centimeters = (feet_to_inches + inches) * centimeters
print = float(input(feet, "ft", inches, "in =",inches_to_centimeters, "cm"))
Every time I submit the code I keep getting an UnboundLocalError. Can someone point out the mistake I'm making so I can fix it?

I'm not sure if it's the reason for the error, but in your last line you use print as a variable name. print is a built-in function in Python 3, so assigning to that name shadows the function and breaks any later call to print().

You have a number of issues:
On your 2nd line, you are using feet before it is defined.
On your last line, you are using print as a variable instead of calling it as a function.
Also on your last line, you have wrapped what should be printed in an input() call.
It's minor, but I would suggest self-descriptive variable names.
So with this in mind, let's refactor your code:
#!/usr/bin/env python3.7
It's a good idea to include a shebang line to make sure you target the correct Python version.
feet_to_inches_multiplier = 12
inches_to_centimeters_multiplier = 2.54
As I said, use self-descriptive variable names. That way their intended purpose is more obvious.
print("This program converts feet and inches to centimeters.")
This line is fine.
feet = int(input("Enter number of feet: "))
inches = int(input("Enter number of inches: "))
centimeters = (feet * feet_to_inches_multiplier + inches) * inches_to_centimeters_multiplier
Hopefully, you can see the increase in readability here and how the centimeters calculation flows naturally.
print(feet, "ft", inches, "in =", centimeters, "cm")
And this, I assume, is supposed to be a simple print statement.
Here's the output:
This program converts feet and inches to centimeters.
Enter number of feet: 1
Enter number of inches: 1
1 ft 1 in = 33.02 cm

I don't really understand what you want to do, but print() doesn't support the way you're trying to pass those arguments.
For the piece of code provided, the following code may be what you're looking for:
centimeters = 2.54
print("This program converts feet and inches to centimeters.")
feet = int(input("Enter number of feet: "))
feet_to_inches = feet * 12
inches = int(input("Enter number of inches: "))
inches_to_centimeters = (feet_to_inches + inches) * centimeters
print(feet, "ft", inches, "in =", inches_to_centimeters, "cm")
Hope this helps you.

Related

truncatingRemainder(dividingBy:) returning non-zero remainder even if the number is completely divisible

I am trying to get the remainder using Swift's truncatingRemainder(dividingBy:) method.
But I am getting a non-zero remainder even though the value I am using is completely divisible by the divisor. I have tried a number of solutions available here but none worked.
P.S. The values I am using are Double (I tried Float also).
Here is my code.
let price = 0.5
let tick = 0.05
let remainder = price.truncatingRemainder(dividingBy: tick)
if remainder != 0 {
return "Price should be in multiple of tick"
}
I am getting 0.049999999999999975 as the remainder, which is clearly not the expected result.
As usual (see https://floating-point-gui.de), this is caused by the way numbers are stored in a computer.
According to the docs, this is what we expect
let price = //
let tick = //
let r = price.truncatingRemainder(dividingBy: tick)
let q = (price/tick).rounded(.towardZero)
tick*q+r == price // should be true
In the case where it looks to your eye as if tick evenly divides price, everything depends on the inner storage system. For example, if price is 0.4 and tick is 0.04, then r is vanishingly close to zero (as you expect) and the last statement is true.
But when price is 0.5 and tick is 0.05, there is a tiny discrepancy due to the way the numbers are stored, and we end up with this odd situation where r, instead of being vanishingly close to zero, is vanishingly close to tick! And of course the last statement is then false.
You'll just have to compensate in your code. Clearly the remainder cannot be the divisor, so if the remainder is vanishingly close to the divisor (within some epsilon), you'll just have to disregard it and call it zero.
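That check might look something like this (a minimal sketch; the 1e-9 tolerance is an arbitrary illustrative value, not a recommended constant):
func isMultiple(_ price: Double, of tick: Double, tolerance: Double = 1e-9) -> Bool {
    let r = price.truncatingRemainder(dividingBy: tick)
    // Treat a remainder vanishingly close to 0 *or* to tick as "no remainder".
    return r < tolerance || (tick - r) < tolerance
}
isMultiple(0.5, of: 0.05)   // true, even though the raw remainder is ~0.049999...
isMultiple(0.52, of: 0.05)  // false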
You could file a bug on this but I doubt that much can be done about it.
Okay, I put in a query about this and got back that it behaves as intended, as I suspected. The reply (from Stephen Canon) was:
That's the correct behavior. 0.05 is a Double with the value 0.05000000000000000277555756156289135105907917022705078125. Dividing 0.5 by that value in exact arithmetic gives 9 with a remainder of 0.04999999999999997501998194593397784046828746795654296875, which is exactly the result you're seeing.
The only rounding error that occurs in your example is in the division price/tick, which rounds up to 10 before your .rounded(.towardZero) has a chance to take effect. We'll add an API to let you do something like price.divided(by: tick, rounding: .towardZero) at some point, which will eliminate this rounding, but the behavior of truncatingRemainder is precisely as intended.
You really want to have either a decimal type (also on the list of things to do) or to scale the problem by a power of ten so that your divisor becomes exact:
1> let price = 50.0
price: Double = 50
2> let tick = 5.0
tick: Double = 5
3> let r = price.truncatingRemainder(dividingBy: tick)
r: Double = 0
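Applied to the original values, that scaling might look like this (a sketch; the factor of 100 assumes prices with two decimal places):
// Scale to whole "tick units" so the divisor is exact.
let price = 0.5
let tick = 0.05
let scaledPrice = (price * 100).rounded()   // 50
let scaledTick = (tick * 100).rounded()     // 5
let remainder = scaledPrice.truncatingRemainder(dividingBy: scaledTick)   // 0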

Why are my printed outputs different?

I've been slowly building up my skills in Swift. Drawing with loops is a great way, I find, to understand the subtleties of the language.
Here is an interesting puzzle I can't quite figure out:
I've been trying to generate a truncated pyramid like this for a little while.
I finally got a rough one produced using a for loop (screenshot here). But, as you can see, one of my earlier attempts generated a half-truncated pyramid.
The only difference between the two is that on lines 19 and 33, the variable "negativeSpaceThree" is diminished by 2 and 1 respectively.
Can anyone explain why the outputs are so different? I'd really like to understand these nuances. It might simply be my math, but I'm wondering if it's a bug.
Many thanks for any input offered.
Code added below:
let space = " "
var negativeSpaceTwo = 22
var xTwo = 3
for circumTwo in 1...11 {
    xTwo += 2
    negativeSpaceTwo -= 2
    print(String(repeating: "-", count: negativeSpaceTwo), String(repeating: "*", count: xTwo))
}
print(space)
print(space)
var negativeSpaceThree = 11
var xThree = 3
for circumTwo in 1...11 {
    xThree += 2
    negativeSpaceThree -= 1
    print(String(repeating: "-", count: negativeSpaceThree), String(repeating: "*", count: xThree))
}
It's because of how the total length of each printed line changes. If your total line length is total = dashes + stars and you subtract 2 from dashes each time while adding 2 to stars each time, the total line length will remain the same, so the right edge of the output never moves.
In the second pyramid, you reduce the dashes by one, but add 2 to stars, so the total length increases by 1 each line, giving the pyramid effect on the right-hand side of the text.
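A small sketch of that arithmetic with illustrative values:
var dashes = 22
var stars = 3
for _ in 1...3 {
    dashes -= 2   // first loop: dashes shrink by 2...
    stars += 2    // ...while stars grow by 2
    print(dashes + stars)   // prints 25 every time, so the right edge never moves
}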

In Swift 3.0, how to make one character in a string move backward when typing?

I am new to Swift.
I am trying to make a budget application. The app has a calculator-like keyboard. My idea is that when users enter an amount of money, the app will automatically add a decimal place for them.
For example, if you type 1230 it will give you 12.30, and if you type 123 it will display 1.23.
I wrote a couple of lines of code, shown below. The problem is that it can only add the decimal point after the first digit; it won't move the point back when you enter more digits, so it can only display X.XXXXX.
I tried to solve this problem with String.index (maybe increasing the index?) and NSNumber/NSString formatting, but I don't know whether this is the right direction.
let number = sender.currentTitle!
let i: String = displayPayment.text!
if (displayPayment.text?.contains("."))! {
    displayPayment.text = i == "0" ? number : displayPayment.text! + number
}
else {
    displayPayment.text = i == "0" ? number : displayPayment.text! + "." + number
}
Indexing strings in Swift is not as "straightforward" as many would like, simply due to how strings are represented internally. If you just want to add a . before the second-to-last position of the user input, you could do it like this:
let amount = "1230"
var result = amount
if amount.characters.count >= 2 {
    let index = amount.index(amount.endIndex, offsetBy: -2)
    result = amount[amount.startIndex..<index] + "." + amount[index..<amount.endIndex]
} else {
    result = "0.0\(amount)"
}
So for the input of 1230, result will be 12.30. You might want to adjust this depending on your specific needs. For example, if the user inputs 30, this code would result in .30 (which might or might not be what you want).
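If it helps, the same logic could be wrapped in a helper and re-run on every keystroke; this is just a sketch in the answer's Swift 3 syntax, and the function name is purely illustrative:
func insertingDecimalPoint(_ digits: String) -> String {
    // Fewer than two digits: pad so there is always something before the point.
    guard digits.characters.count >= 2 else { return "0.0\(digits)" }
    let index = digits.index(digits.endIndex, offsetBy: -2)
    return digits[digits.startIndex..<index] + "." + digits[index..<digits.endIndex]
}
insertingDecimalPoint("1230")   // "12.30"
insertingDecimalPoint("123")    // "1.23"
insertingDecimalPoint("3")      // "0.03"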

Accountant rounding in Swift

I'm not sure how to round numbers in the following manner in Swift:
6.51, 6.52, 6.53, 6.54 should be rounded down to 6.50
6.56, 6.57, 6.58, 6.59 should be rounded down to 6.55
I have already tried
func roundDown(number: Double, toNearest: Double) -> Double {
    return floor(number / toNearest) * toNearest
}
with no success. Any thoughts?
Here's your problem (and it has nothing to do with Swift whatsoever): Floating-point arithmetic is not exact. Let's say you try to divide 6.55 by 0.05 and expect a result of 131.0. In reality, 6.55 is "some number close to 6.55" and 0.05 is "some number close to 0.05", so the result that you get is "some number close to 131.0". That result is likely just a tiny little bit smaller than 131.0, maybe 130.999999999999, and floor() returns 130.0.
What you do: You decide what is the smallest number that you still want to round up. For example, you'd want 130.999999999999 to give a result of 131.0. You'd probably want 130.9999 to give a result of 131.0. So change your code to
floor(number * 20.0 + 0.0001) / 20.0
This will round 6.549998 to 6.55, so check if you are OK with that. Also, floor() works in an unexpected way for negative input, so -6.57 would be rounded down to -6.60, which is likely not what you want.
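Folding that tolerance back into the original function might look like this (a sketch; the 0.0001 epsilon is just the illustrative choice discussed above):
import Foundation

func roundDown(number: Double, toNearest: Double) -> Double {
    // Nudge the quotient up slightly so a value like 130.999999999999
    // still floors to 131 before scaling back.
    return floor(number / toNearest + 0.0001) * toNearest
}

roundDown(number: 6.53, toNearest: 0.05)   // 6.5
roundDown(number: 6.57, toNearest: 0.05)   // ≈ 6.55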

Rounding Off Decimal Value to Previous and Next Hundreds

I am working in C#. I have a decimal variable startFilter that contains a value, say 66.76. Now I want this value to appear in the search filter $0 to $100. But what I also want is for the search filter to start at the hundred bracket that the value in startFilter falls into. So for instance in this case the search filter will be $0 to $100 because the value in startFilter is 66.76, but in another case it could be $100 to $200 if the value in startFilter is, say, $105.
Having said that, how should I round off the value in startFilter to the previous hundred and the next hundred? For example, if the value is 66.76 it rounds off to 0 as the floor and 100 as the ceiling, and so on.
Any idea how to do that in C#?
double value = ...
int lowerBound = ((int) Math.Floor(value / 100.0)) * 100;    // previous hundred, e.g. 0 for 66.76
int upperBound = ((int) Math.Ceiling(value / 100.0)) * 100;  // next hundred, e.g. 100 for 66.76
Divide your original number by 100, get the floor and ceiling values, then multiply each of them by 100.
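A quick sketch of that idea (shown in Swift, which the other answers on this page use; in C# the equivalent calls are Math.Floor and Math.Ceiling):
import Foundation

let startFilter = 66.76
let lowerBound = floor(startFilter / 100.0) * 100.0   // 0.0
let upperBound = ceil(startFilter / 100.0) * 100.0    // 100.0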