PIL: can't change pixel values after converting an image from "L" to "F" (AttributeError: 'int' object has no attribute 'data')

I have an "L" type image that I converted to float "F". I get an error when trying to change its pixel values after conversion. Below is a snippet.
a = mnist_images[i].convert(mode="F")
a = a.point(lambda j: 0)
And the error message:
<ipython-input-45-f89cda3cf30e> in generate_moving_mnist(shape, seq_len, seqs, num_sz, nums_per_image)
86 b=mnist_images[i].point(lambda j: 0)
87 a=mnist_images[i].convert(mode="F")#.point(lambda j: 0)
---> 88 a = a.point(lambda j: 0)
89
90 # paste the mnist_images[i] at position[i] on the current canvas
~\anaconda3\envs\tf\lib\site-packages\PIL\Image.py in point(self, lut, mode)
1572 # UNDONE wiredfool -- I think this prevents us from ever doing
1573 # a gamma function point transform on > 8bit images.
-> 1574 scale, offset = _getscaleoffset(lut)
1575 return self._new(self.im.point_transform(scale, offset))
1576 # for other modes, convert the function to a table
~\anaconda3\envs\tf\lib\site-packages\PIL\Image.py in _getscaleoffset(expr)
483 def _getscaleoffset(expr):
484 stub = ["stub"]
--> 485 data = expr(_E(stub)).data
486 try:
487 (a, b, c) = data # simplified syntax
AttributeError: 'int' object has no attribute 'data'
I am able to change the pixel values if I do not convert the image type, i.e. the line below on its own works just fine.
b = mnist_images[i].point(lambda j: 0)
I am currently using a workaround, but I think it's quite slow:
a = mnist_images[i].copy()  # note: convert() already returns a new image, so this copy() is redundant
a = a.convert(mode="F")
for x in range(a.size[0]):      # size[0] is the width
    for y in range(a.size[1]):  # size[1] is the height
        a.putpixel((x, y), new_pixel_value)

It appears from line 1581 of PIL/Image.py that the point() function is not supported on mode "F" (float) images:
if self.mode == "F":
    # FIXME: _imaging returns a confusing error message for this case
    raise ValueError("point operation not supported for this mode")
In your traceback the same limitation surfaces earlier: for "F" images, point() tries to reduce the lambda to a linear scale/offset expression via _getscaleoffset, and a lambda that returns a plain int (0) is not such an expression, hence the AttributeError: 'int' object has no attribute 'data'.
If you would care to say what you are actually trying to do, I can maybe suggest a workaround.
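In the meantime, for a constant fill like the lambda j: 0 above, going through numpy is much faster than a putpixel loop. A minimal sketch, assuming numpy is installed:
import numpy as np
from PIL import Image

arr = np.array(mnist_images[i], dtype=np.float32)  # copy the pixel data as float32 ("F" mode)
arr[:] = 0                                         # vectorised assignment replaces the Python loop
a = Image.fromarray(arr, mode="F")                 # back to a PIL "F" image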

Related

Confused about some hexadecimals, longs, and ints

I am following along in my Scala textbook and I see this:
scala> val hex = 0x5
hex: Int = 5
scala> val hex2 = 0x00ff
hex2: Int = 255
scala> val hex3 = 0xff
hex3: Int = 255
scala> var hex4 = 0xbe
hex4: Int = 190
scala> var hex5 = 0xFF
hex5: Int = 255
scala> val magic = 0xcafebabe
magic: Int = -889275714
scala> var prog = 0xCAFEBABEL
prog: Long = 3405691582
scala> val tower = 35l
tower: Long = 35
My questions:
Why do you need the extra 00 after the x in 0x00ff?
I get why ff = 255: hexadecimal is base 16, starting at 00 = 0, with 0f = 15. But why does 0xcafebabe = -889275714?
What is going on with the Longs?
You don't need the extra 00; it's just to show that leading 0s are ignored, as far as I can tell.
Int is a 32-bit signed integer: 0xcafebabe sets the highest bit (bit 31), which two's-complement representation interprets as the sign. In short, you have an overflow into the sign bit.
If you append "L", the literal is a Long, which uses 64 bits, so the overflow doesn't happen.
The two leading zeros in 0x00ff are purely cosmetic: they pad the literal to four hex digits but change neither its value nor its sign.
0xcafebabe comes out negative not because of any missing zeros, but because its top bit (bit 31) is set, and a 32-bit Int reads that bit as the sign in two's complement.
Appending L widens the literal to a 64-bit Long, where bit 31 is an ordinary value bit with unseen zeros stretching above it, so the same bit pattern reads as the positive number 3405691582.
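To make the two's-complement wrap concrete, here is a small sketch in Python (whose ints are unbounded, so the 32-bit reinterpretation has to be spelled out by hand):
value = 0xCAFEBABE        # 3405691582 when read as unsigned
if value & (1 << 31):     # bit 31 set -> negative as a 32-bit signed int
    value -= 1 << 32      # subtract 2**32 to get the two's-complement value
print(value)              # -889275714, matching Scala's Int
print(0xCAFEBABE)         # 3405691582, matching the 64-bit Long, where bit 31 is not the sign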

Julia: Inexact error when trying to get the integer part of a BigFloat

I am interested in getting the digits of a BigFloat in the form of bytes. I get a very strange error that I cannot debug. I provide a minimal example where the error appears.
function floatToBytes(x::BigFloat)
    ret = zeros(UInt8, 4)
    xs = significand(x)/2
    b = UInt8(0)
    for i = 1:4
        xs *= 256
        b = trunc(UInt8, xs)
        ret[i] = b
        xs -= b
    end
    return ret
end
println( floatToBytes(BigFloat(0.9921875001164153)) )
println( floatToBytes(BigFloat(0.9960937501164153)) )
What I get when running this is
UInt8[0xfe, 0x00, 0x00, 0x00]
ERROR: LoadError: InexactError()
Stacktrace:
[1] trunc(::Type{UInt8}, ::BigFloat) at ./mpfr.jl:201
etc.
It seems that it doesn't want to turn 255 into a UInt8. I can circumvent the problem by defining the function as
function floatToBytes(x::BigFloat)
    ret = zeros(UInt8, 4)
    xs = significand(x)/2
    b = UInt8(0)
    for i = 1:4
        xs *= 256
        try
            b = trunc(UInt8, xs)
        catch
            b = trunc(UInt8, xs-1) + UInt8(1)
        end
        ret[i] = b
        xs -= b
    end
    return ret
end
But this is highly unsatisfactory. What is going on here?
The problem looks like a bug in trunc for BigFloat: the current code does (typemin(T) <= x <= typemax(T)) || throw(InexactError(:trunc, T, x)), which throws because x here is slightly larger than 255, the typemax, even though its truncation (255) would fit.
It actually needs to do the trunc in the BigFloat domain first and then cast to T (with the cast checking against typemax).
I've opened an issue regarding this at: https://github.com/JuliaLang/julia/issues/24041
In the meantime, a solution could be to do:
UInt8(trunc(xs))
i.e. trunc first and cast later. For example:
julia> UInt8(trunc(BigFloat(0.9960937501164153)*256))
0xff
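The failure mode can be sketched in Python (the helper names trunc_u8_buggy and trunc_u8_fixed are illustrative, not Julia's actual code): the pre-fix logic range-checks the un-truncated value, which for the second input sits just above typemax even though its truncation fits.
def trunc_u8_buggy(x: float) -> int:
    # mimics the pre-fix check: range-test the *un-truncated* value
    if not (0.0 <= x <= 255.0):
        raise OverflowError("InexactError: trunc(UInt8, x)")
    return int(x)  # int() truncates toward zero for non-negative x

def trunc_u8_fixed(x: float) -> int:
    # truncate first, then range-check the resulting integer
    t = int(x)
    if not (0 <= t <= 255):
        raise OverflowError("InexactError")
    return t

print(trunc_u8_fixed(255.0000000298))  # 255 (roughly 0.9960937501164153 * 256)
print(trunc_u8_buggy(255.0000000298))  # raises, even though trunc(x) == 255 fits in a UInt8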

How can I sum up functions that are made of elements of an imported dataset?

See the code and error below. I have already tried Do, For, ... and it is not working.
CODE + Error from Mathematica:
Import of survival probabilities _{k}p_x and _{k}p_y (calculated in Excel):
px = Import["C:\Users\Eva\Desktop\kpx.xlsx"];
px = Flatten[Take[px, All], 1];
NOTE: The probability _{k}p_x can be found at position px[[k+2, x-16]].
i = 0.04;
v = 1/(1 + i);
JointLifeIndep[x_, y_, n_] = Sum[v^k*px[[k + 2, x - 16]]*py[[k + 2, y - 16]], {k , 0, n - 1}]
Part::pkspec1: The expression 2+k cannot be used as a part specification.
Part::pkspec1: The expression 2+k cannot be used as a part specification.
Part::pkspec1: The expression 2+k cannot be used as a part specification.
General::stop: Further output of Part::pkspec1 will be suppressed during this calculation.
Part of the dataset (top-left corner):
k\x 18 19 20
0 1 1 1
1 0.999478086278185 0.999363078716059 0.99927911905056
2 0.998841497412202 0.998642656911039 0.99858030519133
3 0.998121451605207 0.99794428814123 0.99788275311401
4 0.997423447323642 0.997247180349674 0.997174407432264
5 0.996726703362208 0.996539285828369 0.996437857252448
6 0.996019178300768 0.995803204773039 0.99563600297737
7 0.995283481416241 0.995001861216016 0.994823584922968
8 0.994482556091416 0.994189960607964 0.99405569519175
9 0.993671079225432 0.99342255996206 0.993339856748282
10 0.992904079096455 0.992707177451333 0.992611817294026
11 0.992189069953677 0.9919796017009 0.991832027835091
Without having the exact same data files to work with, it is often easy for each of us to make mistakes that the other cannot reproduce or understand.
From your snapshot of your data set I used Export in Mathematica to try to reproduce your .xlsx file. Then I tried the following
px = Import["kpx.xlsx"];
px = Flatten[Take[px, All], 1];
py = px; (* fake some py data *)
i = 0.04;
v = 1/(1 + i);
JointLifeIndep[x_, y_, n_] := Sum[v^k*px[[k+2,x-16]]*py[[k+2,y-16]], {k,0,n-1}];
JointLifeIndep[17, 17, 12]
and it displays 362.402
Notice I used := instead of = in my definition of JointLifeIndep. := and = do different things in Mathematica: = (Set) evaluates the right-hand side immediately, so Sum tries to extract px[[k+2, x-16]] while k and x are still undefined symbols, which is exactly what the Part::pkspec1 messages complain about. := (SetDelayed) defers evaluation until the function is called with concrete numbers. This is most likely the reason you are getting the error that you do.
You should also be careful with your subscript values and make sure that every subscript is between 1 and the number of rows (or columns) in your matrix.
So see if you can try this example with an Excel sheet containing only the snapshot of data that you showed and see if you get the same result that I do.
Hopefully that will be enough for you to make progress.
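For reference, the quantity being computed is just a discounted sum of products of survival probabilities. A rough Python sketch of the same formula, with hypothetical list-of-lists tables kpx and kpy, where kpx[k][j] holds the k-year survival probability for age 18 + j:
v = 1 / 1.04  # discount factor for i = 4%

def joint_life_indep(kpx, kpy, x, y, n, first_age=18):
    # sum_{k=0}^{n-1} v^k * kpx[k][x - first_age] * kpy[k][y - first_age]
    return sum(v**k * kpx[k][x - first_age] * kpy[k][y - first_age]
               for k in range(n))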

Cryptic TypeError: 'decimal.Decimal' object cannot be interpreted as an integer

I am struggling to understand why this function apparently fails in the Jupyter Notebook, but not in the IPython shell:
from decimal import Decimal

def present_value( r, n, fv = None, pmt = None ):
    '''
    Function to compute the Present Value based on interest rate and
    a given future value.

    Arguments accepted
    ------------------
    * r = interest rate, which should be given in its original
      percentage, eg. 5% instead of 0.05

    * n = number of periods for which the cash flow, either as annuity
      or single flow from one present value

    * fv = future value in dollars; if problem is annuity based, leave
      this empty

    * pmt = each annuity payment in dollars; if problem is single cash
      flow based, leave this empty
    '''
    original_args = [r, n, fv, pmt]
    dec_args = [Decimal( arg ) if arg != None
                else arg
                for arg in original_args
               ]
    if dec_args[3] == None:
        return dec_args[2] / ( ( 1 + ( dec_args[0] / 100 ) )**dec_args[1] )
    elif dec_args[2] == None:
        # annuity_length = range( 1, dec_args[1] + 1 )
        # A Decimal object cannot be used in the range() function,
        # so we dereference the integer from original_args
        annuity_length = range( 1, original_args[1] + 1 )
        # Apply discounting to each annuity payment made
        # according to number of years left till end
        all_compounded_pmt = [ dec_args[3] * ( 1 / ( ( 1 + dec_args[0] / 100 ) ** time_left ) )
                               for time_left in annuity_length
                             ]
        return sum( all_compounded_pmt )
When I imported the module that this function resides in, named functions.py, using from functions import *, and then executed present_value(r=7, n=35, pmt = 11000), I got the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-93-c1cc587f7e27> in <module>()
----> 1 present_value(r=7, n=35, pmt = 11000)
/path_to_file/functions.py in present_value(r, n, fv, pmt)
73 if dec_args[3] == None:
74 return dec_args[2]/((1 + (dec_args[0]/100))**dec_args[1])
---> 75
76 elif dec_args[2] == None:
77 # annuity_length = range(1, dec_args[1]+1)
TypeError: 'decimal.Decimal' object cannot be interpreted as an integer
but in the IPython shell, the function works perfectly fine:
In [42]: functions.present_value(r=7, n=35, pmt = 11000)
Out[42]: Decimal('142424.39530474029537')
Can anyone please help me with this really confusing and obscure issue?
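While the notebook-vs-shell discrepancy is its own puzzle, the TypeError itself is easy to reproduce: range() only accepts true integers, and decimal.Decimal does not qualify. A minimal sketch:
from decimal import Decimal

n = Decimal(35)
total = n + 1                      # fine: Decimal supports arithmetic with ints -> Decimal('36')
# range(1, total)                  # TypeError: 'decimal.Decimal' object cannot be interpreted as an integer
print(list(range(1, int(total))))  # works once the Decimal is converted back to an int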

Three boolean values saved in one tinyint

Probably a simple question, but I seem to be suffering from programmer's block. :)
I have three boolean values: A, B, and C. I would like to save the state combination as an unsigned tinyint (max 255) into a database and be able to derive the states from the saved integer.
Even though there are only a limited number of combinations, I would like to avoid hard-coding each state combination to a specific value (something like if A=true and B=true has the value 1).
I tried assigning values to the variables (A=1, B=2, C=3) and then adding them, but I can't differentiate between A and B being true (1+2 = 3) and only C being true (3).
I am stumped but pretty sure that it is possible.
Thanks
Binary maths, I think. Choose a value for each flag that's a power of 2 (1, 2, 4, 8, etc.), then you can use the bitwise-and operator & to determine the state.
Say A = 1, B = 2, C = 4
00000111 => A, B and C => 7
00000101 => A and C => 5
00000100 => C => 4
then to determine them :
if( val & 4 ) // same as if (C)
if( val & 2 ) // same as if (B)
if( val & 1 ) // same as if (A)
if((val & 4) && (val & 2) ) // same as if (C and B)
No need for a state table.
Edit: to reflect comment
If the tinyint has a maximum value of 255 => you have 8 bits to play with and can store 8 boolean values in there
Binary math, as others have said.
Encoding:
myTinyInt = A*1 + B*2 + C*4 (assuming you convert A, B, C to 0 or 1 beforehand)
Decoding:
bool A = (myTinyInt & 1) != 0 (& is the bitwise-and operator in many languages; the parentheses matter, since & often binds more loosely than !=)
bool B = (myTinyInt & 2) != 0
bool C = (myTinyInt & 4) != 0
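That encode/decode round trip, written out as a minimal Python sketch:
def pack(a: bool, b: bool, c: bool) -> int:
    # each flag gets its own power-of-two bit: A -> 1, B -> 2, C -> 4
    return (int(a) << 0) | (int(b) << 1) | (int(c) << 2)

def unpack(n: int):
    return bool(n & 1), bool(n & 2), bool(n & 4)

assert pack(True, True, False) == 3   # A and B set
assert pack(False, False, True) == 4  # only C set -- no longer ambiguous with A and B
assert unpack(5) == (True, False, True)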
I'll add that you should find a way to avoid magic numbers. You can build the masks as named constants using the left-shift operator, shifting 1 by the bit position of each flag. An example in C++ would be:
enum Flags {
    kBitMask_A = (1 << 0),
    kBitMask_B = (1 << 1),
    kBitMask_C = (1 << 2),
};

uint8_t byte = 0;        // byte = 0b00000000
byte |= kBitMask_A;      // Set A,   byte = 0b00000001
byte |= kBitMask_C;      // Set C,   byte = 0b00000101
if (byte & kBitMask_A) { // Test A,  (0b00000101 & 0b00000001) = T
    byte &= ~kBitMask_A; // Clear A, byte = 0b00000100
}
In any case, I would recommend looking for Bitset support in your favorite programming language. Many languages will abstract the logical operations away behind normal arithmetic or "test/set" operations.
You need to use binary:
A = 1,
B = 2,
C = 4,
D = 8,
E = 16,
F = 32,
G = 64,
H = 128
This means A + B = 3 but C = 4, so you'll never have two conflicting values. I've listed the maximum you can have in a single byte: 8 values (bits).