Getting UnicodeDecodeError while trying to load a pre-trained model using torch.load(PATH)

I am trying to load a pre-trained ResNet-18 model using torch.load(PATH), but I am getting a UnicodeDecodeError. Please help.
Traceback (most recent call last):
File "main.py", line 312, in <module>
main()
File "main.py", line 138, in main
checkpoint = torch.load(args.resume)
File "F:\InsSoft\Anaconda\lib\site-packages\torch\serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "F:\InsSoft\Anaconda\lib\site-packages\torch\serialization.py", line 773, in _legacy_load
result = unpickler.load()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 2: invalid start byte

This error occurs whenever the model was pretrained with a torch version < 0.4 but a torch version > 0.4 is used for testing / resuming.
So use checkpoint = torch.load(args.resume, encoding='latin1').
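For context, a minimal hedged sketch of that workaround (the checkpoint file name is hypothetical; the encoding keyword is simply forwarded to Python's unpickler):
import torch

CKPT_PATH = 'resnet18_checkpoint.pth'  # hypothetical path to the old checkpoint
# 'latin1' lets a checkpoint saved with torch < 0.4 be deserialized on torch >= 0.4.
checkpoint = torch.load(CKPT_PATH, map_location='cpu', encoding='latin1')
print(type(checkpoint), list(checkpoint.keys()) if isinstance(checkpoint, dict) else '')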

Related

Exporting trained instrument in DDSP-VST

I bought Colab Pro+ and uploaded my own instrument to Google Drive. I started the training module, and after 30 minutes it finished but gave me an error saying that the instrument is not found.
This is the error output:
Exporting model...
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 165, in get_latest_operative_config
restore_dir, prefix='operative_config-', suffix='.gin')
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 106, in get_latest_file
f'No files found matching the pattern '{search_pattern}'.')
FileNotFoundError: No files found matching the pattern '/content/gdrive/MyDrive/My/operative_config-*.gin'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ddsp_export", line 8, in
sys.exit(console_entry_point())
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/ddsp_export.py", line 364, in console_entry_point
app.run(main)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 312, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 258, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/ddsp_export.py", line 333, in main
export_impulse_response(model_path, save_dir, FLAGS.reverb_sample_rate)
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/ddsp_export.py", line 272, in export_impulse_response
ddsp.training.inference.parse_operative_config(model_path)
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/inference.py", line 41, in parse_operative_config
operative_config = train_util.get_latest_operative_config(ckpt_dir)
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 168, in get_latest_operative_config
os.path.dirname(restore_dir), prefix='operative_config-', suffix='.gin')
File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 106, in get_latest_file
f'No files found matching the pattern '{search_pattern}'.')
FileNotFoundError: No files found matching the pattern '/content/gdrive/MyDrive/operative_config-*.gin'.
Export complete! Zipping /content/gdrive/MyDrive/My Instrument/ddsp-training-2022-07-18-0955/My_Instrument to /content/gdrive/MyDrive/My Instrument/ddsp-training-2022-07-18-0955/My_Instrument.zip
/bin/bash: line 0: cd: too many arguments
Zipping Complete! Downloading... My_Instrument.zip
You can also find your model at /content/gdrive/MyDrive/My Instrument/ddsp-training-2022-07-18-0955/My_Instrument
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/ipyfilechooser/filechooser.py in _on_select_click(self, _b)
315 if self._callback is not None:
316 try:
--> 317 self._callback(self)
318 except TypeError:
319 # Support previous behaviour of not passing self
3 frames
/usr/local/lib/python3.7/dist-packages/google/colab/files.py in download(filename)
187 raise OSError(msg)
188 else:
--> 189 raise FileNotFoundError(msg) # pylint: disable=undefined-variable
190
191 comm_manager = _IPython.get_ipython().kernel.comm_manager
FileNotFoundError: Cannot find file: /content/gdrive/MyDrive/My Instrument/ddsp-training-2022-07-18-0955/My_Instrument.zip

Does a Caffe model work on images downloaded from Google search?

So, I started with this article https://towardsdatascience.com/predict-age-and-gender-using-convolutional-neural-network-and-opencv-fd90390e3ce6 for age and gender detection, and I am facing a trivial problem: I am not able to run Caffe on pictures downloaded from Google. It actually runs only on the pictures that I take with my phone or webcam. Is there any specific reason, or am I doing something incorrectly? I am also wrapping all of this with Flask.
For example, when I feed it this image that I took from a Google search, https://www.hanselman.com/blog/content/binary/WindowsLiveWriter/DIYMakingaWideAngleWebcam_1478B/2010-02-16%2023-01-29.283_2.jpg,
I get this in my logs:
127.0.0.1 - - [12/Mar/2020 11:51:57] "POST /predicWithImage HTTP/1.1" 500 -
Traceback (most recent call last):
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask_cors\extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
return self.finalize_request(rv)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 1967, in finalize_request
response = self.make_response(rv)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 2097, in make_response
"The view function did not return a valid response. The"
TypeError: The view function did not return a valid response. The function either returned None or ended without a return statement.
versus the logs if I feed it a picture taken with my webcam/phone:
Found 1 faces
printing the blob
Gender: Male
Age Range: (15, 20)
127.0.0.1 - - [12/Mar/2020 11:56:07] "POST /predicWithImage HTTP/1.1" 200 -
As you can see, I am getting a 200 for pictures from the webcam versus a 500 for Google pictures. It's not an issue with the Flask wrapper: I also tested the code directly by feeding a picture from my disk into cv.imread(), and the Caffe model still does not pick it up.
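One quick check along those lines (a hedged debugging sketch, not from the original post; the file path is hypothetical):
import cv2

img = cv2.imread('google_download.jpg')  # hypothetical path to the downloaded picture
if img is None:
    # cv2.imread returns None instead of raising when it cannot decode the file,
    # e.g. a WebP image or an HTML page that was saved with a .jpg extension.
    print('OpenCV could not decode the image')
else:
    print('Loaded image with shape', img.shape)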

PyPDF2.PdfFileReader hangs indefinitely

I'm trying to read this pdf file (https://www.accessdata.fda.gov/cdrh_docs/pdf14/K141693.pdf) and am following these suggestions from SO
Opening pdf urls with pyPdf
I have actually downloaded the file locally and am running the following code
import PyPDF2
pdf_file = open("K141693.pdf")
pdf_read = PyPDF2.PdfFileReader(pdf_file)
but my code hangs indefinitely. I'm running Python 2.7 and here is the stacktrace.
Traceback (most recent call last):
File "", line 1, in <module>
runfile('C:/PoC/pdf_reader.py', wdir='C:/PoC')
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/PoC/pdf_reader.py", line 13, in <module>
pdf_read = PyPDF2.PdfFileReader(pdf_file)
File "C:\ProgramData\Anaconda2\lib\site-packages\PyPDF2\pdf.py", line 1084, in __init__
self.read(stream)
File "C:\ProgramData\Anaconda2\lib\site-packages\PyPDF2\pdf.py", line 1697, in read
line = self.readNextEndLine(stream)
File "C:\ProgramData\Anaconda2\lib\site-packages\PyPDF2\pdf.py", line 1938, in readNextEndLine
x = stream.read(1)
KeyboardInterrupt
I came across another post here, PyPDF2 hangs on processing, but that too doesn't have an answer.
You need to open the file in binary ('rb') mode. (This works in Python 3:)
import PyPDF2
pdf_file = open("K141693.pdf", "rb")
read_pdf = PyPDF2.PdfFileReader(pdf_file)
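As a quick sanity check that the reader now works, something like this (a sketch using the old PyPDF2 1.x API that the question is on, not part of the original answer):
import PyPDF2

with open("K141693.pdf", "rb") as pdf_file:         # binary mode is the key
    read_pdf = PyPDF2.PdfFileReader(pdf_file)
    print(read_pdf.getNumPages())                   # page count
    print(read_pdf.getPage(0).extractText()[:200])  # first 200 chars of page 1 text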

Pandas HDF5 store unicode error on select query

I have unicode data as read from this file:
Mdt,Doccompra,OrgC,Cen,NumP,Criadopor,Dtcriacao,Fornecedor,P,Fun
400,8751215432,2581,,1,MIGRAÇÃO,01.10.2004,75852214,,TD
400,5464282154,9874,,1,MIGRAÇÃO,01.10.2004,78995411,,FO
I have two problems:
When I try to query this unicode data I get a UnicodeDecodeError:
Traceback (most recent call last):
File "<ipython-input-1-4423dceb2b1d>", line 1, in <module>
runfile('C:/Users/u5en/Documents/SAP/Programação/Problema HDF.py', wdir='C:/Users/u5en/Documents/SAP/Programação')
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 580, in runfile
execfile(filename, namespace)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 48, in execfile
exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
File "C:/Users/u5en/Documents/SAP/Programação/Problema HDF.py", line 15, in <module>
store.select("EKKA", "columns=['Mdt', 'Fornecedor']")
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 665, in select
return it.get_result()
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 1359, in get_result
results = self.func(self.start, self.stop, where)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 658, in func
columns=columns, **kwargs)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 3968, in read
if not self.read_axes(where=where, **kwargs):
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 3201, in read_axes
a.convert(values, nan_rep=self.nan_rep, encoding=self.encoding)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 2058, in convert
self.data, nan_rep=nan_rep, encoding=encoding)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 4359, in _unconvert_string_array
data = f(data)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1700, in __call__
return self._vectorize_call(func=func, args=vargs)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\numpy\lib\function_base.py", line 1769, in _vectorize_call
outputs = ufunc(*inputs)
File "C:\Users\u5en\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\io\pytables.py", line 4358, in <lambda>
f = np.vectorize(lambda x: x.decode(encoding), otypes=[np.object])
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 7: unexpected end of data
How can I store and query my unicode data in hdf5?
I have many tables with column names I do not know beforehand and which are not proper PyTables names (NaturalNameWarning). I would like the user to be able to query on these columns, so how can I query them when their names prevent me? I see this used to have no easy fix, so if that is still the case I will just remove the offending characters from the headings.
import csv
import pandas as pd
dados = pd.read_csv("EKKA - Cópia.csv")
print(dados)
store= pd.HDFStore('teste.h5' , encoding="utf-8")
store.append("EKKA", dados, format="table", data_columns=True)
store.select("EKKA", "columns=['Mdt', 'Fornecedor']")
store.close()
Would I be better off doing this in sqlite?
Environment:
Windows 7 64bit
pandas 0.15.2
NumPy 1.9.2
So under Python 2.7 on Windows 7 with pandas 0.15.2, everything worked as expected and no encoding was necessary. However, on Python 3.4 the following worked for me. Apparently some characters are not representable in 'utf-8'; 'latin1' encoding usually solves these issues. Note that I had to read the CSV with this encoding in the first place.
>>> df = pd.read_csv('../../test.csv',encoding='latin1')
>>> df
Mdt Doccompra OrgC Cen NumP Criadopor Dtcriacao Fornecedor P Fun
0 400 8751215432 2581 NaN 1 MIGRAÇ\xc3O 01.10.2004 75852214 NaN TD
1 400 5464282154 9874 NaN 1 MIGRAÇ\xc3O 01.10.2004 78995411 NaN FO
Further, the encoding must be specified not when opening the store, but on the append/put calls
>>> df.to_hdf('test.h5','df',format='table',mode='w',data_columns=True,encoding='latin1')
>>> pd.read_hdf('test.h5','df')
Mdt Doccompra OrgC Cen NumP Criadopor Dtcriacao Fornecedor P Fun
0 400 8751215432 2581 NaN 1 MIGRAÇ\xc3O 01.10.2004 75852214 NaN TD
1 400 5464282154 9874 NaN 1 MIGRAÇ\xc3O 01.10.2004 78995411 NaN FO
Once it is written encoded, it is not necessary to specify the encoding when reading.
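Applied to the HDFStore code from the question, the same idea would look roughly like this (a sketch, assuming the file and key names from the question):
import pandas as pd

# Read the CSV with latin1 so the accented characters survive, then pass the
# encoding on the append call rather than when opening the store.
dados = pd.read_csv("EKKA - Cópia.csv", encoding="latin1")
with pd.HDFStore("teste.h5") as store:
    store.append("EKKA", dados, format="table", data_columns=True, encoding="latin1")
    print(store.select("EKKA", "columns=['Mdt', 'Fornecedor']"))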

Bzr: Can't import Mercurial (Hg) Repository due to UTF-8 error

I have a Mercurial repository that exported successfully to the "fast-import" format. However, when I try to import, it fails with a 'utf8' error:
...
16:02:09 800/2100 commits processed at 199/minute (800)
16:03:52 900/2100 commits processed at 157/minute (900)
ABORT: exception occurred processing commit :901
bzr: ERROR: exceptions.UnicodeDecodeError: 'utf8' codec can't decode byte 0xb9 in position 14: unexpected code byte
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.6/bzrlib/commands.py", line 946, in exception_to_return_code
return the_callable(*args, **kwargs)
File "/usr/lib/pymodules/python2.6/bzrlib/commands.py", line 1150, in run_bzr
ret = run(*run_argv)
File "/usr/lib/pymodules/python2.6/bzrlib/commands.py", line 699, in run_argv_aliases
return self.run(**all_cmd_args)
File "/usr/lib/pymodules/python2.6/bzrlib/commands.py", line 721, in run
return self._operation.run_simple(*args, **kwargs)
File "/usr/lib/pymodules/python2.6/bzrlib/cleanup.py", line 135, in run_simple
self.cleanups, self.func, *args, **kwargs)
File "/usr/lib/pymodules/python2.6/bzrlib/cleanup.py", line 165, in _do_with_cleanups
result = func(*args, **kwargs)
File "/usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/cmds.py", line 314, in run
user_map=user_map)
File "/usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/cmds.py", line 40, in _run
return proc.process(p.iter_commands)
File "/usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/processors/generic_processor.py", line 311, in process
super(GenericProcessor, self)._process(command_iter)
File "/usr/lib/pymodules/python2.6/fastimport/processor.py", line 76, in _process
handler(self, cmd)
File "/usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/processors/generic_processor.py", line 536, in commit_handler
handler.process()
File "/usr/lib/pymodules/python2.6/fastimport/processor.py", line 158, in process
handler(self, fc)
File "/usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/bzr_commit_handler.py", line 890, in modify_handler
self._modify_item(filecmd.path.decode('utf8'), kind,
File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xb9 in position 14: unexpected code byte
I'm running this on Ubuntu. bzr version "2.4.0-1~bazaar1~lucid1" and bzr-fastimport version "0.11.0-1~lucid1".
Any ideas for converting this repository successfully?
This error occurs because there is data in the input stream that is not valid utf8; Mercurial usually stores only utf8 data, but older commits might still contain utf8-invalid data (see https://www.mercurial-scm.org/wiki/EncodingStrategy).
Please file a bug about this issue against bzr-fastimport (https://launchpad.net/bzr-fastimport). It should handle this sort of situation more gracefully; presumably it should warn you that there is utf8-invalid data and then replace the invalid characters.
As a stopgap fix, you could change filecmd.path.decode('utf8') to filecmd.path.decode('utf8', 'replace') in /usr/lib/pymodules/python2.6/bzrlib/plugins/fastimport/bzr_commit_handler.py at line 890.
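To illustrate why 'replace' avoids the crash (a standalone sketch, not part of bzr-fastimport): byte 0xb9 is not valid UTF-8, so a strict decode raises, while the 'replace' error handler substitutes U+FFFD instead.
raw = b'path_\xb9name'           # a path containing a non-UTF-8 byte
try:
    raw.decode('utf8')           # strict decoding raises UnicodeDecodeError
except UnicodeDecodeError as exc:
    print('strict decode fails:', exc)
print(raw.decode('utf8', 'replace'))  # -> 'path_\ufffdname'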